Clinical Outcome, Socioeconomic Status and Psychological Constraints of Patients Undergoing Preimplantation Genetic Testing (PGT) in Northern Greece Background and objectives: Preimplantation genetic testing (PGT) offers patients the possibility of having a healthy baby free of chromosomal or genetic disorders. The present study focuses on the application of PGT for patients located in Northern Greece, investigating their clinical outcomes, their motives, and their overall physical and emotional experience during the treatment, in association with their socioeconomic background. Materials and Methods: Couples who underwent PGT for a monogenic condition (PGT-M, n = 19 cycles) or aneuploidy (PGT-A, n = 22 cycles) participated in the study. Fertilization, implantation, and pregnancy rates were recorded for all cycles. The couples were asked to fill in a questionnaire about the consultation they had received prior to treatment, their sociodemographic information, and the psychological impact PGT had on both the female and male partner. Results: The fertilization, implantation, and ongoing pregnancy rates for the PGT-M and PGT-A cycles were 81.3%, 70.6%, and 52.9%, and 78.2%, 64.3%, and 57.1%, respectively. Females experienced more intense physical pain than their male partners, while psychological pain was encountered by both partners and occasionally at higher levels in males. No typical socioeconomic background of the patients referred for PGT in Northern Greece was observed. Conclusion: PGT is an attractive alternative to prenatal diagnosis (PND), aiming to establish a healthy pregnancy by identifying and avoiding the transfer of chromosomally or genetically abnormal embryos to the uterus. Although the benefits of PGT were well received by all patients undergoing the procedure, psychological pain was evident and especially prominent in patients with a previous affected child or no normal embryos for transfer. Holistic counseling is of utmost importance in order to make patients' experience during their journey to have a healthy baby less emotionally demanding and to help them make the right choices for the future. Introduction In vitro fertilization (IVF) involves the collection of eggs from the ovary, their fertilization in the laboratory, and the transfer of the developing embryos to the woman's uterus [1]. However, despite the huge advancements in the field of assisted reproduction, the success rates of IVF remain low, mainly because a large proportion of embryos are either unable to reach the blastocyst stage and implant, or result in miscarriage. One possible cause for this is the high incidence of chromosomal abnormalities found at the early stages of preimplantation development [2][3][4][5][6][7][8][9]. Preimplantation genetic testing for aneuploidy (PGT-A) has therefore been proposed as a solution that allows the diagnosis of chromosomal abnormalities in embryos before implantation in the uterus [10,11]. PGT-A is offered to patients of advanced maternal age (AMA, ≥35 years), with repeated implantation failure (RIF, >3), repeated miscarriages (RM, >2), or a previous pregnancy with a chromosomally abnormal fetus irrespective of maternal age, and in cases of severe male factor infertility including non-obstructive azoospermia (NOA) and oligoasthenoteratozoospermia (OAT) [9][10][11][12][13][14][15]. The method was first introduced in 1990 as a diagnostic tool to avoid the inheritance of an X-linked disease in male offspring [16].
Since then, the method has been successfully applied for the diagnosis of monogenic disorders (PGT-M), chromosomal translocations (PGT-SR), and late-onset diseases including inherited cancers, and for Human Leukocyte Antigen (HLA) matching to identify HLA-compatible embryos that can save the life of family members (usually an affected child) through umbilical cord blood stem cell transplantation and future bone marrow transplantation [11,[17][18][19]. The most widely used embryo biopsy strategies for PGT are: (a) cleavage stage biopsy on day 3, followed by aspiration of a single blastomere and fresh embryo transfer on day 5, and (b) blastocyst stage biopsy on day 5, followed by aspiration of 5-10 trophectoderm (TE) cells, immediate vitrification of the blastocyst post biopsy, and transfer after warming in a future cycle, in a synchronized, well-prepared endometrium [13,14,16,[20][21][22]. Although the benefits of PGT in the establishment of healthy pregnancies and the avoidance of a potential termination are well received, patients undergoing PGT often demonstrate signs of psychological burden which may persist for up to 3 years after the procedure. Uncertainty of outcome, lack of in-depth information or understanding of the provided information, physical pain, gender differences, socioeconomic background, religious beliefs, ethical or moral perceptions, unrealistic expectations, and cost are crucial factors affecting overall psychology and decision making [23][24][25][26][27][28][29][30][31][32][33][34][35]. The aims of the present study were as follows: 1. To give an insight into the motives and clinical outcomes of couples undergoing PGT in Northern Greece. 2. To investigate the patients' profiles with respect to their socioeconomic background, educational level, and emotional experience during the process, as assessed by questionnaires. Methods The study was performed between 2017 and 2020. Clinical data were collected from the IVF units Biogenesis and Fertilia by Genesis, where couples underwent preimplantation genetic testing for monogenic disorders (PGT-M, 19 cycles) and aneuploidies (PGT-A, 22 cycles). Embryo biopsies were performed according to Chatzimeletiou et al., 2021 and 2005 [13,20]. Embryos were removed from culture on day 3, placed in a drop of calcium/magnesium-free medium (Origio, Malov, Denmark) under oil (Origio, Malov, Denmark), and immobilized by suction using a holding pipette (Humagen-Origio, Malov, Denmark). A hole in the zona was created with the aid of the Saturn laser (RI; Bickland Industrial Park, Falmouth, UK) or the Lycos-DTS laser (Hamilton Thorne, Beverly, MA, USA), and a blastomere was aspirated using a biopsy pipette (COOK Medical, Bloomington, IN, USA). The blastomere was placed in a PCR tube in Ca2+/Mg2+-free phosphate buffered saline (PBS) (Gibco; Grand Island, NY, USA) and analyzed for all chromosomes by array comparative genomic hybridization (array-CGH) for PGT-A, or by PCR for mutation analysis for PGT-M. The analytical validity of the PGT-M results was recorded on the genetic analysis report provided by the genetics lab that performed the analysis and ranged between 95% and 99%. It was also noted that, due to the risk of allele dropout and mosaicism, follow-up of the established pregnancies by invasive prenatal testing is recommended. For the PGT-A results it was also noted that, due to the risk of mosaicism, the accuracy of the results is around 90% based on the current literature, and therefore follow-up by prenatal testing is recommended [4,11].
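The ~90% accuracy figure above is the reason follow-up prenatal testing is advised. As a rough illustration only, the sketch below applies Bayes' rule to a hypothetical screening scenario: the 90% figure is taken from the text, but the equal sensitivity/specificity split and the aneuploidy prevalences are assumptions made for illustration, not data from this study.

```python
# Hedged illustration: why a ~90%-accurate embryo test still warrants
# prenatal follow-up. Sensitivity, specificity, and prevalence are assumed
# for illustration only; they are not reported in this study.

def p_euploid_given_euploid_call(prev_aneuploid: float,
                                 sensitivity: float = 0.90,   # assumed P(aneuploid call | aneuploid)
                                 specificity: float = 0.90):  # assumed P(euploid call | euploid)
    """Bayes' rule: P(truly euploid | embryo called euploid)."""
    p_euploid = 1.0 - prev_aneuploid
    true_negative = specificity * p_euploid                # euploid, correctly called euploid
    false_negative = (1.0 - sensitivity) * prev_aneuploid  # aneuploid, missed by the test
    return true_negative / (true_negative + false_negative)

for prev in (0.3, 0.5, 0.7):  # hypothetical aneuploidy rates, e.g., rising with maternal age
    print(f"prevalence {prev:.0%}: P(euploid | called euploid) = "
          f"{p_euploid_given_euploid_call(prev):.1%}")
```

Even under these assumptions, the residual risk after a euploid call is small but not negligible (about 4.5%, 10%, and 20.6% at the three prevalences above), which is consistent with the recommendation to confirm results by prenatal testing.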
All embryos were subsequently cultured up to day 5 in Irvine (FUJIFILM Irvine Scientific, Santa Ana, CA, USA) or SAGE (Malov, Denmark) culture medium. One to a maximum of two embryos that were diagnosed as normal after PGT-A/M were transferred to the uterus on day 5, and any surplus normal embryos were vitrified for clinical purposes. The patients were also asked about the fate of the embryos that were diagnosed as affected, following PGT-M, or chromosomally abnormal, following PGT-A, and signed consent indicating whether they wished to donate them for research or have them destroyed. A questionnaire was handed out to couples undergoing preimplantation genetic testing (PGT-M, PGT-A). The questionnaire was divided into three parts and addressed issues regarding the process of PGT and patient experience (Parts A and B), as well as socioeconomic and demographic parameters (Part C). Twenty couples returned the answered questionnaires anonymously for evaluation. Data were collected and statistically analyzed, with statistical significance set at 0.05 (p < 0.05). Linear and binary regression analyses were used to explore both the physical and emotional impact of PGT on the patients. Clinical Data The clinical data for PGT-M (n = 19 cycles) and PGT-A (n = 22 cycles) are shown in Tables 1 and 2, respectively, and the couples' indications for PGT-M and PGT-A are shown in Table 3. In the PGT-M group, 2/19 cycles (10.5%) had no normal or carrier embryos for transfer, while in the PGT-A group, 8 cycles (38.1%) had no normal/euploid embryos for transfer. The fertilization, implantation (+hCG/ET), and ongoing pregnancy/ET rates for the PGT-M cycles were 81.3% (174/214), 70.6% (12/17), and 52.9% (9/17), and for the PGT-A cycles, 78.2% (176/225), 64.3% (9/14), and 57.1% (8/14), respectively (Tables 1 and 2). In the PGT-M group, one twin pregnancy was complicated by premature rupture of the amniotic membranes and was lost, and another pregnancy with a fetus normal for cystic fibrosis was terminated after the fetus was diagnosed with trisomy 14 during prenatal diagnosis by chorionic villus sampling. In total, in the PGT-M group, 9 pregnancies went to term (8 singleton and 1 twin), resulting in 10 healthy babies born (8 males and 2 females), while in the PGT-A group, 8 pregnancies went to term (6 singleton and 2 twin), resulting in 10 healthy babies born (6 males and 4 females). Questionnaire Results Twenty couples returned the questionnaires, of which eight had undergone PGT-M (12 cycles) and the remaining twelve PGT-A (15 cycles). In Part A, the couples were asked to answer questions about the purpose of their choice to try assisted reproduction services and to undergo PGT, as well as whether they had received consultation or guidance towards this decision and by whom. The majority of couples stated that their gynecologist suggested IVF with PGT (19/20, 95%). Part B included questions regarding the sources of information on IVF and PGT, and the majority of couples again stated that their gynecologist (private sector) gave them most of the information and referred them to an embryologist/geneticist to provide them with all the technical information regarding embryo biopsy and PGT (18/20, 90%). Moreover, Part B was designed to provide insight into the physical and emotional impact the entire process had on the couples, as well as whether there was a distinction in the levels of pain felt between the two sexes.
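The rates just quoted are simple ratios of the counts given in parentheses (and in Tables 1 and 2). A minimal sketch reproducing that arithmetic from the quoted counts:

```python
# Reproduces the rate arithmetic reported above from the quoted counts.

def rate(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal place, as reported in the text."""
    return round(100 * numerator / denominator, 1)

# PGT-M cycles
print("PGT-M fertilization:", rate(174, 214), "%")          # 81.3%
print("PGT-M implantation (+hCG/ET):", rate(12, 17), "%")   # 70.6%
print("PGT-M ongoing pregnancy/ET:", rate(9, 17), "%")      # 52.9%

# PGT-A cycles
print("PGT-A fertilization:", rate(176, 225), "%")          # 78.2%
print("PGT-A implantation (+hCG/ET):", rate(9, 14), "%")    # 64.3%
print("PGT-A ongoing pregnancy/ET:", rate(8, 14), "%")      # 57.1%
```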
The couples were asked to evaluate the stress inflicted on them during the assisted reproduction experience on a scale from zero (0) to ten (10), both for physical and psychological pain. Physical pain was described as generally low but noticeably increased in females, whereas psychological pain ranged on a higher scale and was, interestingly, increased in males (Figure 1). The binary logistic regression analysis indicated no significant difference in the experience of psychological pain between male and female participants (p = 0.056), whereas a significant difference was recorded in the experience of physical pain (p = 0.042) (Table 4). The sociodemographic determinants of Part C included age, education, origin, employment, religious affiliations, and income (Table 5). More specifically, the mean age of the male participants was 40.9 years, whereas the mean age of the female participants was 38.0 years. Age generally played a major role in the couples' decision-making about opting for preimplantation genetic testing (PGT-A). Regarding the couples' origin, only 5% of the participants were not of Greek origin. In terms of education, the majority had a university degree (48.6%), and a further 14.3% had completed postgraduate studies (Master of Science or Doctor of Philosophy degrees); 34.3% had received secondary education, and a minority of patients (2.9%) had received primary education only. There were variations in the employment status of both male and female participants, the majority of whom were working either in the private (48.6%) or the public sector (25.7%), with only 2.9% unemployed. Finally, religious affiliations were of significant importance: all males and most of the females (90%) had ties to religion.
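The binary logistic regression reported above (p = 0.056 for psychological pain, p = 0.042 for physical pain) is not specified in full in the text. Below is a hedged sketch of one plausible specification, run on simulated 0-10 ratings rather than the study's data, with sex predicting whether a participant reports above-median pain:

```python
# Hedged sketch of a binary logistic regression like the one reported above.
# The data are simulated and the coding (above-median pain vs. sex) is an
# assumption; the paper does not state its exact model specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40                                      # 20 couples -> 40 participants
sex = np.repeat([0.0, 1.0], n // 2)         # 0 = male, 1 = female (assumed coding)
# Simulated ratings: females report higher physical pain, as in Figure 1
physical = np.clip(rng.normal(2 + 2 * sex, 1.5), 0, 10)

y = (physical > np.median(physical)).astype(int)  # above-median pain indicator
X = sm.add_constant(sex)
fit = sm.Logit(y, X).fit(disp=0)
print(fit.pvalues)  # the p-value on the sex coefficient is the quantity of interest
```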
Regarding the couples' annual earned income, 11.4% of the households earned less than 10,000 € per year, 11.4% earned 10,000 €-19,999 €, 34.3% earned 20,000 €-29,999 €, 28.6% earned 30,000 €-39,999 €, and 8.6% earned 40,000 €-49,999 € (Table 5). The couples were asked whether they consented to the donation of embryos identified as abnormal following PGT for research purposes, prior to their destruction; 90% of the couples (18/20) allowed the donation of their chromosomally abnormal and affected embryos for research. Discussion This study investigated both the clinical outcomes and the impact of PGT on couples in Northern Greece, with respect to their socioeconomic background, educational level, and emotional experience during the process, as assessed by questionnaires. The main reason for choosing PGT-M was the prevention of pregnancy termination or the birth of a child that would suffer from a severe genetic disease, and, in the case of HLA matching, the creation of a saviour sibling. For PGT-A, the primary aim was the establishment of a healthy pregnancy and the avoidance of miscarriage or termination of a pregnancy with a chromosomally abnormal fetus. Religion, the patients' ethical stance regarding embryo creation, and the patients' experience of a previous affected/aneuploid pregnancy played a key role in the decision to undergo PGT. These results are in agreement with other studies reporting that patients with a sick child or previous experience of termination were more willing to use the technology in order to have a healthy baby [24][25][26]. Most of the PGT-M cycles performed were for beta-thalassemia (Table 3). In total, 9 PGT-M cycles were performed for beta-thalassemia, of which 6/9 had a positive hCG after ET, but one twin pregnancy was complicated by premature rupture of the amniotic membranes, leading to only five singleton term pregnancies and the birth of five boys. One patient had 2 PGT-M cycles for beta-thalassemia and microdrepanocytosis, of which one had ET with no positive hCG result, and another patient underwent a successful PGT-M cycle for beta-thalassemia and sickle-cell anemia, which led to the birth of a boy. Beta-thalassemia is unevenly distributed among different geographical regions, with high prevalence in the Mediterranean countries, particularly Greece, hence the alternative name Mediterranean anaemia [36]. Beta-thalassemia has been associated with Plasmodium falciparum, responsible for the deadly form of malaria. Carriers of the mutated hemoglobin are believed to show resistance to the plasmodium through cellular and/or immune-related mechanisms [37]. As a result, during past high incidences of malaria in the Mediterranean, carriers survived and prevailed in the regions, transmitting the gene through the generations. One couple in our study underwent two PGT-M cycles with HLA typing for chronic granulomatous disease (CGD). The couple had a boy affected with CGD, inherited from the mother, who was a carrier of the mutation c.674 + 4A > T in the CYBB gene on chromosome X; the boy needed stem cell transplantation, but neither parent was a compatible donor. As a result, the couple decided to undergo PGT-M with HLA matching in order to identify CGD-unaffected, HLA-matched embryos for transfer. Unfortunately, none of the ten biopsied embryos in this first cycle were HLA-matched with the affected child. Five were affected, three were carriers, and two were normal for CGD.
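The empty first cycle is unsurprising under standard Mendelian expectations, which are worth spelling out (a textbook calculation, not a result from this study): for a carrier mother of an X-linked recessive mutation, about 1/4 of embryos are expected to be affected (half of the males), so 3/4 are unaffected or carriers, and, independently, each embryo has a 1/4 chance of being HLA-identical to the affected sibling:

```latex
P(\text{suitable}) = P(\text{HLA-matched}) \times P(\text{not affected})
                   = \tfrac{1}{4} \times \tfrac{3}{4} = \tfrac{3}{16} \approx 18.8\%
```

On this expectation, a cohort of 10 biopsied embryos yields on average only 10 × 3/16 ≈ 1.9 suitable embryos, and the chance that none of the 10 qualifies is (13/16)^10 ≈ 12.5%; the second cycle, with 6 HLA-matched non-affected embryos out of 14 (expected ≈ 2.6), fell on the favorable side of the same expectation.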
The couple decided to vitrify the two normal and three carrier embryos and proceed to a second cycle in the hope that HLA-matched embryos would be available this time to save the life of their son. Indeed, in the second cycle 14 embryos were biopsied, and the results showed four normal HLA-matched, two carrier HLA-matched, five carrier not HLA-matched, and three affected not HLA-matched embryos. One normal HLA-matched embryo was transferred to the mother's uterus, leading to the establishment of a normal pregnancy and the birth of a healthy baby girl. All remaining HLA-matched normal or carrier embryos and the not HLA-matched normal or carrier embryos were vitrified. Cord blood stem cells were collected at birth, and a year later cord blood stem cell transplantation and bone marrow stem cell transplantation from the HLA-matched saviour sister saved the life of her affected brother. Embryo cryopreservation is widely used in the European Union (EU), with minor exceptions such as Poland, where it is still under legal debate, and Germany, where cryopreservation is legal at the 2PN stage and prohibited at the cleavage/blastocyst stage unless there is a medical emergency. In Greece, embryos can remain cryopreserved for 5 years, with the possibility of extension for another 5 years, during which time patients have the option to transfer them should they wish to have another child. If they do not wish to use them for their own clinical purposes, the patients may consent to donate their cryopreserved embryos to other childless couples or for research purposes, or sign for their destruction. Complex ethical and legal issues may also arise related to the disposition of cryopreserved gametes and embryos in the event of divorce or the death of one of the prospective parents. According to the legislation in Greece, both men and women can use their own cryopreserved gametes after divorce without the need for the husband or wife to sign a consent form. However, embryos can only be used after the informed, signed consent of both spouses. The same applies to use after death. In the event that one spouse does not sign, the embryos can remain cryopreserved and then either be used for clinical purposes, destroyed, or donated for research, depending on the decision/act of the National Committee of Assisted Reproduction [Law No. 4958, ΦΕΚ 21/7/2022]. Reflecting society's difficulties in defining the moral, ethical, and legal status of human embryos, such cases can become legal battlegrounds. Of special interest is a PGT-A case of a woman with repeated miscarriages who developed Asherman syndrome. Although chromosomally normal embryos were identified after PGT-A, pregnancy could not be sustained due to the affected endometrium. Therefore, this patient chose to use a surrogate mother, who successfully carried the pregnancy and gave birth to a healthy baby boy. Gestational surrogacy can be commercial or non-commercial (altruistic), depending on whether or not the surrogate mother is entitled to a financial profit from the gestation. The Enforcement of Medically Assisted Reproduction (Law 3305/2005) in Greece stipulates altruistic surrogacy as a part of assisted reproduction, but legislation on the matter differs among the rest of the European countries.
In Finland, Norway, Sweden, Austria, France, Germany, Iceland, Bulgaria, Hungary, Lithuania, Serbia, Slovenia, Slovakia, Spain, Italy, and Switzerland, all altruistic or commercial surrogacy arrangements are prohibited (European Society of Human Reproduction and Embryology), in contrast to Georgia, Ukraine, and Russia. Commercial surrogacy alone is illegal in Belgium, the Netherlands, Portugal, and the United Kingdom. Surrogacy, in both its forms, has been met with debate; commercial surrogacy raises ethical concerns since it objectifies the surrogate mother for the fulfilment of a cause that does not contribute to her own well-being [38]. Additionally, the cost of non-altruistic surrogacy makes it a privilege of the minority that can afford it, discriminating against aspiring parents with lower financial means [39]. As far as altruistic surrogacy is concerned, it has been surrounded by controversy due to the lack of insight regarding the surrogate's true motivation to selflessly give birth to another person's offspring; it is believed that some women opt for surrogacy to deal with personal guilt over terminating a pregnancy in the past or placing their child up for adoption [40]. Finally, opponents of surrogacy in all its forms argue that the biological and psychological bond formed during pregnancy between the surrogate and the baby eventually has to be broken, increasing the chances of depression or other emotional burdens in the postpartum period for the gestational mother [41]. The variations in the social, educational, and economic background of the couples included in the present study indicate that there is no typical profile for aspiring parents resorting to IVF and PGT; their common ground, despite religious affiliations, income-related hesitations, and the ever-present possibility of treatment failure, is their strong motivation for childbearing. One exemplary case is a couple that underwent one cycle of PGT-A for sex selection due to an X-linked disease. The results showed one normal female and one normal male embryo, while all remaining embryos were aneuploid. The couple decided to transfer the male embryo together with the female one, accepting the risk that it had a 50% chance of being affected. Prenatal diagnosis by chorionic villus sampling confirmed that the male fetus was unaffected, and healthy twins (a girl and a boy) were born. The contribution of PGT-M to preventing the transmission of severe and potentially fatal conditions and avoiding miscarriages and implantation failure is positively acknowledged, leaving little moral controversy over the way conditions are classified as severe enough to be tested by PGT. In order for a genetic disorder to be classified as severe, simple identification of the mutation alone is inadequate. Other determining factors, including impact on health, degree of penetrance (the potential for a mutated allele in the genotype to be expressed in the phenotype), therapy, heritability, age of onset, and rate of progression, should also be considered [11,26,27]. In addition, apart from the controversy over the determinants that classify a disorder as severe enough for an affected embryo to be deemed unfit for implantation, other ethical concerns arise. For instance, there is a possibility that individuals affected by a mutation accountable for a condition with incomplete penetrance eventually develop no symptoms. This possibility may lead to moral complications regarding the disposal of clinically affected embryos.
Preimplantation genetic diagnosis can also be applied to couples at high risk of developing multifactorial or late-onset diseases. In these cases, the method comes close to the limits of eugenics, as a genetic predisposition does not determine whether the disease will ultimately manifest. Concerns arise about the decision parents will make, as they face the dilemma of preventing the birth of such an embryo, which does not suffer and may never become sick, or choosing to carry to term a child with such a predisposition and raise it accordingly, knowing this future danger. According to the opinions opposing PGT with HLA typing, the ethical complexity of the procedure concerns the establishment of a "designer baby" era and the instrumentalization of the saviour child. Although the core purpose of PGT is to diminish the chance that a child affected by a severe genetic disease is born, in the case of HLA matching, out of the total number of normal embryos created through IVF, one with the desired characteristic, HLA compatibility, is selected over every other embryo biopsied and set aside as healthy but not HLA-matched. In this particular respect, the saviour child born has been used as a means, since its creation originally served a purpose. On the other hand, the nature of the purpose pursued is such that ultimately the management of the donor serves human values in the best way, since from the beginning of their life they will have been credited with the salvation of a fellow human being [17,18]. In general, PGT has been met with intense debate. More specifically, among the numerical chromosomal abnormalities diagnosed by PGT is trisomy 21 (Down syndrome), contributing to the ongoing discussion regarding discrimination against people born with this particular syndrome and the potential use of PGT as a eugenics tool, as well as presenting future parents with the dilemma of whether to proceed with the implantation of an aneuploid embryo or discard it. Moreover, the accuracy of the biopsy itself in properly determining embryo ploidy has been challenged, since studies have shown that mosaicism exists at all stages of preimplantation development, due to the lack of cell cycle checkpoints, leading to spindle and nuclear abnormalities [4,5,14,42,43]. It is important to note that low-medium mosaicism in the trophectoderm mostly arises after TE and ICM differentiation, and such embryos have developmental potential equivalent to that of fully euploid ones [44][45][46][47]. Mosaic embryos have the ability to implant, but it is uncertain whether they can go to full term. Mosaicism is a major factor affecting ongoing pregnancy rates, but other factors, including thrombophilia, infections that can lead to premature rupture of the amniotic membranes, and immunological causes, may also lead to miscarriages [4,8,20,22,42,47]. In the case of embryos analyzed only by PGT-M for mutations, the possibility that they also carry aneuploidies cannot be ruled out, and this may be a contributing factor affecting ongoing pregnancy rates. In the unfortunate event that a PGT-M pregnancy is complicated by an additional aneuploidy, the pregnancy may end in miscarriage or be terminated. Simultaneous analysis of both mutations and aneuploidies may be an attractive option, but the increased cost in that case may be prohibitive. Currently in Greece, the cost of PGT-A is approximately 1900 euros for up to eight embryos, and that of PGT-M is 2000 euros, including mutation analysis in the patients' blood and genetic counseling.
An important finding of the present study was the stress imposed on the couples, which overall, on a scale of 1 to 10, did not exceed 5; PGT was physically experienced as notably more painful by the female partners, whereas the process had a higher psychological impact on the male ones. While there is a rich literature regarding the emotional and physical stressors of IVF, such as anxiety, stress, and depression, research specifically on PGT is scarce. The technical steps of the two processes are similar [28][29][30][31][32][33][34][35]. There is, however, a major difference regarding the added emotional burden of couples undergoing PGT, related to family histories of genetic disorders [29]. One study focusing specifically on PGT [28] surveyed 134 patients referred for PGT using questionnaires; 55 deemed the experience extremely stressful, and out of 20 patients who had undergone both prenatal diagnosis and PGT, 8 considered PGT less painful than prenatal diagnosis. The first consultation prior to a PGT cycle and the anticipation of a pregnancy result post embryo transfer were considered the most stressful stages of PGT treatment. Findings from other studies showed that oocyte retrieval can be a highly distressing period for women undergoing IVF in general [31,48]. A prospective study evaluating fluctuations in anxiety and distress levels in Australian women undergoing PGT [29] found that anxiety increased dramatically post embryo transfer and after the pregnancy test, with insignificant divergence between the distress experienced by women with a positive result and those with a negative one. Another aspect investigated by Lavery et al. [28] was the impact on the couples' relationships; one third declared that it affected the relationship with their partner negatively, and one third indicated that it was a bonding experience for both of them. These findings suggest that PGT treatment can be experienced in different ways at different stages but can be deemed an overall stressful process. Patients with no normal embryos for transfer after PGT face the difficult situation of coming to terms with remaining childless or exploring other routes to parenthood, including gamete donation or adoption. On top of that, the fate of the rejected embryos following PGT, whether they are simply discarded or donated for research, raises further ethical dilemmas, especially in cases of HLA matching, in which genetically normal but not HLA-compatible embryos are not chosen for transfer as they cannot save the life of their sibling. However, those embryos can be a valuable source for embryonic stem cell research should the patients not keep them for clinical purposes or donate them to a childless couple. The bioethics and morality of human embryo research raise various concerns [49,50]. In our study, the majority of patients (90%) were keen to donate their chromosomally or genetically abnormal embryos for research instead of simply discarding them. The sample of patients referred for PGT in Northern Greece in the present study is limited, both numerically and geographically, thus not allowing for a generalized conclusion. However, the diversity of the cases studied in such a small sample provides useful insight into the background of the couples choosing PGT. Conclusions PGT is an attractive alternative to prenatal diagnosis (PND), aiming to establish a healthy pregnancy by identifying and avoiding the transfer of chromosomally or genetically abnormal embryos to the uterus.
The fertilization, implantation, and ongoing pregnancy rates for the PGT-M and PGT-A cycles in the present study were 81.3%, 70.6%, and 52.9%, and 78.2%, 64.3%, and 57.1%, respectively. Although the benefits of PGT were well received by all patients undergoing the procedure, psychological pain was evident and especially prominent in patients with a previous affected child or no normal embryos for transfer. Females experienced more intense physical pain than their male partners, while psychological pain was encountered by both partners and occasionally at higher levels in males. No typical socioeconomic background of the patients referred for PGT in Northern Greece was observed. Holistic counseling is of utmost importance in order to make patients' experience during their journey to have a healthy baby less emotionally demanding and to help them make the right choices for the future. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
Factors Affecting Circular Economy Promotion in Indonesia: The Revival of Agribusiness Partnership The circular economy concept has long been practiced in Indonesia, particularly by large and medium industries, but recent economic development planning has paid it little attention. Paper mills and large plantation companies are among the economic enterprises implementing this concept. At a smaller economic scale, in agriculture, cooperation between large companies and smallholder farming has proceeded well, delivering economic benefits, improving environmental quality, and promising competitiveness. Communities have in fact already applied the circular economy concept in the integrated farming systems they practice, but its development remains unsatisfactory. Institution-related factors play a major role in promoting the circular economy concept in rural areas and determine the success of partnership programs up to a certain level. Through partnership-model cooperation as an embodiment of the circular economy concept, the three economic pillars, namely government institutions, the private sector, and the community, must support one another and participate according to their respective capacities, contributing to regional economic development. The government as facilitator and regulator, private companies as drivers of enterprise, and the community as suppliers of raw materials or small-scale entrepreneurs must interact, cooperate, and participate in economic development programs. This paper describes the factors affecting the circular economy concept in accelerating the revival of the people's economy through agribusiness partnership. INTRODUCTION Indonesia is well known as an agricultural country, with some 51.6 million ha of agricultural land that constitutes 70% of the total area. As of 2004 (BPS-Statistics Indonesia, 2005), the land area for estates was the largest, at about 18.3 million ha (25.56% of the total area), followed by arable dry land, gardens, barren land, and shifting cultivation land at around 15.6 million ha (21.73%), woods at 10.4 million ha (14.46%), and wetland at about 8.4 million ha (11.71%). The smallest areas were land used for brackish-water and freshwater ponds, which occupied 0.4 million ha (0.70%) and 0.3 million ha (0.35%), respectively. The remaining 18.3 million ha consisted of fallow land, house compounds and surroundings, and grassland. The major food crops cultivated by farmers are paddy, corn, cassava, sweet potato, peanut, and soybean. Except for the main crop, paddy, the major food crops are known as palawija (secondary crops). Subject to the availability of water for irrigation, paddy is cultivated on both wet land and dry land. In an official report published by BPS-Statistics Indonesia (2005), the harvested area of paddy in 2004 was 11.91 million ha, an increase of 3.66% compared to the area in 2003; the wet land harvested area increased by 3.81% and the dry land area by 2.23%. Total production in 2004 was 54.06 million tons of dry unhusked paddy, an increase of 3.69% compared to the 2003 production (52.14 million tons). In 2003, the yield of dry unhusked paddy was 4.538 tons/ha, which a year later had increased by 0.04% to 4.540 tons/ha (2004). The harvested area of corn in 2004 was reported at 3.35 million ha, a decrease of 0.36% (about 10 thousand ha) compared to that in 2003. The harvested areas of soybean and peanut increased in 2004 (by 6.87% and 5.99%, respectively).
However, the harvested areas of cassava and sweet potato decreased by around 0.38% and 7.29%, respectively, compared to 2003. The production of corn, soybean, peanut, and cassava in 2004 increased by 2.54%, 7.40%, 6.83%, and 4.00%, respectively, whereas sweet potato production decreased by 5.13% compared to 2003. Horticultural crops also fluctuated in terms of harvested area and production. The harvested area of vegetables, such as spring onion, shallot, potato, cabbage, mustard green, and carrot, was 318.3 thousand ha in 2004, a decrease of 1.20% compared to that in 2003 (322.1 thousand ha). However, the harvested areas of shallot and spring onion in 2003 increased by 7.6% and 0.45%, respectively. Vegetable production in 2003 was 4.3 million tons, of which the largest amounts came from cabbage (about 1.2 million tons) and shallot (around 1.0 million tons). The productivity of cabbage was 19.6 tons/ha, while that of shallot was 8.5 tons/ha (in 2004). The main fruit crops in Indonesia are banana, orange, and mango. In 2004, banana production was 4.2 million tons, or 35.29% of total national fruit production. Orange and mango production were about the same (1.5 million tons each, or about 12.93% and 12.90%, respectively). Java is the main producer of fruits in Indonesia, with the largest contribution coming from West Java Province (25.59%). Lampung and North Sumatera provinces are among the main production centers outside Java. Estate crops are grown by large-scale estates (private or state-owned plantations) and smallholding estates. The most popular estate crops are palm oil, rubber, coffee, tea, cocoa, coconut, and sugarcane. The planted area of large-scale estates for several commodities remained unchanged from 2003 to 2004. Commodities experiencing increases in planted area were palm oil (1.0%) and tea (1.37%), and those with increases in production were rubber (1.01%), coconut (1.03%), palm oil (2.19%), coffee (0.89%), and tea (0.67%). However, cocoa production decreased slightly, by 1.05%. The planted area of sugarcane increased by 0.80% (from 364.4 thousand ha in 2003 to 367.3 thousand ha in 2004), and its production also increased, by 11.85%, over the same period. The planted area of smallholding estates for almost all commodities remained unchanged from 2003 to 2004. A significant increase in planted area occurred for coffee, from 1.328 million ha in 2003 to 1.344 million ha in 2004 (about 1.2%), as did its production, from 0.658 million tons to 0.671 million tons over the same period (an increase of around 2.0%). In contrast, the planted area of rubber decreased by 0.98% (2003 to 2004), although its production increased by about 3.95% (from 1.387 million tons in 2003 to 1.442 million tons in 2004). Following a similar trend, the planted area of tea decreased by 0.50% while its production increased by 11.5%. The planted areas of other annual crops increased slightly, as did their production. The livestock population, in general, showed an increasing trend. In 2004, the populations of large ruminants, such as dairy cows, cattle, water buffalo, and horses, increased by 2.11%, 1.90%, 4.00%, and 4.62%, respectively, compared to 2003. Smaller livestock, such as goats, sheep, and swine, were reported to have increased in 2004 by 5.65%, 5.56%, and 6.80%, respectively, compared to 2003.
Similarly, the populations of poultry, such as layers, broilers, and ducks, increased by 1.8%, 5.59%, and 4.92%, respectively, in 2004 compared to 2003. The population of domestic chicken, unfortunately, decreased by 1.98% during the same period, particularly due to the spread of avian influenza in Indonesia. The fishery sub-sector also reflects a promising trend. In 2002, marine fishery production was recorded at 4.1 million tons, whereas inland fishery produced 1.6 million tons. In 2003, marine fishery production increased to 5.6 million tons, and with improved fishing techniques along with other supporting policy instruments, production is expected to keep increasing in the following years. These figures present the recent development of selected crops, including high-value commodities in the estate crops, livestock, and fishery sub-sectors, and they reflect the importance of the agricultural sector in the overall Indonesian economy. The sector's annual growth rate has averaged 3% over 25 years, and the sector has always been very important in supporting Indonesia's economic development. Five strategies to improve farmers' empowerment are listed in the agricultural policy mission (Solahuddin, 1999): improve farm management, develop farmers' groups/cooperation, develop market-oriented marketing efficiency, promote mutual business partnerships, and provide production inputs and policy instruments to encourage better farm performance. This mission has, to some extent, achieved its targets, although the magnitude of the achievements will always be debatable. Such achievements depend heavily on how closely various local institutions manage their respective mandates and how intensive their coordination is in program implementation. Food security is one of the most important goals to achieve. Food, for the majority of rural dwellers, is identical with rice. Its availability implies three different aspects, as mentioned by Thomson and Metz (1997): availability, stability, and accessibility. Available in the sense that food is equally distributed, stable in that food is available and reliable at all times, and accessible at stipulated but achievable prices. Wirakartakusumah (1999) indicates that, based on the availability of potential resources, the policy agenda for food security in Indonesia includes: (a) improve food availability and security, (b) diversify food consumption, (c) improve food safety, (d) institutionalize development, and (e) improve nutritional status. In our view, farming systems enhancement is a way to address part of this agenda, and agribusiness partnership is one of a number of operational modes for achieving a certain level of food security. The objective of this paper is to discuss the institutional factors affecting mutual partnerships between small-scale farmers, the private sector, and the government to promote the circular economy. More specifically, this paper is intended: (a) to elaborate the existing agricultural situation and its promotion through the circular economy and (b) to provide outstanding suggestions for agribusiness partnership promotion in Indonesia. FARMING SYSTEMS IN INDONESIA The basic problem of farming systems in Indonesia is inseparable from the size of landholdings or land employed. In Java, the most populous and yet most fertile island in Indonesia, the optimistic average landholding size is 0.41 ha per household, and 0.83 ha outside Java (Widodo, 2002).
Other research gives the current average landholding size in Java as 0.25 ha per household (Undang, 2003). Such a small farm size practically cannot provide sufficient income to cover the basic needs of all family members. The pressure of land conversion to non-agricultural purposes also threatens this farm size, particularly in Java. Sumaryanto et al. (1996) report that the average magnitude of such conversion in Java from 1990 to 2000 was 22,500 ha/year. Nevertheless, efforts to maintain land productivity in Java and to expand cultivated area outside Java have resulted in increasing production of several crops, particularly food crops. Since demand for food crops is also increasing with population growth and changing meal patterns among urban people, it is obvious that a production breakthrough has to be designed; this would also reduce dependency on imports. Farmers' long experience with integrated farming systems has been proven to yield synergy effects between enterprises, i.e., between food and horticultural crops and cattle raising (Ilham and Saktyanu, 1999). In this regard, the circular economy has been providing mutual benefits to farmers. However, the performance of integrated farming systems as a recycling economic activity has not been adequately enhanced, even with the new farming systems approach promoting improved production methods for small farmers. Factors that stimulated farming systems research include the inability of small farmers with limited resources to adopt improved technology; the need to reduce risk, increase productivity, promote employment, and strengthen on-farm income; and the need for sustainable resources (Adnyana, 2000). In this context, given a small landholding, when cattle, small ruminants, or poultry are included in the system, the carrying capacity of the land can easily be determined, and accordingly the amount of agricultural waste that can be utilized as feed or as green manure, a source of organic fertilizer (a rough calculation is sketched below). The introduction of improved farming systems at the farm household level may lead farmers to a certain level of achievement when optimally applied. The FAO farming systems development model presented in Figure 1 could be considered a model for developing integrated farming systems, and hence the circular economy, in anticipation of business partnership activities. Given a number of constraints and challenges in the development of agriculture in Indonesia, agricultural development should be directed at achieving optimal utilization of synergistic relationships among the subsystems, developing sustainable farming systems with fewer external inputs, prioritizing and developing long-lasting participation of farm households, and improving organic farming, biological farming, ecological agriculture, low-external-input agriculture, biodynamic agriculture, and regenerative agriculture (Adnyana, ibid.). This direction would certainly promote the circular economy to a higher level of achievement. However, farming systems research and development is very important for recommending which commodities to develop. The selection of target areas for farming systems research and development is, according to Shaner et al. (1981), very critical, given the possibility of further farm improvement in the selected areas and the diffusion of their technologies into other areas.
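To make the carrying-capacity arithmetic mentioned above concrete, the following is a minimal sketch; every coefficient in it is a hypothetical placeholder, not a figure from this paper, and real values depend on crop, breed, climate, and local practice:

```python
# Illustrative sketch of the carrying-capacity arithmetic described above.
# All coefficients are hypothetical placeholders, not data from this paper.

def cattle_carrying_capacity(land_ha: float,
                             residue_t_per_ha: float = 6.0,  # assumed crop residue yield (t/ha/yr)
                             usable_fraction: float = 0.4,   # assumed share usable as feed
                             feed_t_per_head: float = 2.3):  # assumed dry-matter need per head per year
    """Head of cattle a holding's crop residues could feed for a year."""
    usable_feed = land_ha * residue_t_per_ha * usable_fraction
    return usable_feed / feed_t_per_head

def manure_t_per_year(heads: float,
                      manure_t_per_head: float = 9.0):       # assumed fresh manure per head per year
    """Manure available annually as organic fertilizer input."""
    return heads * manure_t_per_head

if __name__ == "__main__":
    ha = 0.41  # average Java landholding cited in the text (Widodo, 2002)
    heads = cattle_carrying_capacity(ha)
    print(f"{ha} ha supports ~{heads:.1f} head of cattle,")
    print(f"yielding ~{manure_t_per_year(heads):.1f} t of manure per year for composting")
```

With these placeholder coefficients, a 0.41 ha holding supports well under one head of cattle, which is consistent with the text's point that small landholdings tightly bound the scale of crop-livestock integration.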
Crosson (1994), in fact, delivered a warning about the inadequacy of agricultural knowledge and, hence, the limited capability to expand the supply of knowledge about agricultural production. Demand for food could rise to 2.5 to 3.0 times the present level by 2050. Strengthening human resources at all levels through the supply of agricultural knowledge would be one response to this insight. Stakeholders are highly expected to take appropriate policy actions in this situation. INFLUENCING FACTORS With respect to the direction of farming systems, government policy in agricultural development aims particularly at steadily increasing food crop production to meet domestic demand. The government's role is instrumental in encouraging, regulating, and enforcing laws to achieve a certain level of advantage in favor of farmers. The private sector, on the other hand, is economically oriented but very flexible in determining its enterprises' future development. The government and the private sector are two symbiotic institutions that can collaborate to enhance the performance of farming systems (farmers and farmers' groups), thereby encouraging good governance, promoting a sustainable and friendly environment, and improving farmers' incomes. Uphoff (1999) elaborates local institutions as part of the public sector and the private sector (Table 1). The public sector operates with authority behind its decisions; it can mobilize considerable resources through its ability to tax. The private sector operates according to individual desires, and individual accountability controls private resources. Participatory institutions lie outside the public sector and differ from the private sector; they perhaps include farmers' groups as informal organizations or local-level institutions. The "three sectors" have their individual roles but perform synergistically when they become collaborating institutions for certain goals. Regarding people's participation in program development in more detail, Messer and Townsley (2003) indicate that the community, the households within the community and their livelihood strategies, and the institutions found at all levels are the core elements of the development process. The relationships between these elements are very important and provide the basis on which various development programs can be introduced. An institution normally has a structure to distribute tasks and accountability and to coordinate its functions. With planning and goals at hand, the implementation process creates interactions that produce a certain output. The existing environment is an external factor, beyond the institution's control. However, a friendly environment can be achieved through harmonious interaction among the components of these functions to create a specific output. This description reflects the roles of the three pillars: the government, the private sector, and the farmers, who are in a strategic position to build partnership activities adjusted to their respective roles. Theoretically, the factors embedded in institutions include cooperation and coordination, rules and regulations, rights, penalty/punishment, negotiation, and communication/management procedures. Meanwhile, the institutional attributes bound to these factors are rules of representation, jurisdictional boundaries, and the level of institutions. These factors, along with their attributes, sufficiently cover the role of institutions in farm activities.
In the implementation stage, however, the policy instruments issued by the government (at the national or local level), the willingness of the private sector to build good relations with its surroundings, and farmers' proactive participation in development are the most influential factors in achieving a partnership revival with respect to circular economy promotion. AGRICULTURAL BUSINESS PARTNERSHIP Large-scale enterprises have been applying the circular economy for their own benefit. The pulp and paper industries have been using their waste product (bark) to generate energy for processing activities. Some of the companies have even generated surplus energy that they sell to other parties, providing additional income. In Indonesia, the local electricity company has been using bark-generated energy to produce electricity. This is a real example of how the circular economy is promoted among large industries. Another example is the use of palm shells, fibers, and empty fruit bunches as fuels, which provides an effective avenue for disposing of the processing residues from palm oil milling and also generates additional income for the company. The use of these waste materials has helped reduce environmental pollution and, at the same time, thanks to the excess energy produced, other companies can reduce their use of fossil fuel and benefit further from biomass energy. Since large-scale companies also interact with the local community, caring for local people should not be a problem. In fact, a company located in a rural area should lend a hand to help the rural poor. Building partnerships between the company and the local people can create harmonious and sustainable relations. With the flexibility, skill, and capital a company has, rural people, who are mostly engaged in the agricultural sector, can benefit from partnership/collaborative activities between the company and the farmers. When a company takes the initiative to build a partnership with local people, the local government should support the effort with guidance and other related activities, including the necessary paperwork (legal documents, activity approvals, rules, training, etc.). Any company is welcome to such collaboration as long as the initiative brings the farmers to a more advantageous level or a more competitive farming system. Capital is essential in launching programs. Financial institutions, as sources of funds to promote off-farm and on-farm activities (agribusiness and farming systems) and their infrastructure, are considered in backward and forward linkage programs to achieve food security and improve farmers' incomes. Switching the mindset from resource-based to knowledge-based development is taken as an appropriate direction for agribusiness revival. Mastur (2005) indicates the central role of financial institutions in supporting off-farm, on-farm, and infrastructure facilities to develop agriculture-based industries in Indonesia. Large-scale industries could obtain the necessary investment for agribusiness partnership programs from financial institutions (banks), with the intervention of related institutions through operational policy instruments. Based on the earlier description of farming systems' goals, this partnership revival should be increasingly aware of environmental conditions.
Organic farming could be one option: quite attractive in terms of crop cultivation and of great interest given the expected higher market price of such products. On organic agriculture, UN-ESCAP (2002) reports that it can help raise the productivity of low-input agricultural systems. New market economies have the prospect of discovering something new through a combination of indigenous knowledge and modern science, creating innovation in rural areas, and contributing to environmental sustainability. However, Sumarno (2006) warns that the organic farming trend will not change the agricultural economic structure, and that such programs may only provide significant margins for large-scale entrepreneurs. Following this advice, collaboration between large-scale industries as patrons (rather than large-scale agricultural companies) and small-scale farmers as clients would be encouraged. For further ideas about organic farming, the findings presented in Table 2 can be reviewed for further action. Two cases briefly elaborated below are provided as examples of how to revive agribusiness partnerships. Large industry is encouraged to embrace local people; with local participation, their activities are expected to result in a profit-oriented agricultural program. Case 1: Thailand To obtain more information about the partnership pattern between a large-scale company (a large cement producer) and the smallholder farmers surrounding the factory, a visit to a project site was arranged in October 2004. The project, named Green Community (GC), is located in Saraburi Province, about 120 km north of Bangkok. A large cement factory and a farmers' group consisting of 26 members from four villages near the factory have been tied in partnership farming activities since 2003. Within one year of operation, the farmers had shifted from their previously cultivated food crops to growing organic vegetables under the supervision of designated company personnel. Led by a geologist and an engineer, the project has so far successfully cultivated and harvested new kinds of vegetables. The farmers have been enjoying the profits, and an expansion of the cultivation area has been initiated to include other farmers. The company provides guidance in coordination with the local government, including on-farm techniques and marketing activities. The GC program is an example of a successful partnership relation. The company cares for its surroundings and introduces new technology to the farmers. The company initiated the partnership, provides seed capital (non-repayable but revolving), finds buyers of fresh vegetables (hotels, restaurants, supermarkets) to market the produce, and negotiates prices in the farmers' favor. The farmers were interested because of the increasing cost of crop production on less fertile land. They proactively participate in related activities through collective decisions and always maintain togetherness. The government was invited to participate and, of course, welcomed such a sympathetic approach from a private company. Training and knowledge were delivered to the farmers to improve the quality of the vegetables they produce. Case 2: Indonesia Indonesia has been experiencing an agricultural partnership pattern over the past three decades.
The nationwide program named Nucleus Estate Smallholders (NES) involves the government (at national or local level), large-scale estate plantations (state-owned or private companies) and smallholder farmers, who form a patron-client relation within estate farming systems. The objective is very clear: to improve smallholder farmers' performance while remaining environmentally friendly. Although the project has lasted for about 30 years, the smallholder farmers (clients) have been confronted with various difficulties. Perhaps the supervision functions (on the government side) have not worked optimally, the large-scale estates (patrons) are reluctant to support smallholders because of a different orientation, and the smallholder farmers are left behind with a lack of knowledge and improvement. It is sad to say that the project has not performed well despite the constant partnership. Further development has in fact recently been initiated to enhance the performance of the NES program; its evaluation and redesign are expected to yield an improved outcome and a better business partnership among the stakeholders.

CONCLUSIONS AND OUTSTANDING SUGGESTIONS

The concept of a circular economy has been treated as integrated farming system activities in the context of a recycling economy: one facility's waste is another facility's input. Farming systems are directed to use fewer external inputs for sustainability, to prioritize and develop the long-lasting participation of farm households, and to improve organic farming, biological farming, ecological agriculture, low-external-input agriculture, biodynamic agriculture and regenerative agriculture. Organic farming has been widely practiced by farmers (individually or in groups), but there has been little support from government and the private sector to enhance it. The private sector is in a strategic position to take the initiative in helping small-scale farmers, with government support and facilities, towards a better standard of living and a friendlier environment. The three development pillars, i.e., government, the private sector and farmers/rural communities, are considered the key factors (with their respective roles) in accelerating business partnerships for sustainable regional development. The government's strong promotion of the regional economy through enhanced farming systems is encouraged. Agricultural development in the context of a circular economy will promote good food quality, health safety and environmental friendliness.
Farming systems research and a market-oriented approach are suggested to improve farming techniques and the associated economic activities. Private enterprises' initiative to revive mutually synergistic partnerships in the interest of farmers' standard of living is to be appreciated for its cultural, social and economic mutual benefits. In the implementation stage, the private sector is expected to take the leading role (planning, capital, marketing, direction, guidance, etc.), while the local government facilitates the initiative with the necessary actions (farming techniques, extension, administrative matters, rules, rights, promotion, etc.). The farmers will play an important role in making the program a success (proactivity, participation, collective decisions, learning by doing, etc.). The success of the current farmers is a demonstration effect for others, and this snowball pattern will enhance farming system performance, hence the success of the circular economy not only at the regional level but perhaps nationwide.
2018-12-04T16:54:21.232Z
2016-08-18T00:00:00.000
{ "year": 2006, "sha1": "b394f2705e6319a25551d8c574199470f731cb02", "oa_license": "CCBYNCSA", "oa_url": "http://ejurnal.litbang.pertanian.go.id/index.php/fae/article/download/4049/3378", "oa_status": "HYBRID", "pdf_src": "Neliti", "pdf_hash": "576cb706ca7d1a0d2276a9fcdd691c5f48a756e0", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Geography" ] }
226322983
pes2o/s2orc
v3-fos-license
Raising Curiosity about Open Data via the 'Physiradio' Musicalization IoT Device

Open data is both a technical concept and a political movement, since datasets (e.g., on environment and business) can be used to verify or falsify (ex ante and ex post) governmental policies. But data analysis is not for the masses, and non-experts may not even know that open data exist. Here the challenge is to raise interest, curiosity and the need for knowledge in the average person. Data physicalization may be of some help: by creating a familiar device (e.g., a radio) that 'physicalizes' some publicly available data, the authors are trying to raise curiosity about the source and availability of open data and about the techniques underlying data access, extraction and analysis. This paper presents the prototype of a desktop 'Physiradio' that plays internet streams according to a mapping between weather conditions and musical genre, i.e., a musicalization process. The association (weather → musical genre) is subjective but understandable by most people: the device's internal workings can be almost fully grasped by non-experts, so it can be used as a conversation starter. Physiradio was field-tested among coworkers, students and other people through a quanti-qualitative information-gathering process. The field-test data presented here can be useful to measure its efficacy in raising curiosity about open data and in conveying data through musical genre.

Data Physicalization may help open data

Open data (OD) has become a worldwide movement involving governmental and non-governmental actors. Berners-Lee (2009) and Davies (2010) defined OD ratings to highlight the importance of technical aspects of openness, such as the use of open standards and non-proprietary file formats for publishing. OD is also becoming a political movement, since datasets (e.g., on environment and business) can be used to verify or falsify (ex ante and ex post) governmental policies: anyone with enough knowledge can retrieve data from public servers and study the effects of laws, such as a tobacco taxation change on the number of smokers, vehicle banning on air quality (Trentini 2014) or initiatives to lower the number of unemployed people. But data analysis is not for the masses (Puussaar 2018): non-experts either do not know that OD exist, or they are not able to extract information from a dataset (Frank et al., 2016). Here the challenge is to raise interest, curiosity and thus the need for knowledge in the average person. Even if the less tech-savvy eventually decide not to learn how to analyse datasets, at least they will have had the chance to reason about the possibility of leveraging (with the help of a data scientist) their right to civic accountability.

Data Physicalization (DP) is a recent research area based on the physical representation of data of any nature and kind. While physical data representations have existed for centuries (Dragicevic and Jansen, n.d.), the availability of actuated tangible interfaces, advanced pervasive technologies and the increasingly widespread distribution of embedded systems and components led to the development of DP: creating "modern" ways to (often dynamically and interactively) represent data through informatics tools coupled with sensors and actuators. The authors wonder whether DP may help spread OD awareness by creating a familiar and unthreatening physicalizing device, which may increase curiosity about the source and availability of some data.
According to (Jansen et al., 2015), DP may:
• "help people explore, understand and communicate data using computer-supported physical data representations";
• make data more accessible/reachable;
• foster cognitive benefits;
• democratize data into the real world;
• engage people.

The democratization aspect is proposed in (Verweij et al., 2019), where 'Domestic Widgets' are used to successfully support household creativity and co-creation of data representations. IoT (Internet of Things) devices are exploited in (Houben et al., 2016) with respect to "the potential to democratize the access, use, and appropriation of data", since "most of the data is 'black box' in nature: users often do not know how to access or interpret data". These devices, "blended into homes", can be used to engage users.

For their project, the authors were intrigued by data representations not mainly based on visualization (i.e., sight), which is the most common and perhaps most natural choice in a technical context, but relying instead on other human senses. A preliminary analysis of the papers listed on the official DP website (Jansen, n.d.) was carried out to extract those describing a physical prototype/product giving a real implementation of the DP concept. Three main qualitative factors were analysed, listed below side by side with the corresponding quantitative encoding in the dataset (Scaravati, 2020a):
1. human senses exploited → SENSES, boolean flags for every human sense (SIGHT, TOUCH, HEARING, TASTE, SMELL), where 1 = exploited and 0 = not exploited;
2. interactivity level → INTERACTION, described by 3 values:
• 0 = no interaction;
• 1 = interaction changes physicalization parameters (the data remain the same);
• 2 = interaction changes the dataset or updates the physicalization;
3. data dynamicity level → DYNAMICITY, a boolean flag (STATIC = 0, DYNAMIC = 1), where DYNAMIC means there is a constant (real-time) connection between physicalization and data, i.e., if the data change, the physicalization is updated.

The results (Figure 1) show that the most used senses are sight and touch, while hearing, taste and smell are seldom exploited. The authors therefore wondered whether there was a way to take advantage of these "minor" senses. Avoiding the technological difficulties of using smell and taste in a physicalization, the authors' choice fell on hearing, i.e., using music, and genre in particular. Music is part of everyone's life: willing or unwilling, we all listen to it, and it stimulates emotions and moods that may be useful to convey data. Although the perception of every song, of any genre and artist, is different for each one of us and brings subjectivity into the interpretation, the thought of using music as a means of "physicalizing" data, obviously through hearing, was sound (pun intended).

Finally, object interactivity and data dynamicity were considered: the paper analysis shows low levels for both aspects. Many devices use static data and allow little physical interaction. The authors therefore devised a solution, named 'Physiradio', capable of managing data in real time, where both the data to represent and the experience with which to interact are dynamic and continuously updated.
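As a recap of this encoding scheme, Physiradio itself could be represented roughly as follows. This is an illustrative Python record only, not an actual entry of the published dataset (Scaravati, 2020a); the feature values reflect device characteristics described later in the paper.

# Hypothetical dataset row encoding Physiradio with the three factors above.
physiradio = {
    "SIGHT": 1,    # coloured RGB LEDs (described later in the paper)
    "TOUCH": 0,
    "HEARING": 1,  # genre-tagged music streams
    "TASTE": 0,
    "SMELL": 0,
    "INTERACTION": 2,  # remote commands can change station/city, updating the physicalization
    "DYNAMICITY": 1,   # real-time link to weather data: when the data change, so does the music
}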
Blended Internet of Things

IoT (Internet of Things) refers to (often) small devices directly connected to a network. In particular, such devices usually have the ability to transfer data back and forth without human intervention; they can be simple sensors/actuators or more complex devices, such as personal assistants managing environmental conditions (e.g., air conditioning, kitchens, production lines, vehicles). To implement a device acceptable to many people, the Physiradio creators decided to build a couple of IoT appliances inside vintage wooden Magneti Marelli speaker boxes.

Why go with IoT instead of other implementations? Thinking about the use cases of this specific physicalization, the authors realized that the best idea was to create something that every person (regardless of age) could play with, representing data in the comfort of her/his own home, workspace or car: a kind of blended "smart home" device which, in future developments, may be adapted to diverse situations.

Next, the problem of choosing an easy-to-grasp OD source was addressed. The authors searched for a kind of data that can be interpreted and understood by anyone, not only by people with a scientific/technical background. Weather conditions came out as the most natural choice because they are within everyone's reach; while this may not seem the most useful information, in the context of this experiment it was just a starter for conversations and a stimulus for suggestions, which in fact came in quantity (see section 6). Physiradio relies on OpenWeatherMap, an OD platform that provides many standard meteorological services. In addition, it supplies an API (Application Programming Interface) allowing software access to real-time meteorological data.

Music physicalization ("Musicalization")

When trying to analyze a music track, there are many parameters to consider (e.g., tempo, mode, pitch, loudness), and it is complicated to evaluate which song is best suited to a particular context. Another fundamental aspect is that the lyrics of a song can convey information to the listener (if the words are understood, of course), but music and lyrics may be discordant with respect to the mood the song wants to convey ('Some Nights' by Fun is a good example of this feature: it sports sad lyrics with an upbeat tempo), thus causing problems for any classification effort.

There are specific techniques to transfer information via the generation of simple sounds, such as so-called sonification; see for example (Bonafede et al., 2018) and (Ludovico and Giorgio, 2016), who present face-tracking and sound-synthesis techniques to sonify facial expressions in order to help people with visual problems, and a reference system to interpret already-existing and future sonification models. A simple and steady sound may fail to hold the listener's attention and may become unbearable, so when a softer technique is needed, musification comes to help. Musification has been defined as the musical representation of data: it is designed to go beyond direct sonification and includes elements of tonality and the use of modal scales to build music compositions (see Coop, 2016). The resulting musical structures take advantage of higher-level musical features, such as polyphony and tonal modulation, in order to entertain the listener more than in the case of sonification. Musification and sonification have a feature in common, namely being monotonically deterministic in their results: given the same input, the output sound or music sheet/track will always be the same.
Physiradio tries to reduce the degree of determinism by broadening the association 'data → music' and introducing the concept of "musicalization": instead of generating sounds (i.e., "sonification"), "musicalization" is the act of selecting music according to data. In particular, Physiradio chooses and plays categorized, genre-tagged streams available on the Internet.

How to map weather conditions? Physiradio gets the weather condition values of a configured city (through the OpenWeatherMap APIs), elaborates them by extracting only the main description and the relative humidity level, and then "physicalizes" them into a combination of music genre and colour, i.e., the mapping function is:

map(WeatherConditionDescription, Humidity) → (MusicGenre, Colour)

(a minimal sketch of this mapping is given after the prototype description below). To implement an initial mapping to experiment with, a reference study was examined: in (Karmaker et al., 2015) many parameters are taken into account to create a model of a weather-based music selector: mode, tempo, pitch, rhythm, harmony and dynamics for music; temperature, humidity, pressure, wind, sunshine, cloudiness and precipitation for weather. But the main goal of Physiradio is just to increase curiosity about OD and DP, so there is no need for a perfect mapping; moreover, there is not enough metadata available for the freely usable Internet radio streams. The authors therefore reduced the amount of information needed and searched for previous studies on how musical genres inspire specific moods in people, such as (Worlu, 2017). Modern musical genres, such as LoFi, ChillOut, Smooth Jazz and various types of extreme metal, were not found in the papers examined for this work. Nonetheless, these genres are very suggestive and extremely specific: most songs belonging to them sound very similar to each other, which is useful for conveying the same information with different songs, so the authors chose to use them in the final mapping.

During prototype development, the authors thought about adding an option to display coloured light (using RGB LEDs) to help the device's readability. Sight is the most used sense in data representation, and this could be an interesting factor that, mixed with music, could bring semantics and help the user interpret the data while interacting with the device. To introduce colours, the authors relied on Robert Plutchik's model (Plutchik, 2001), in particular the wheel of emotions: even if it is a dated work (1980), it is still considered one of the most important psychological studies of human emotions, and its mapping to colours is now part of this project. To pair genre and colour values in the mapping function, the authors applied studies such as (Worlu, 2017) that analyse how musical genres trigger specific moods and emotions in people. Moreover, during development, interventions on genre choice based on cultural associations were applied, such as the 'Xmas mood' and the specific music genres cited above. The field-tested mapping is presented in Table 1, where L# refers to the listening order played during the experiments.

The Physiradio prototype

Physiradio (see Figure 3) is a desktop streaming-based IoT radio built around the following components:
1. an ESP8266 (a Wemos D1 mini, an Arduino-compatible MCU);
2. a VS1053 MP3 codec (LC Technology);
3. a WS2801 RGB LED strip with 5 LEDs;
4. a "Vintage" (circa 1940) Magneti Marelli wooden speaker box mounted with a modern 4 Ω loudspeaker.
The prototype is built around an ESP8266 MCU (Micro Controller Unit) board sporting an integrated WiFi chip that can easily connect to a local network. The software inside Physiradio, developed in the Arduino IDE, is GPL-licensed (since the authors believe in the verifiability and reproducibility of Free Software) and can be downloaded at https://github.com/simoneScaravati/Physiradio. Physiradio is an IoT device and supports a popular interaction protocol adopted for this kind of appliance, i.e., MQTT (Message Queue Telemetry Transport). Supporting this protocol is essential for adding (remote) interaction with Physiradio: through MQTT commands it is possible to control the behaviour of the device, such as changing the volume or switching to another station (stream).

At present, Physiradio connects to the OpenWeatherMap APIs, gets the weather data (in JSON format) of the given city, maps them to a web radio stream, plays the stream and, at the same time, waits for commands (through the serial port or MQTT). The stream is buffered and sent to the VS1053 codec (which converts byte packets into sound) to be played through the speaker. At the same time, the WS2801 strip lights its LEDs with specific colours, according to the mapping explained in section 4.
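To make this fetch-and-map step concrete, here is a minimal Python sketch of the same logic running off-device (the actual firmware implements it in C++ on the ESP8266). The OpenWeatherMap current-weather endpoint and its JSON fields are real; the API key placeholder, the genre/colour entries, the humidity tweak and the fallback pair are illustrative assumptions and do not reproduce the authors' field-tested Table 1.

import requests  # HTTP client, standing in for the firmware's C++ networking code

API_KEY = "YOUR_OPENWEATHERMAP_KEY"  # assumption: a valid OpenWeatherMap API key

# Illustrative mapping only; the field-tested associations are in the paper's Table 1.
WEATHER_TO_GENRE_COLOUR = {
    "Clear": ("smooth jazz", "yellow"),
    "Clouds": ("lofi", "grey"),
    "Rain": ("chillout", "blue"),
    "Thunderstorm": ("metal", "purple"),
}

def current_weather(city: str) -> tuple[str, int]:
    """Fetch the main weather description and the relative humidity for a city."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return data["weather"][0]["main"], data["main"]["humidity"]

def musicalize(description: str, humidity: int) -> tuple[str, str]:
    """map(WeatherConditionDescription, Humidity) -> (MusicGenre, Colour)."""
    genre, colour = WEATHER_TO_GENRE_COLOUR.get(description, ("ambient", "white"))
    if description == "Clear" and humidity > 80:  # hypothetical humidity-based refinement
        genre = "chillout"
    return genre, colour

print(musicalize(*current_weather("Milan")))

On the device, the selected genre is then resolved to the URL of a genre-tagged Internet stream, and the colour drives the WS2801 LED strip.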
Field test, analysis and conclusions

Physiradio is a working prototype of an IoT device playing radio streams according to a mapping between weather conditions (accessed via open APIs) and musical genres. The main goal of this experiment was to test whether such a device can be a conversation starter that raises curiosity about the OD world; the secondary goal was to test whether musical genre can be used in a data mapping through a data physicalization process. The device was field-tested among colleagues, students, friends and frequenters of a local library. Subjects filled in a questionnaire during live presentations of Physiradio. Sample selection is biased because most subjects work in the computer science field. As for the ethical aspects of the questionnaire, the authors did not ask for formal approval since they gathered only anonymous data, all fields were optional and all subjects were informed of the goals of the research and of the fact that the resulting dataset would be published on the web. All gathered data are available on Zenodo (Scaravati, 2020b).

Results show that Physiradio is well accepted (remarks such as "what a beautiful object!" were common) and that it effectively stimulates curiosity about its "internals" and about OD: 64% of the subjects declared a high or very high "curiosity raising" effect, without age correlation (i.e., across all ages). A high percentage of users who found it interesting also assigned it high effectiveness (correlation: 0.49). Only 43.6% of the subjects declared the mapping between weather condition and musical genre coherent. This evaluation is confirmed by analyzing the actual matchings in Figure 4, where four listenings out of seven (namely 4, 5, 6 and 7) were often guessed right (between 56% and 80% of the times), while the remaining three (namely 1, 2 and 3) were rarely guessed right (between 0% and 8% of the times).

One problem emerged very soon: 'musical genre' is a definition too wide to be usefully inverted, i.e., to extrapolate the original weather condition from it. E.g., a "latin american" webradio may play "salsa", "bachata", "reggaeton", "chacha", … (very different subgenres). In fact, in musical terms, there is no universally accepted definition of specific genres. In addition, the association between music, mood and weather is, of course, subjective. While this mapping satisfies the precondition to be a formal 'sonification' (i.e., same weather condition in → same genre out), it is also true that genre may not be enough for everyone to associate with a specific weather condition, even with the help of colour. Moreover, any webradio available on the Internet may be genre-centric and nonetheless play a vast range of songs belonging to the genre itself, so the actual experimental sessions were somewhat influenced by the song currently playing.

A very important suggestion received from a colleague was that, instead of using a genre-centric webradio for every stream, the user could customize the device by defining:
• what data should be taken as input, mapped to an enumeration of values;
• the playlists to create;
• the association of playlists to values.
In fact, Physiradio can be fully controlled via an MQTT API with commands such as 'volume', 'station' and 'city' (to get specific weather data); a minimal control sketch is given below. This way, even a less tech-savvy user could implement a suitable customization. Of course, the simplest solution to the 'subjectivity problem' would be to describe the mapping in the documentation (or via a small display) so that, once assimilated, a user would not need to look at the device later.

More suggestions received (to address the subjectivity) were:
• using music from 'formal' dancing (e.g., 'salsa', 'tango', 'can can', 'tarantella', rather than general dance/disco music), since these are more canonized and recognizable;
• exploiting the lights better, e.g., by pulsating the LEDs according to the tempo.

Anyway, the primary goal of Physiradio was achieved, since during the experimental sessions many subjects were genuinely inspired by the device and started suggesting other mappings/musicalizations based on their work and life experience. This is a list of their feedback:
• overall CPU load (in a server farm);
• network traffic, not only in terms of volume but also in terms of type of traffic (denial-of-service attacks, mail spamming with many repeated messages);
• city traffic conditions;
• train timings/delays ("If you need to get out of your home at the right time, but you're shaving/bathing or having breakfast");
• cooking times ("If you fancy a musical oven timer.");
• call centre waiting times (music representing the time to wait for an operator to answer);
• an outsourced (i.e., probably very far away) call centre operator could "listen" (in the background) to the current weather conditions at the caller's location, adapting the style of the call to the weather experienced by the caller;
• in general, any situation where a 'variable' (data) needs continuous monitoring but cannot be represented by a simple and very annoying tone/sound, taking advantage of the "superior" discerning power (in terms of change recognition) of hearing over sight (Rabenhorst et al., 1990);
• in general, contexts where children may be involved, as they are more sensitive to music and colours.

These field experiments were very satisfying: subjects became very talkative and asked many questions; the mapping is far from perfect, but Physiradio succeeds in stimulating curiosity and imagination, and that was the authors' main goal.
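As a sketch of the control path mentioned above: the paper names the MQTT commands 'volume', 'station' and 'city', but does not document the topic name or payload syntax, so both are assumptions here. The snippet uses the paho-mqtt helper module.

import paho.mqtt.publish as publish  # pip install paho-mqtt

BROKER = "broker.local"        # assumption: hostname of the MQTT broker Physiradio listens to
TOPIC = "physiradio/command"   # assumption: the actual topic name is not given in the paper

# Commands named in the paper; the 'name=value' payload syntax is hypothetical.
publish.single(TOPIC, "volume=15", hostname=BROKER)   # change the playback volume
publish.single(TOPIC, "station=3", hostname=BROKER)   # switch to another stream
publish.single(TOPIC, "city=Milan", hostname=BROKER)  # fetch weather data for another city

With such an interface, the playlist customization suggested above could be implemented entirely on the user's side, by publishing a different station for each data value.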
During the questionnaire sessions, the genre mapping was soon overshadowed by brainstorming about other data mappings, showing that a device that does not look like a computer can be better appreciated and is less frightening, above all for less technology-oriented people. In conclusion, the authors think that this kind of data physicalization, the musicalization process combined with a non-frightening device (a software-only solution would have been less effective), could be a useful starting point to develop new ways of raising interest in data, and in OD in particular. The next step will be to use the device as soft leverage to introduce mini-seminars on OD: will subjects be more keen on listening to technical content after having been exposed to Physiradio?
2020-10-28T19:18:18.076Z
2020-10-21T00:00:00.000
{ "year": 2020, "sha1": "57ce0fe6d00113bab4e6f58d740cc4a218bb4c48", "oa_license": "CCBY", "oa_url": "http://datascience.codata.org/articles/10.5334/dsj-2020-039/galley/1002/download/", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f63e97b39ae1744cf8736811d4c9995d3f4ceb54", "s2fieldsofstudy": [ "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
16687893
pes2o/s2orc
v3-fos-license
Isolated trisomy 7q21.2-31.31 resulting from a complex familial rearrangement involving chromosomes 7, 9 and 10

Background

Genotype-phenotype correlations for chromosomal imbalances are often limited by the overlapping effects of partial trisomy and monosomy resulting from unbalanced translocations, and by the poor resolution of banding analysis for breakpoint designation. Here we report the clinical features of isolated partial trisomy 7q21.2 to 7q31.31, without the overlapping phenotypic effects of partial monosomy, in an 8-year-old girl. The breakpoints of the unbalanced rearranged chromosome 7 could be defined precisely by array-CGH, and a further imbalance could be excluded. The breakpoints of the balanced rearranged chromosomes 9 and 10 were identified by microdissection of fluorescence-labelled derivative chromosomes 9 and 10.

Results

The proband's mother showed a complex balanced translocation t(9;10)(p13;q23) with insertion of 7q21.2-31.31 at the translocation breakpoint at 9p13. The daughter inherited the rearranged chromosomes 9 and 10 but the normal chromosome 7 from her mother, resulting in partial trisomy 7q21.2 to 7q31.31. The phenotype of the patient consisted of marked developmental retardation, facial dysmorphism, short stature, strabismus and hyperextensible metacarpophalangeal joints.

Discussion

For a better understanding of the genotype-phenotype correlation, a new classification of 7q duplications based on the findings of molecular karyotyping is needed; the description of well-defined patients is therefore valuable. This case shows that FISH microdissection is of great benefit for precise breakpoint designation in balanced rearrangements.

Background

Phenotypic reports of chromosomal imbalances are an important source for genetic counselling, especially in prenatal diagnosis. Chromosomal imbalances arise de novo or as the result of a familial rearrangement. The most common familial rearrangements are translocations. In the case of unbalanced segregation in an offspring, the resulting imbalances consist of a combination of partial trisomy and partial monosomy. In most cases it is impossible to relate the phenotypic consequences of an unbalanced translocation exactly to either the underlying partial trisomy or the partial monosomy; therefore, many case reports are of limited value for genetic counselling, because the phenotypic effects of trisomy and monosomy overlap [1]. Another difficulty in the description of the phenotypic consequences of a certain chromosomal imbalance is breakpoint designation. The precise description of the breakpoint is important for the genotype-phenotype correlation. In patients investigated solely by cytogenetics, breakpoint designation remains doubtful due to the limited resolution of chromosome banding techniques. In recent years, comparative genomic hybridisation (CGH) techniques such as array-CGH have overcome many of the limitations of classical chromosome banding analysis and can estimate breakpoints with an accuracy of a few kb. However, breakpoint designation by CGH and array-CGH is restricted to unbalanced rearrangements. In the case of balanced rearrangements, or combinations of balanced and unbalanced rearrangements as in the present case, further molecular cytogenetic techniques have to be combined with array-CGH, such as microdissection and fluorescence in situ hybridisation (FISH).

Case report

The female patient is the first child of healthy non-consanguineous parents. The father is German, the mother is from Pakistan.
The family history was unremarkable. The girl was born spontaneously after an uneventful pregnancy at 39 weeks + 0 days, with a length of 48 cm (-1.3 SD), a weight of 2260 g (-3.2 SD) and a head circumference of 34 cm (-0.4 SD). The APGAR scores were 8/9/10 and the umbilical cord pH was 7.2. Due to muscular hypotonia, nasogastric feeding had to be initiated. At the age of four months she was admitted to hospital due to repeated vomiting; at that time developmental delay was noted. At the age of five months, frontal bossing, relative macrocephaly and strabismus were observed. She started walking at the age of 5 3/12 years. At the last presentation, at the age of 7 8/12 years, she spoke only single words, while, according to her parents, her receptive language skills were considerably better. There were no behavioural problems. At that time she was not yet continent. Her general health was good. Length was 112.5 cm (-2.6 SD) and head circumference 51.5 cm (-0.4 SD). The inner canthal distance was 3.3 cm (+2.0 SD). She had bilateral epicanthus and slightly downslanting palpebral fissures. The previously noted strabismus had improved. The metacarpophalangeal joints of the fingers were hyperextensible. Due to recurrent ear infections she had received ventilation tubes twice. Generalised hypertrichosis was observed (Figure 1).

Methods and Results

Chromosome analysis in the girl was performed on peripheral blood lymphocytes according to standard techniques and revealed derivative chromosomes 9 and 10. Chromosome analysis of the parents revealed a normal male karyotype in the father and, in the mother, a balanced rearrangement t(9;10)(p13;q23)ins(9;7)(p13;q21.3q31.3) in all 20 metaphases analysed (karyotype described according to ISCN 2009). This unmasked the derivative chromosomes of the daughter as the result of a malsegregation of the complex maternal translocation (Figure 2): the girl inherited the derivative chromosomes 9 and 10 but a normal chromosome 7 from the mother, resulting in isolated partial trisomy 7q21.3 to 7q31.3.

Figure 2. Partial karyograms after GTG-banding. (A) Mother: 46,XX,t(9;10)(p13;q23)ins(9;7)(p13;q21.2-31.31); (B) daughter: der(9)(10qter→10q23::7q21.2→7q31.31::9p13→9qter), der(10)(10pter→10q23::9p13→9pter) (according to ISCN 2009). The derivative chromosomes are marked by arrows.

To estimate the chromosomal breakpoints of the derivative chromosome 7 and to exclude further imbalances, we performed array-CGH on the patient's lymphocytes using the Human Genome CGH Microarray 244A platform (overall resolution 0.15 Mb; Agilent Technologies, Santa Clara, USA) according to the manufacturer's instructions. The array was scanned with the G2565CA Microarray Scanner System (Agilent Technologies). To estimate the chromosomal breakpoints of the derivative chromosomes 9 and 10, these chromosomes, as well as the derivative chromosome 7, were microdissected from chromosome preparations of the mother and rehybridised to normal human chromosomes [2]. In brief, to detect the chromosomal breakpoints, spreads of the derivative metaphases of the mother were hybridised with three self-made whole chromosome painting (wcp) probes: the wcp probe for chromosome 7 was labelled with DEAC (diethylaminocoumarin-5-dUTP; NEN Life Science Products Inc., Boston, MA, USA), the wcp probe for chromosome 10 was labelled with R110-dUTP (Perkin Elmer; Waltham, MA, USA), and the wcp probe for chromosome 9 was labelled with Spectrum Orange-dUTP (Vysis Inc.; Downers Grove, IL, USA).
Subsequently, the fluorescence-labelled derivative chromosomes were isolated with a glass needle, amplified by DOP-PCR, labelled with three different fluorochromes and rehybridised to normal human chromosomes. The microdissected chromosome 7 was labelled with R110-dUTP, the microdissected chromosome 10 with Spectrum Orange-dUTP and the microdissected chromosome 9 with Texas Red-12-dUTP. For better discrimination between the labelling of the wcp probes and the subsequent labelling of the microdissected chromosomes, the rehybridised chromosomes were displayed in different colours (R110-dUTP in ice blue, Spectrum Orange-dUTP in purple and Texas Red-dUTP in yellow; Figure 4). The breakpoints of the derivative chromosomes could be identified by tracing the labelled chromosomal segments back to the ideograms of chromosomes 7, 9 and 10 (Figure 4).

Because of the complex maternal rearrangement, amniocentesis was performed in a subsequent pregnancy, revealing a balanced complex translocation in a male fetus. The boy was born at term with normal measurements (length 56 cm (+2 SD), weight 3020 g (-1.7 SD)). His motor development was normal; he started walking at the age of 11 months. At the age of 5 10/12 years his length was 121 cm (0.74 SD). He attended preschool on time.

Discussion

There are many publications on partial trisomies in 7q. In most cases the duplication resulted from a familial translocation involving the long arm of chromosome 7 and another chromosome, leading to partial trisomy/monosomy 7 and partial trisomy/monosomy of the translocation partner, respectively [3][4][5][6][7][8][9]. About 19 patients with isolated trisomy involving various regions of 7q have been described [10,11]. The phenotype varies according to the region that is duplicated and the size of the duplication. In an attempt to correlate the karyotype with the phenotype, patients with partial trisomy 7 have been divided into groups. Novales and co-workers suggested three groups [3]. Patients with a duplication 7q21 or q22 to 7q31 belong to group 1; the phenotype includes facial dysmorphism (frontal bossing, narrow palpebral fissures, epicanthus and hypertelorism), strabismus, hypotonia and developmental delay. Group 2 includes patients with a duplication 7q31 to 7qter; the phenotype is characterised by low birth weight, a large fontanel, facial dysmorphism (narrow palpebral fissures, hypertelorism, small nose, low-set and malformed ears, microretrognathia), cleft palate, developmental delay, skeletal anomalies and a reduced life expectancy. Group 3 is defined by a duplication of 7q32 to 7qter; these patients show low birth weight, facial dysmorphism (low-set ears, small nose), kyphoscoliosis, skeletal anomalies, hypotonia and developmental delay. Courtens et al. described a group 4 with a duplication involving 7q21 or q22 to 7qter [12]. One has to bear in mind that these clinical descriptions are mainly based on patients assessed by chromosome banding analyses.

The patient described herein has isolated partial trisomy 7q21.2 to 7q31.31 without additional chromosomal imbalances, as confirmed by array-CGH. She therefore fits best into group 1 and displays the typical symptoms, namely low birth weight, global developmental delay with marked hypotonia in infancy, marked delay in speech development, mild short stature, normal head circumference, strabismus and mild unspecific facial dysmorphism. Our patient can best be compared to the patients described by Humphreys et al. and Romain et al.
[13,14]. Low birth weight was also a symptom in the patients described by Grace et al. and Berger et al. [15,16]. In contrast to other descriptions, the palpebral fissures were of normal size.

Conclusion

To enable a future new classification of duplications in 7q based on the findings of molecular karyotyping, the description of well-defined patients is valuable. Furthermore, this case shows that FISH microdissection is of great benefit for breakpoint designation in cases of balanced and/or complex rearrangements.

Consent

Written informed consent was obtained from the parents of the patient for publication of this case report and all images.
2017-06-25T00:18:41.376Z
2011-12-05T00:00:00.000
{ "year": 2011, "sha1": "283b209be5539cb676d412c2e7b21d1b73b8334d", "oa_license": "CCBY", "oa_url": "https://molecularcytogenetics.biomedcentral.com/track/pdf/10.1186/1755-8166-4-28", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "283b209be5539cb676d412c2e7b21d1b73b8334d", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
243954599
pes2o/s2orc
v3-fos-license
Microencapsulation for Functional Textile Coatings with Emphasis on Biodegradability—A Systematic Review

This review provides an overview of research findings on microencapsulation for functional textile coatings. Methods for the preparation of microcapsules for textiles include in situ and interfacial polymerization, simple and complex coacervation, molecular inclusion and solvent evaporation from emulsions. Binders play a crucial role in coating formulations: acrylic and polyurethane binders are commonly used in textile finishing, while organic acids and catalysts can be used for chemical grafting as crosslinkers between microcapsules and cotton fibres. Most conventional coating processes can be used for microcapsule-containing coatings, provided that the properties of the microcapsules are appropriate. Standardised test methods are available to evaluate the characteristics and washfastness of coated textiles. Among functional textiles, the field of environmentally friendly biodegradable textiles with microcapsules is still at an early stage of development. So far, some physico-chemical and physical microencapsulation methods using natural polymers or biodegradable synthetic polymers have been applied to produce environmentally friendly antimicrobial, anti-inflammatory or fragranced textiles. Standardised test methods for evaluating the biodegradability of textile materials are available. The stability of biodegradable microcapsules and the durability of coatings during the use and care of textiles still present several challenges, which offer many opportunities for further research.

Motivation and Research Questions

The aim of this article is to provide a systematic review of recent research in the field of functional textile coatings based on microcapsules, with a particular focus on the biodegradability of microcapsules and textile products. More specifically, the review aims to elucidate the following questions: What are the trends in research publications on microcapsules for functional textiles? What are the purposes, effects and uses of microencapsulation in functional textile coatings? Which of the microencapsulation methods are suitable for applications in functional textiles, and for which active ingredients and shell materials? Which application techniques are used on textiles, with which binders, catalysts and pretreatments of the textiles? What is the stability of microcapsules during formation, application and textile care, and what standardised testing methods are available? What proportion of research is focused on biodegradable microcapsules applied to biodegradable textiles? Which microencapsulation methods and materials are used in such products? Are the biodegradable microcapsules and formulations sufficiently stable for application and use in functional textile coatings? To our knowledge, these aspects have not yet been covered in the review articles published to date.
Publication Trends in Microencapsulation for Textiles

Industrial applications of microencapsulation were first introduced by the National Cash Register Company in the late 1950s for the encapsulation of leuco dyes in the manufacture of carbonless copying paper [1][2][3]. Since then, microencapsulation has been continuously improved and adapted for a variety of applications and functional effects. In addition to paper printing, microcapsules have been used in textiles, for pharmaceutical and medical purposes, in cosmetics and food products, in agricultural formulations, and in the chemical, biotechnology, photography, electronics, building materials and waste treatment industries [4][5][6][7][8][9].

The first ideas for the specific use of microcapsules in textile products were born about five decades ago and soon developed into an important research field, not only with a growth in scientific publications but also with pronounced protection of industrial intellectual property rights. As a result, the number of patent applications significantly exceeds the number of scientific articles (Figure 1), with China, Turkey and India being the leading countries of origin for scientific articles, and Japan, the USA and China being the top countries of origin for patents (Figure 2).

Microcapsules in Functional Textile Products

In the manufacture of textiles, garments and apparel, microencapsulation offers many opportunities to improve properties or provide entirely new functionalities, leading to broader usability and higher market value of the products. Scientific and patent literature reviews on microencapsulation applications in textiles have already been published [12][13][14][15][16][17][18]. In this paper, we aim to analyse recent scientific articles and highlight examples of research results in the field of functional textiles with microcapsules, as summarized in Figure 3, with a particular focus on the biodegradability of the final products.

One of the initial microencapsulation applications for achieving innovative effects in textile processing has been microencapsulated dyes and pigments for special textile printing and dyeing. Varieties of these include microencapsulated colorants for permanent dyeing and printing of textiles [19][20][21], as well as colour-changing textiles based on thermochromic microcapsules [22][23][24][25], photochromic dyes [26][27][28][29] and electrochromic textiles containing microencapsulated liquid crystals [30][31][32].

To achieve durable flame resistance of textiles, organic or inorganic fire retardants have been microencapsulated and applied to textile substrates. Microencapsulation has been used to prevent exudation or sublimation of fire-retardant chemicals, to avoid reactions with textile polymers and/or to overcome the hydrophilicity of the substances. Products include firefighting and military protective clothing, as well as textiles for automotive and domestic interiors [33][34][35][36].
One of the flourishing applications of microencapsulation is functional textiles for active thermoregulation, used in insulating textiles, technical clothing and sportswear. Most textiles for thermal regulation use phase change materials (PCMs), in which a dynamic heat exchange process occurs at the melting-point temperature. To overcome the practical problems of solid-liquid phase transitions, PCMs must be microencapsulated and converted into solid formulations. When a PCM undergoes a solid-to-liquid phase transition, energy is stored in the form of latent heat at a constant temperature. The accumulated latent heat is released when the PCM re-solidifies, and the transition process is reversible. Typical organic PCMs are paraffin hydrocarbons or lipids with a melting point close to body temperature [37][38][39][40][41][42][43][44][45][46]. In addition to classical PCMs, photothermal energy conversion materials perform similar functions: by absorbing light and converting it into thermal energy, they are used in light-absorbing thermoregulatory textiles [47][48][49].

Fragranced textiles often contain essential oils, perfumes or aromas in microencapsulated form, either to gradually release the active ingredients through permeable shells or to protect the cores inside impermeable microcapsules until they are released by mechanical pressure or rubbing during product use. Modifications of the shell materials and binding formulations play an essential role in achieving better washing resistance over multiple washing cycles and in prolonging olfactory sensations [63][64][65][66][67][68].

Some aromatic compounds, such as essential oils and their components, not only provide a pleasant fragrance but also offer antimicrobial protection [69]. Being liquid, volatile and susceptible to oxidation, they require microencapsulation for protection and conversion to the solid state. The release mechanisms vary from slow diffusion through a permeable shell to instantaneous release triggered by pressure or melting. Antimicrobial textile products include hygiene masks, footwear, sportswear, medical garments and biofunctional materials [70][71][72][73][74][75].

Functional textiles initially focused only on individual value-adding properties. However, recent research has targeted combinations of multiple properties and effects, leading to new multifunctional smart textiles with three or more functionalities in one product, such as simultaneous aromatic, antimicrobial, UV-protective and superhydrophobic effects [86][87][88][89].

Purposes and Effects of Microencapsulation in Functional Textiles

Not all functional textiles need to contain microcapsules. Microencapsulation has been used as a means to impart finishes and properties to textiles that were not possible or cost-effective with other technologies [18]. Therefore, microencapsulation can be used beneficially in functional textile products for three main groups of purposes and effects [14,15,45,68,69] (Figure 4):
• Permanent protection or separation of a core material for the life of the product. Such long-life microcapsules provide localized activity by permanently confining the liquid core within a mechanically resistant shell, as in PCMs for active thermal regulation or in colour-changing textiles with electrochromic, photochromic and thermochromic materials.
• Targeted release of the core under planned conditions that trigger the opening of the shell. When temporary isolation and rapid, targeted release of active components from the core is envisaged, microcapsules with impermeable shells are burst open by mechanical pressure, abrasion, melting or thermal decomposition. Until release, the active components in the microcapsule core remain separated from reactive components (leuco dyes and colour developers), converted from a liquid to a solid state and protected against evaporation (essential oils) or against environmental influences and oxidation (essential oils, lipids and vitamins).
• Long-lasting, gradual release by diffusion through a permeable microcapsule shell. This principle is used in long-lasting perfumed textiles, in insect-repellent fabrics and in sustained-release medical and cosmetic textiles.

Microencapsulation Methods for Functional Textiles

During the evolution of microencapsulation technologies, a wide range of processes and techniques have been developed and modified to produce microcapsules with the desired materials and target properties. Microcapsule size, morphology, shell and core materials, release mechanisms, compatibility with other components of the formulation, application technologies to textiles, environmental impact and biodegradability can be imaginatively defined and thoughtfully planned. Various authors of reviews in the field of microencapsulation have classified microencapsulation methods in different ways [81,[90][91][92][93][94][95]. However, not all methods are applicable specifically to textiles, and microcapsules have most commonly been produced using one of the methods shown in Figure 5, with examples analysed and specified in sections 5.1-5.3.

Chemical Microencapsulation Methods for Functional Textiles

Chemical microencapsulation methods take place in emulsions and are based on the polymerization of monomers around emulsified core droplets to form a solid and durable polymer wall. Research examples of chemical microencapsulation methods used for textile functionalization are listed in Table 1. In situ polymerization of aminoaldehyde resins appears to be the most frequently used microencapsulation method for functional textiles, particularly for producing scented textiles, thermal protective clothing with PCMs, flame-retardant textiles, photochromic fabrics and antimicrobial textile products. Interfacial polymerization has been used to prepare polyurea or polyurethane microcapsules for scented or perfumed textiles, while photopolymerization has been experimentally applied for multifunctional cotton textiles. (Table 1, not reproduced here, includes examples such as cotton woven fabrics with fragrant, antimicrobial or flame-retardant functionalisation; male and female fragrance oils; cotton fabrics with reversible photochromic response [104]; and, for the interfacial polymerization method, polyurea from hexamethylene diisocyanate and guanidine carbonate with N-TiO2 particles loaded on the shell surface.)
In Situ Polymerization Microencapsulation

In situ polymerization takes place in oil-in-water emulsions, allowing the formation of microcapsules with hydrophobic core materials immiscible with water. The result is spherical, reservoir-like microcapsules with smooth, transparent, durable and pressure-sensitive shells (Figure 6). Typical shell materials for in situ polymerization are amine-aldehyde resins (aminoplasts), such as urea-formaldehyde, melamine-formaldehyde, urea-melamine-formaldehyde or resorcinol-modified melamine-formaldehyde synthetic polymers. The synthesis can start either from monomers, such as urea and formaldehyde or melamine and formaldehyde, or from prepolymers, such as partially methylated trimethylolmelamine or hexamethoxymethylolmelamine, which are easier to control [108,109].

Since polymerization of all shell materials occurs exclusively in the continuous water phase and, at the interface formed by the dispersed core material and the continuous phase, on the side of the continuous phase, all shell monomers or prepolymers must be water-soluble. Polymerization initially produces prepolymers of relatively low molecular weight that remain soluble in the continuous phase; as the molecular weight increases, the polymers deposit on the surfaces of the dispersed cores in the emulsion. Polymerization continues at the surface of the droplets, and a solid shell is formed as crosslinking occurs. The separation and deposition process by which the microcapsule shell is formed largely determines the encapsulation efficiency and shell morphology, and can be controlled by changing the pH and temperature, by the amine and aldehyde type and molar ratio, and by the type and amount of emulsifier used [110].

Under ideal conditions, all of the shell material precipitates and distributes evenly over the surfaces of the hydrophobic cores in the emulsion. To achieve better process control and improved mechanical properties of the microcapsules, emulsifiers/modifiers must be added, initially to improve emulsification and later to ensure that polymerization develops only on the surface of the emulsified microcapsule cores and not throughout the whole aqueous phase [109]. Examples of such emulsifiers/modifiers include styrene-maleic anhydride polymer [43,48,100,109] and polyacrylic acid [96].

Aminoaldehyde microcapsules have excellent technological properties and are durable and resistant to chemical and physical agents. However, they have two important disadvantages: poor environmental degradability and the release of formaldehyde. Hydrolytic degradation of urea-formaldehyde polymers leads to significant weakening of resin bonds and is a source of formaldehyde emissions [111,112]. This is particularly problematic in textile applications, as the limits for maximum allowable formaldehyde concentrations in various products, including textiles, have been lowered in recent decades. To reduce the formaldehyde content, formaldehyde residues can be removed from the microcapsule suspension at the end of the in situ process by adding scavengers such as urea, melamine, ammonia or ammonium chloride [109,111,113,114].
Interfacial Polymerization Microencapsulation

In microencapsulation by interfacial polymerization, one of the monomers is dissolved in the aqueous phase and the other in an organic lipophilic solvent of the emulsion. The two monomers react at the droplet interface to form a polymer membrane, the microcapsule shell. The active core material can be oil-soluble or water-soluble, so an oil-in-water or water-in-oil emulsion must be selected accordingly. Four main types of shell polymers have been developed and used in microencapsulation by interfacial polymerization: polyamides (reaction of diamines and diacid chlorides) (Figure 7), polyurethanes (reaction of diisocyanates with diols), polyureas (reaction of diamines with diisocyanates) and polyesters (reaction between diacid chlorides and diols). The formation of a polymer shell at an interface involves complex mechanisms that are not yet fully understood. The reaction begins at the liquid interface, and as the shell initially forms, the reaction site moves. As the oligomers in the dispersed droplet become largely insoluble, the polymer precipitates near the interface and reservoir-type microcapsules form. A further comprehensive analysis of the chemical and physical processes involved in microencapsulation by interfacial polymerization, and of the implications for membrane formation and structure, was published in [115]. For textile applications, microcapsule shells of polyurea [105], polyurethane-urea [63,106] and biopolyurethane [107] have been reported.

Physico-Chemical Microencapsulation Methods for Functional Textiles

Physico-chemical microencapsulation methods for textile applications (Table 2) consist of simple and complex coacervation processes and of molecular inclusion with cyclodextrins. Their important advantage is that environmentally friendly shell materials can be used, often of natural origin, which are safer for direct skin contact and for textile degradation after use. The disadvantage, however, is lower durability and resistance to physical and chemical agents during microcapsule application and during the washing and use of functional textiles. (Table 2, not reproduced here, includes examples such as cinnamon essential oil for antimicrobial woven cotton fabrics [70], ethyl cellulose shells, photochromic dyes in ethyl acetate, eucalyptus and cedarwood oils as insect repellents, and cotton and polyamide jersey-knitted pharmaceutical textiles.)

Coacervation

Coacervation microencapsulation takes place in colloidal systems in which macromolecular, colloid-rich coacervate droplets surround dispersed microcapsule cores and form viscous shells that are solidified with crosslinking agents (Figures 8 and 9). In practice, water-insoluble actives are emulsified into a continuous aqueous phase containing a dissolved macromolecular colloid to form an oil-in-water emulsion. The coacervation process is induced by the controlled modification of parameters such as pH, ionic strength, temperature or solubility. Shell formation is driven by the surface tension difference between the coacervate phase, the water and the hydrophobic material. Gelation is achieved by lowering the temperature of the reaction mixture below the gelling point of the gellable hydrocolloid. Permanent hardening of the microcapsule shells is achieved by crosslinking and the formation of new covalent bonds, or by non-covalent hardening through hydrogen bonds formed between molecules; often both types of process occur simultaneously or successively. Among the crosslinking agents, aldehydes (formaldehyde, glutaraldehyde) are mostly used.
Based on the polymer-colloid systems involved, coacervation processes are divided into two subgroups: (a) simple coacervation, when a single polymer is involved and coacervates are formed due to reduced hydration by the addition of a salt or a desolvation liquid such as alcohol, and (b) complex coacervation, when two or more polymer colloids with opposite charges are used to form shells. Common pairs are proteins and polysaccharides, such as gelatine and gum Arabic. The ionic interactions between them lead to coacervate formation and phase separation. A comprehensive analysis of the coacervation processes, their mechanisms, process parameters, materials and applications has been described in [124].

Molecular Inclusion with Cyclodextrins

Cyclodextrins are cyclic oligosaccharides containing at least six D-(+)-glucopyranose units linked by α-(1,4)-glucoside bonds. With lipophilic inner cavities and hydrophilic outer surfaces, they can interact with a variety of guest molecules to form non-covalent inclusion complexes that provide protection and improve the solubility, bioavailability and safety of active compounds. Natural α-, β- and γ-cyclodextrins (with 6, 7 and 8 glucose units, respectively) differ in ring size and solubility and are the most frequently used [125,126]. Direct interaction between cyclodextrin complexes and fibres has been reported in the functionalization of textiles. The use of poly(carboxylic) acids allows some fixation of cyclodextrin complexes to fibres. An example is the grafting of β-cyclodextrin onto hydroxyl groups of cellulose using butane-1,2,3,4-tetracarboxylic acid as crosslinking agent and sodium hypophosphite as catalyst [122].

Physical Microencapsulation Methods for Functional Textiles

A wide range of physical microencapsulation methods using natural shell materials have been developed to produce microcapsules for applications in pharmaceutical, food, cosmetic and detergent formulations [4,127-129]. For textiles, the range of physical microencapsulation methods is limited to a few methods, as shown in Table 3, mainly due to the higher requirements in terms of durability, mechanical strength and resistance to washing. In both spray drying and emulsification/solvent evaporation methods, the wall materials are dissolved in a solvent, the core materials are emulsified, and the active ingredient is encapsulated by the shell material after solvent evaporation. Simple emulsions or multiple reverse-phase solvent evaporation methods can be used. An advantage of physical methods is the possibility of using biodegradable materials such as acacia gum, chitosan, ethyl cellulose and polylactic acid.

Formulation Composition

Microcapsules have to be formulated for application on woven or non-woven textiles without significantly altering the feel or colour of the textile products. Formulation additives typically include binders, crosslinking agents, organic or inorganic pigments and fillers, defoamers and/or other surfactants, and viscosity control agents/thickeners.
Binders play a crucial role in microcapsule formulations for textiles. They largely determine the quality, durability and washability of textile materials containing microencapsulated ingredients. Binders are typically selected from a small number of polymer groups. The use of acrylic or polyurethane binders has predominated in the finishing of textiles with microcapsules in the last five years, but it must be emphasised that cotton finishing by so-called chemical grafting, using citric acid (CA) or butane-1,2,3,4-tetracarboxylic acid (BTCA) as crosslinker between microcapsule and cotton fibre and sodium hypophosphite as a catalyst [73,116,131,135,142,143], is on the rise due to the increasing use of biodegradable polymers (chitosan alone, chitosan-gum Arabic or gelatine-gum Arabic) as shell-forming materials in microcapsule production. Increased environmental awareness has promoted the use of sustainable and biodegradable polymers in the finishing of textiles with microcapsules and the production of functional textiles. The advantage of using chemical grafting instead of polymeric binders is the flexibility and breathability of the textiles, which are retained after application. In contrast, polymeric binders form a binder layer during curing which can significantly reduce the air permeability of the fabric, change its tensile strength, increase stiffness and reduce softness [73].

Durability of Coatings

The design of functional textiles for single use does not necessarily require the study of all types of durability to the same extent as for textiles for long-term use. When designing functional textiles with microcapsules, the various basic resistances standardised in the textile industry, such as resistance to rubbing, light, washing and wet and dry cleaning, should be considered.

As mentioned above, the application of microcapsules to textiles requires the addition of a binder, as a microcapsule shell is not able to interact strongly with the functional groups of textile fibres. An exception is the chemical grafting of cotton with citric acid, where citric acid is used as a non-toxic crosslinker to covalently bind the microcapsule wall material to hydroxyl groups of cotton via ester bonds [73,131]. During curing, the binder forms a thin, elastic and transparent layer on the textile surface in which the microcapsules are enclosed. Therefore, the adhesion between the binder layer and the textile substrate plays a crucial role.

The durability of the microcapsules and the maintenance of the functionality of the textile during its lifetime depend on the resistance of the binder layer to washing, dry and wet cleaning, rubbing and light. It should be emphasised that washfastness properties of textiles are standardised by ISO 105-C01 [131,139], ISO 105-C10 [141], ISO 105-C06, ISO 6330 and AATCC TM61 [77,89,121,136,145]. The standard ISO 105-C01 is no longer valid and has been replaced by ISO 105-C10. The use of non-standardised test methods does not provide reliable insight into the actual behaviour of a functional textile during care and wear, but can provide a rough estimate.
When functional textiles are directly involved in a domestic [70] or industrial washing, wet cleaning or dry-cleaning process [28], a more realistic assessment of the durability of the textile can be achieved, as the textile is exposed to real-life care conditions (detergent, mechanical action, abrasion, temperature, time and solvent). Moreover, the washing or cleaning conditions are standardised by the machine manufacturer.

Figure 10 shows SEM images of a polyester fabric coated with photochromic microcapsules before and after washing, wet cleaning and dry cleaning. It can be clearly seen that the cleaning process strongly affected the adhesion between the binder layer, in which the microcapsules were enclosed, and the polyester fibres, resulting in the loss of the functional coating.

Coating Techniques

Various techniques can be used to apply microcapsules to textiles; patents and published articles describe a range of application techniques for incorporating microencapsulated compounds onto textiles. Instead of coating techniques, some researchers have developed and used other technological solutions for incorporating microcapsules into textile products.

Biodegradability of Synthetic Materials

In general, high molecular weight synthetic polymers such as urea-formaldehyde resins, nylon, polyvinyl chloride, polystyrene and polyethylene are classified as non-biodegradable plastics [165]. However, some microbial research has addressed the question of whether and to what extent microorganisms can degrade plastics in the environment and has focused on biodegradation and biotreatment of plastic wastes. A recent review by Danso et al. [166] summarized current knowledge on microbial plastic degradation and indicated that microorganisms and enzymes can act on some high molecular weight polymers of polyethylene terephthalate and ester-based polyurethane at moderate turnover rates, while no efficient enzymes are known for the high molecular weight polymers of polystyrene, polyamide, polyvinyl chloride, polypropylene, ether-based polyurethane and polyethylene.

Although, to our knowledge, no biodegradability studies have been published specifically for aminoaldehyde microcapsules prepared by in situ polymerization or for microcapsules synthesised by interfacial polymerization with polyamide, polyurethane, polyurea or polyester shells, some publications have described biodegradability studies conducted with these synthetic polymeric materials in the natural environment or under controlled laboratory conditions. The following literature review provides examples of research studies on the biodegradability of generally non-biodegradable synthetic polymers.

Melamine-Formaldehyde Resins

Otake et al. [165] studied the biodegradation of polymers buried in soil for over 32 years. Remarkable degradation was found only for thin films of low-density polyethylene in direct contact with soil, while no evidence of biodegradation was found for polystyrene, polyvinyl chloride and urea-formaldehyde resin.
However, from the wastewater of an aminoplast industrial plant, El Sayed et al. [167] isolated a novel bacterial strain, Micrococcus sp. MF-1, that was able to use melamine-formaldehyde resin as its main carbon and nitrogen source. Melamine, cyanuric acid and biuret were detected as intermediate metabolites in the filtrate of the culture. Biodegradation of the resin proceeded via successive deamination reactions of melamine to cyanuric acid, which was hydrolysed to biuret and finally to NH3 and CO2.

Polyurethane and Polyester-Polyurethane

Polyurethane, especially polyester-polyurethane, appears to be more susceptible to microbial infestation. Nakajima-Kambe et al. [168] analysed reports on the degradation of polyester-polyurethane by microorganisms and fungi and concluded that biodegradation occurs mainly through the hydrolysis of ester bonds by esterases.

Ibrahim et al. [169] reported that, out of 70 fungal isolates recovered from soils, wall paints and plastic wastes from different habitats, 35 isolates showed potential to degrade polyester-polyurethane. Six of these isolates (Fusarium solani, Alternaria solani, Spicaria spp., Aspergillus fumigatus, Aspergillus terreus and Aspergillus flavus) grew on basal salt media amended with polyester-polyurethane as the sole carbon source. Maximum degradation activity was achieved by the isolate Aspergillus flavus, which caused a weight loss of 94% of the polyester-polyurethane pieces.

Khan et al. [170] isolated and characterised polyester-polyurethane-degrading fungi from the soil of a general municipal solid waste landfill. Among them, a novel polyester-polyurethane-degrading fungus was isolated and identified as Aspergillus tubingensis. Khan et al. [171] also investigated the ability of the fungus Aspergillus flavus G10, isolated from the gut of the common cricket Gryllus bimaculatus, to biodegrade polyurethane. Biodegradation was maximal when the fungus was cultured on a malt extract medium.

Brunner et al. [172] reported on the ability of some fungal strains to degrade plastics. Three litter-saprotrophic fungi found on floating plastic waste on the shoreline of a lake, Cladosporium cladosporioides, Xepiculopsis graminea and Penicillium griseofulvum, and the plant pathogen Leptosphaeria sp. were able to degrade polyurethane. In addition, two other litter-saprotrophic fungi, Agaricus bisporus and Marasmius oreades, which were not isolated from floating plastic waste, also showed the ability to degrade polyurethane.

Similarly, Russel et al. [173] screened numerous endophytic fungi for their ability to degrade polyester-polyurethane. While several isolates showed the ability to efficiently degrade polyester-polyurethane in both solid and liquid suspensions, particularly strong activity was observed in isolates of the genus Pestalotiopsis. Two isolates of the species Pestalotiopsis microspora were uniquely capable of growing on polyester-polyurethane as a sole carbon source under both aerobic and anaerobic conditions and were found to be promising sources of biodiversity useful for bioremediation.

Nylon/Polyamides

The biodegradation of nylon membranes by lignin-degrading fungi was studied by Deguchi et al. [174]. The white rot fungal strain IZU-154 oxidatively degraded nylon-66 membrane under ligninolytic conditions. The nylon-degrading activity was closely related to the ligninolytic activity of the fungus.
Negoro [175] reviewed the degradation of nylon oligomers and reported that two strains that originally lacked metabolic activity for nylon oligomers, namely Flavobacterium sp. KI725 and Pseudomonas aeruginosa PAO1, developed the ability to degrade nylon oligomers as xenobiotic compounds by selective cultivation with nylon oligomers as the sole carbon and nitrogen source.

According to Sudhakar et al. [176], the marine bacteria Bacillus cereus, Bacillus sphericus, Vibrio furnisii and Brevundimonas vesicularis were shown to degrade nylon 6 and nylon 66 in a mineral salt medium, with the polymer being the sole carbon source.

Polyesters

Kim and Rhee [177] published a review on fungal degradation of microbial and synthetic polyesters and discussed the ecological significance and contribution of fungi to the biological recycling of polymeric waste materials in the biosphere. In general, aromatic polyesters are more resistant to microbial attack. In contrast, due to their potentially hydrolysable ester bonds, most aliphatic polyesters can be mineralized by several aerobic and anaerobic microorganisms that are widely distributed in nature. To obtain useful biomaterials and reduce the environmental pollution caused by non-degradable polymers, biodegradable polyesters have been developed, such as polyhydroxyalkanoates, poly(ε-caprolactone), poly(L-lactide), and aliphatic and aromatic polyalkylene dicarboxylic acids.

While aromatic polyesters, such as poly(ethylene terephthalate), have excellent material properties but are resistant to microbial attack, many biodegradable aliphatic polyesters lack technological properties important for application. Aliphatic-aromatic copolyesters have been developed to combine good material properties with biodegradability. Müller et al. [178] reviewed the attempts to combine aromatic and aliphatic structures in biodegradable polyester plastics and evaluated the degradation behaviour and environmental safety of biodegradable polyesters containing aromatic components.

Based on the results of these studies, it can be assumed that synthetic microcapsule shells, such as those made from amino resins, polyurethane or polyamide, are generally not readily biodegradable in the environment, but can be biodegraded by selected and adapted strains of microorganisms and fungi.

Biodegradable Polymers

The rapid development of the textile industry and the use of non-biodegradable and non-biocompatible materials have had a negative impact on the environment. Due to this negative impact, biodegradable polymeric materials have been increasingly used in the last decade [179].

The rate and degree of biodegradation of fibre-forming polymers depend on several factors, of which the following are important: the properties of the fibre-forming polymers (chemical structure, molecular mass, degree of polymerization, crystallinity, degree of orientation and the hydrophilicity/hydrophobicity of the textile material), the environment (presence of oxygen, temperature, humidity, pH, light and the presence of metals and salts) and the microbial flora in a given environment, with appropriate secreted enzymes for the degradation of the polymers [180].
Polysaccharides, especially cellulose, are widely used in the textile industry due to their nontoxicity, biodegradability and biocompatibility [182]. Cotton, a natural cellulose fibre, is the most used material. Due to its specific structure, cotton becomes stronger when it is wet, which makes the material suitable for textiles that need to be washed frequently. Due to the numerous functional groups on the chains, the structure can be chemically modified to improve the chemical, physical and biological properties [183].

Biodegradability Testing

From the large number of standards available for testing the biodegradability of various materials, the following standardised test methods have been developed and used specifically for evaluating the biodegradability of textile materials:
- ISO 21701:2019 Textiles - Test method for accelerated hydrolysis of textile materials and biodegradation under controlled composting conditions of the resulting hydrolysate,
- ISO 11721-1:2001 Textiles - Determination of resistance of cellulose-containing textiles to micro-organisms - Soil burial test - Part 1: Assessment of rot-retardant finishing,
- ISO 11721-2:2003 Textiles - Determination of the resistance of cellulose-containing textiles to micro-organisms - Soil burial test - Part 2: Identification of long-term resistance of a rot-retardant finish,
- AATCC TM30:2013 Antifungal activity, assessment on textile materials: Mildew and rot resistance of textile materials, Test 1 soil burial,
- ASTM D5988-18 Standard test method for determining aerobic biodegradation of plastic materials in soil.

The soil burial test has been the most used in published articles [180,184-190]. In this test, the sample is buried in soil for a certain time under specific conditions (temperature, humidity, pH) specified in the standard. After the specified burial time, the samples are removed from the soil, rinsed and dried. The burial time is specified in the standards by the loss of the maximum tensile strength of the tested sample, which can be 80% or 90% depending on the standard. Figure 11 shows the biodegradation of chemically bleached cotton fabric in the soil burial test (ISO 11721-1:2001), where the biodegradation of the fibres is visually apparent and accelerated with increasing burial time. During the biodegradation process, many changes occur, and the textile material exhibits significant optical and other morphological changes. In studies of textile biodegradation, the colour change of the buried textile material is evaluated spectrophotometrically by calculating the colour difference between the unburied and buried samples; surface changes of fabrics are characterised by optical microscopy; morphological changes of fibres are characterised by scanning electron microscopy (SEM); changes in fibre crystallinity and internal structure by X-ray diffraction (XRD); the chemical structure of the textile material or its functional groups by Fourier transform infrared spectroscopy (FTIR); the change in thermal stability of the buried textile material by thermogravimetric analysis (TGA); and mechanical changes by a mechanical test that determines the loss of breaking strength of the textile material [185,187-191].
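The spectrophotometric colour change mentioned above is commonly expressed as a CIELAB colour difference. As a minimal sketch (the cited studies may use other colour-difference formulas), the widely used CIE76 difference ΔE*ab between an unburied and a buried sample can be computed as follows; the L*a*b* readings are hypothetical values for illustration only:

```python
import math

def delta_e_cie76(lab_ref, lab_sample):
    """CIE76 colour difference between two CIELAB colours.

    lab_ref, lab_sample: (L*, a*, b*) tuples measured with a
    spectrophotometer on the unburied and buried fabric samples.
    """
    dL = lab_ref[0] - lab_sample[0]
    da = lab_ref[1] - lab_sample[1]
    db = lab_ref[2] - lab_sample[2]
    return math.sqrt(dL**2 + da**2 + db**2)

# Hypothetical readings: bleached cotton before and after soil burial.
unburied = (92.1, -0.8, 2.3)   # bright, nearly neutral white
buried = (78.4, 1.9, 9.6)      # darkened and yellowed by microbial growth

print(f"dE*ab = {delta_e_cie76(unburied, buried):.1f}")  # ~15.8, a clearly visible change
```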
Previous studies on the biodegradation of cotton fabric using the soil burial test have shown that biodegradation is very rapid in untreated cotton, whereas the biodegradation of treated cotton depends on the finishing treatment [184,191]. Crosslinked finishes on cotton fabrics showed lower biodegradation than non-crosslinked finishes [184]. Breaking strength results indicated that finished cotton textiles degraded more slowly than raw cotton fabrics; the prolongation of the degradation time depended on the finishing treatment [191]. Fabric construction parameters (weave, linear density and thickness) were found to affect biodegradability. Fabrics with a looser weave and lower linear density showed greater loss of tensile strength than fabrics with a denser weave and higher linear density, and thinner fabric degraded faster than thicker fabric [177]. The hairiness of the fabric surface and the relative tightness of the yarn twist had a major effect on the speed of cellulolytic attack [187]. The inhomogeneity of textile fibres (amorphous/crystalline regions, surface porosity, fibre diameter, damage, etc.) could be the cause of the uneven biodegradation of the fabric [188]. The degradation of cotton resulted in a change in fabric colour [186,187,189,190], which was closely related to the burial time and finishing. The degree of polymerization of the cotton fabric decreased with increasing burial time. The intensity of the bands at 1640 and 1548 cm−1 in the FTIR analysis corresponded to the amide I and amide II groups as a result of protein production by microbial growth on the fibres [189,190]. An increase in temperature and moisture content in the soil accelerated the biodegradation process, as the microorganisms in the soil became more active [180,187].

Biodegradable Microcapsules for Functional Textiles

One of the major challenges in the functionalization of textile materials is the production of biodegradable textiles containing biodegradable microcapsules. The extent to which this area remains unexplored can be seen in Figure 12, which shows the number of scientific articles and patents on microencapsulation for biodegradable and eco-friendly textiles. Compared to Figure 1, in 2020, for example, biodegradable microcapsules were mentioned in one out of eighteen articles and one out of twelve patents on the topic of microencapsulation for textiles. In the mid-1990s, a scientific article first mentioned that the use of microcapsules would contribute to the biodegradability of medical textiles in the future [179]. In recent years, the interest of scientists in the development of biodegradable textiles containing microcapsules has increased significantly, especially for the production of antimicrobial microcapsules, dominated by essential oils and plant extracts.

Microcapsules made from biodegradable materials have been applied to natural textile materials, mostly cotton [19,73,180]. The choice of techniques for the preparation of biodegradable microcapsules is even more limited, since chemical polymerization methods are not applicable. Therefore, physico-chemical and physical methods have been used (Table 4). Among the biodegradable polymers used to form the microcapsule shell, chitosan, gum Arabic and alginate predominate, while polylactic acid, soy lecithin, cholesterol and β-cyclodextrin are used to a lesser extent.
The degradability of microcapsule cores depends on the ingredients used. An overview of the biodegradable microcapsules in Table 4 shows that the core material is usually composed of natural compounds which, through controlled release from the microcapsule into the environment, impart specific functional activity to the textile, such as antimicrobial, anti-inflammatory, insect repellent or fragrance properties. Essential oils such as basil, cinnamon, clove, citronella, limonene, lavender and vanillin predominate among the core materials, along with other natural ingredients such as rice oil, wintergreen oil (methyl salicylate), propolis and capric acid as a saturated fatty acid. Essential oils are natural, volatile and aromatic liquids, extracted mainly from plants, and are easily degradable in the environment under suitable conditions, namely light, heat and atmospheric oxygen [192].

Cotton fabric, as a biodegradable textile, is commonly used as the textile material for microcapsule application. The most commonly used method for applying biodegradable microcapsules to textiles is the pad-dry-cure or pad-dry process, although screen printing may also be used. The type of coating technique can affect the rate of biodegradation, particularly if the application of the microcapsules and the other chemicals required for adhesion of microcapsules to fibres increases the hydrophilicity of the textile. A hydrophilic textile surface is more susceptible to moisture absorption, which in turn accelerates the rate of textile biodegradation by microorganisms and fungi. Padding is usually accomplished by immersing the textile in a padding bath containing a suspension of microcapsules of varying concentration, with or without added binders, to ensure adhesion of the microcapsules to the fibre. The immersion can last from one to several minutes, depending on the characteristics of the textile (linear density, hydrophilicity and thickness) and its ability to absorb the padding bath. The fabric wet pick-up is controlled by the pressure of the rollers during squeezing of the padded fabric.
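Wet pick-up, the key padding parameter mentioned above, is conventionally reported as the mass of liquor retained per unit of dry fabric mass. A minimal sketch of this standard textile calculation follows; the fabric weights and bath solids content are hypothetical, and the add-on estimate assumes the retained liquor has the same solids content as the bath:

```python
def wet_pickup_percent(dry_weight_g, padded_weight_g):
    """Wet pick-up: liquor retained after squeezing, as % of dry fabric mass."""
    return (padded_weight_g - dry_weight_g) / dry_weight_g * 100.0

def dry_addon_g(dry_weight_g, padded_weight_g, bath_solids_percent):
    """Approximate solids (microcapsules plus binder) deposited on the fabric,
    assuming the retained liquor matches the bath's solids content."""
    liquor_g = padded_weight_g - dry_weight_g
    return liquor_g * bath_solids_percent / 100.0

dry, padded = 100.0, 180.0                 # g, fabric before/after padding and squeezing
print(wet_pickup_percent(dry, padded))     # 80.0 % wet pick-up
print(dry_addon_g(dry, padded, 5.0))       # 4.0 g solids per 100 g fabric
```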
In recent years, synthetic additives in the padding bath (acrylate binders or the even less environmentally friendly dimethylol dihydroxyethylene urea, wetting and softening agents) have been successfully replaced by more promising green products based on chemical grafting of cellulose with citric acid as crosslinker and monosodium phosphate as catalyst. Microbial degradation of acrylic polymers is possible for some types of polymers and to some extent. The biodegradation of acrylic polymers depends on the structure of the polymer, such as C-C backbone length, side groups, quaternary carbons and molecular organisation (linear, branched or cross-linked), as well as on the microorganisms and the environment in which the process is carried out and the techniques used to quantify acrylic polymer degradation. Other characteristics, such as the purity of the product and the degree of hydrolysis, also influence the assessment of acrylic polymer biodegradability [193]. On the other hand, dimethylol dihydroxyethylene urea reduces the hydrophilicity of the cotton fabric due to a cross-linking reaction that takes place in the amorphous regions of the fibre. The finished cotton is less wettable and can absorb less moisture from the environment, which is one of the factors that accelerate biodegradation. In addition, dimethylol dihydroxyethylene urea impairs the growth conditions for microorganisms and thus delays the biodegradation of cotton fabric [190]. It should be emphasised that there is no clear limit between what is biodegradable and what is not, as some polymers cannot be degraded in natural environments, sludge or landfills, but only in a specific artificial environment by selected microorganisms and fungi, as already discussed in Section 7.1.

In practice, achieving the biodegradability of products often results in diminished or limited technological performance. Therefore, functionalized textiles containing biodegradable microcapsules should be tested for their resistance to washing, rubbing and light, especially if the functional textiles are intended for daily use. Although a few studies tested the wash resistance of functionalized textiles, the test methods used were poorly described or not standardised. Future work should focus on testing the various durability properties of functional textiles, including those with biodegradable microcapsules, using only standardised methods.

In the available literature, there are only a limited number of studies [194-196] that focussed on and specifically investigated the biodegradation of microcapsules. Since there is no standardised test method to evaluate the biodegradability of microcapsules, the next step would be to develop guidelines for testing or to create a new standard. (Table 4 lists examples such as pad-pre-dry-cure application for antimicrobial cotton woven fabric [70], and cinnamon and clove essential oil applied by pad-dry-cure with citric acid or a commercial binder.)

Opportunities for Further Research

According to available market reports, the microcapsules market is estimated to reach USD 8.4 billion in 2021 and USD 13.4 billion by 2026 [200], or USD 17.31 billion by 2027 [201], at intensive compound annual growth rates of 9.8% from 2021 to 2026 [200] and 11.7% from 2020 to 2027 [201] for various vertical end-uses such as pharmaceuticals and healthcare, food, home and personal care, textiles, agrochemicals and others [200].
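The growth rates quoted from the market reports can be sanity-checked with the standard compound-annual-growth-rate formula, CAGR = (end/start)^(1/years) - 1. A quick sketch using only the report figures quoted above:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Market report [200]: USD 8.4 billion (2021) -> USD 13.4 billion (2026).
print(f"{cagr(8.4, 13.4, 2026 - 2021):.1%}")  # 9.8%, matching the quoted rate
```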
Research and development should focus on the production of environmentally friendly, biodegradable microcapsules that are less harmful to the environment than the classic synthetic shell materials, which are difficult to degrade and pose a serious environmental problem in the long term. More effective adhesion between microcapsules and textile fibres must be developed to reduce the loss of microcapsules into the wastewater during the washing process.

There is a need to move away from non-degradable synthetic materials, not only in the synthesis of microcapsules but especially in the production of textile substrates, which contribute to the accumulation of solid waste and to microplastic pollution of habitats via textile laundering wastewater [202,203]. However, it should be highlighted that the cultivation of cotton for cellulosic fibres, on the other hand, requires large amounts of water for plant growth, with intensive use of fertilisers, pesticides and defoliants, all of which pose environmental challenges [204].

The classical textile pre-treatment processes of desizing, scouring and bleaching, which are crucial for making textiles suitable for adsorption of microcapsules, textile auxiliaries, dyes and pigments, need to be changed towards the use of environmentally friendly chemicals such as amylases, pectinases and hydrogen peroxide [205,206]. Functionalization of textiles can be achieved using classical finishing agents and methods, with or without microcapsules, to provide water- and oil-repellent, flame retardant or antimicrobial properties. Another option is the application of nanoparticles or microcapsules using more sustainable and environmentally friendly technologies, namely plasma [207] and sol-gel technology [208].

Conclusions

In the production of functional textiles, microencapsulation is used to improve properties or provide completely new functionalities, resulting in broader usability and higher added value of the products.

Trends in publications on microcapsules for functional textiles show a growing number of new scientific articles and patent documents, indicating strong interest in this field. The main functionalities achieved with microcapsules in textile coatings include thermochromic and photochromic effects, flame retardancy, improved thermal regulation, superhydrophobicity, UV absorption, insecticidal and insect repellent effects, prolonged release of fragrances, antimicrobial properties and special medical or cosmetic effects.

Microencapsulation has been used to impart properties to textiles that are not possible or cost-effective with other technologies. For example, liquids cannot otherwise be retained on textiles. Permanent separation of a core material without release allows liquid actives to be converted into solid discrete particles that remain functional in the textile coating throughout the life of the product, as in the case of paraffinic PCMs or photochromic substances. Volatile compounds would evaporate too quickly from textiles; microcapsules with permeable shells allow sustained release by diffusion, as in the case of essential oils. Rapid release from microcapsules can be planned and triggered by external stimuli, namely pressure, abrasion or combustion, as in the case of pressure-sensitive fragrance textiles or flame-resistant textile products.
Not all microencapsulation methods are specifically suitable for textile applications. Commonly used methods include in situ and interfacial polymerization, simple and complex coacervation, molecular inclusion and solvent evaporation from an emulsion. Each of these methods has advantages and disadvantages. Aminoaldehyde microcapsules prepared by in situ polymerization are widely used due to their excellent technological properties, including high impermeability, durability and shell resistance to chemical agents. However, aminoplast shells release small amounts of formaldehyde and are not readily degradable in the environment. Physico-chemical and physical methods allow the use of environmentally friendly shell materials that are safer to use and degrade more rapidly. However, lower resistance to technological parameters in the application of microcapsule coatings and in the washing of textile products remains an important obstacle.

Most of the traditional coating processes have been used for microcapsule-containing coatings, provided that the microcapsules are small enough and can withstand the process parameters such as temperature and pressure. Binders play a crucial role in coating formulations. Acrylic and polyurethane binders have become popular in textile finishing, while organic acids and catalysts for chemical grafting are gaining ground as crosslinkers between microcapsule shells and cotton fibres. There still seems to be much room for improvement and many challenges to be solved in this field to increase the durability of microcapsules on textiles during the use and maintenance of textile products.

There are standardised test methods recommended to evaluate the stability of microcapsules during the use and care of textiles, especially the standards for washfastness properties, such as ISO 105-C06, ISO 105-C08, ISO 105-C09, ISO 105-C10 and ISO 6330. The use of non-standardised test methods can only provide rough estimates, which makes comparison between different studies difficult.

One of the greatest challenges for research in the field of functionalization of textiles is the production of environmentally friendly biodegradable textiles containing biodegradable microcapsules. This area is just beginning to emerge and is still largely unexplored. The choice of techniques and materials to produce biodegradable microcapsules and coatings is even more limited, as chemical polymerization methods and synthetic polymers are mostly not applicable. So far, published studies have used physico-chemical and physical microencapsulation methods with natural polymers such as chitosan, gum Arabic and gelatine or biodegradable synthetic polymers such as PLA. The biodegradability of microcapsules before and after their application on textile substrates needs further investigation. One of the main obstacles to the widespread application of biodegradable microcapsules in functional textiles is their unsatisfactory durability and resistance to washing, rubbing and light, which are crucial from the perspective of textile care and lasting functionality. Further research should investigate the possibilities of introducing new biodegradable materials for microcapsule shells and coating compositions with improved technological properties, or of using functional groups on the microcapsule shell to enable the formation of covalent bonds with the functional groups of biodegradable textiles, so that higher adhesion between microcapsule and fibre can be achieved.
Figure 1. Trends in the number of new scientific articles in the Web of Science database [10] and patent documents in the Espacenet database [11] on microencapsulation for textiles. Search query: (microcapsule* OR microencapsulat*) AND (textile* OR cloth OR fabric OR garment*).

Figure 2. Origin countries of scientific articles in the Web of Science database (a) [10] and patent documents in the Espacenet database (b) [11] on microencapsulation for textiles. Search profile: (microcapsule* OR microencapsulat*) AND (textile* OR cloth OR fabric OR garment*); analysis by country.

Figure 4. Main purposes for the use of microcapsules in functional textiles, in terms of shell permeability, release mechanisms and active ingredients.

Figure 9. Complex coacervation microcapsules with exclusively natural ingredients: core of citronella oil and shells of gelatine and gum Arabic cross-linked with tannin (authors' archive).

Figure 12. Trends in the number of scientific articles in the Web of Science database [10] and patent documents in the Espacenet database [11] on the topic of microencapsulation for biodegradable and environmentally friendly textiles. Search query (Web of Science): (microcapsule* OR microencapsulat*) AND (textile* OR cloth OR fabric OR garment*) AND (biodegrada* OR eco-friend* OR green OR biopolymer* OR biocompatib*). Search query (Espacenet): microcapsule* AND textile* environment*.

Table 1. Chemical microencapsulation methods for textile functionalization: overview of published examples.

Table 2. Physico-chemical microencapsulation methods used for textile functionalization: examples of processes and materials.

Table 3. Physical microencapsulation methods used for textile functionalization: examples of processes and materials.

Table 4. Biodegradable microcapsules for functionalization of biodegradable textiles.
Potential reduction of energy consumption in a public university library

Efficient electrical energy usage has been recognized as an important factor in reducing the cost of electrical energy consumption, and various parties have emphasized its importance. Inefficient usage of electrical energy is among the biggest factors increasing administration costs at Universiti Tun Hussein Onn Malaysia (UTHM). With this in view, a project investigating the potential reduction of electrical energy consumption at UTHM was carried out. In this project, a case study of the electrical energy consumption of Perpustakaan Tunku Tun Aminah was conducted. The scope of this project is to identify energy consumption in the selected building and to find the factors contributing to wastage of electrical energy. MS1525:2001, the Malaysian Standard Code of Practice on energy efficiency and use of renewable energy for non-residential buildings, was used as reference. From the results, four saving measures were proposed: changing the lamp type, installing sensors, decreasing the number of lamps and improving the shading coefficient of the glazing. These saving measures are suggested to improve the efficiency of electrical energy consumption. Improving human behaviour toward energy saving accounts for 10% of the total cost saving, while building technical measures account for 90%.

Introduction

Energy sources are among the basic requirements that humans have used since their existence on earth. The rapid development of human civilization demands huge usage of non-renewable energy resources. According to the International Energy Agency, energy demand will grow by 36% between 2008 and 2035 [1]. Efficient usage of energy is one approach that is being studied and implemented. As Malaysia moves towards the status of a developed nation in 2020, its energy requirements will become more intensive; energy consumption is divided into several main sectors, namely building, commercial, industrial and transportation [2]. The building sector is among the major energy consumers in the country. The rapid growth of Universiti Tun Hussein Onn Malaysia (UTHM) leads to an increase in electrical energy usage; it is expected that the university will expand to six times its current size. Hence, it is crucial to study the potential reduction of energy consumption. The objectives of this study are to investigate the potential reduction of energy consumption, to investigate the causes of wastage of electrical energy and to propose steps or opportunities for energy saving.

Case Study

The Tunku Tun Aminah Library building was selected as the case study in this research. The library is located on the main campus of UTHM in Parit Raja, Batu Pahat, Johor, at 2° North and 103° East. The library is a five-storey building with a total floor area of 16,000 m² and a capacity of 4,000 students. It was designed to store 300,000 books. The typical equipment used in the library includes air conditioning units, lamps, computers, laptops, photocopy machines, printers and other small equipment. The building is oriented such that its longitudinal axis is aligned with the North West-South East axis. The UTHM Library is shaded from North East and South West exposure due to the presence of adjacent buildings.
Table 1 shows the general information of the library building. The library building's operation can be categorised into three main purposes: first, to provide a study area for university students; second, to host teaching and learning activities, with facilities such as seminar rooms and an auditorium; and third, to function as an administration office for top management, with discussion and meeting rooms.

Electrical energy consumption pattern

Electricity consumption in the Tunku Tun Aminah Library was studied by analysing the actual electricity consumption from the electric bills issued by TNB. Electric tariff bills are issued at the beginning of every month to UTHM through Pejabat Pentadbiran Hartabina.

Airflows. Airflow, or air leakage, is one of the cooling load components. Air leakage occurs when outside air enters and conditioned air leaves uncontrollably through cracks and openings [5]. This factor increases the cooling load and electricity consumption. Observation identified some of the causes of air leakage: the toilet doors on floor 4 and the windows at the locker area are left open. Table 4 shows the energy loss due to airflows.

Usage of air conditioning in unused rooms. The use of air conditioning in unused rooms contributes to wastage of electrical energy. Based on the observation conducted, it was found that the Permata Hikmah Library uses air conditioning for 13 hours a day even though that library only operates for 3 hours per day. This library should instead use split-unit air conditioning to control the room temperature. Table 5 shows the amount of energy wastage due to this factor (reported as 4,100 and 49,200, presumably kWh per month and per year, respectively).

Misuse of electric equipment. The ground floor contains a 24-hour study room that operates every day, with facilities such as fans, air conditioners, tables and chairs. Based on the observation conducted, some users still switch on the fans even though the air conditioning is already operating. This situation contributes to the waste of electricity. Table 6 shows the amount of energy wastage due to this factor.

Energy saving opportunity

Key figures for energy consumption and the energy saving potential of the Tunku Tun Aminah Library were derived from observation, review of relevant literature and calculation. It can be concluded that the main energy consumer is the air conditioning system, followed by the lighting system. Several energy saving approaches are possible, including minimising the number of lamps, minimising lamp usage, installing sensors and changing the lamp type. Table 7 illustrates the amount of energy saving for each type of measure.

Conclusion and recommendation

There are two types of potential cost saving: with little or no cost, and with medium or high cost. Rescheduling the air conditioning system and lighting operation is considered a no-cost saving measure. Retrofitting the lighting system would typically bring savings of up to RM8,990 per month, but this measure requires a medium initial investment. Improving the shading coefficient with window film would save up to RM11,680 per month. In conclusion, if all the recommendations are implemented, it is estimated that up to RM20,279 per month could be saved. Further economic analysis is suggested to determine the payback period and return on investment.
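The suggested economic analysis reduces, in its simplest form, to a payback calculation: the initial investment divided by the monthly saving. The sketch below uses the monthly savings quoted above, but the investment costs are hypothetical placeholders, since the study does not report them:

```python
def simple_payback_months(investment_rm, monthly_saving_rm):
    """Simple payback period in months (ignores discounting and inflation)."""
    return investment_rm / monthly_saving_rm

# Monthly savings quoted in the study; investment costs are assumed values.
measures = {
    "Lighting retrofit": (150_000, 8_990),       # (assumed cost RM, saving RM/month)
    "Window film (shading)": (200_000, 11_680),
}
for name, (cost, saving) in measures.items():
    months = simple_payback_months(cost, saving)
    print(f"{name}: payback ~ {months:.1f} months ({months / 12:.1f} years)")
```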
Red ginseng dietary fiber promotes probiotic properties of Lactiplantibacillus plantarum and alters bacterial metabolism

Korean red ginseng has been widely used as an herbal medicine. Red ginseng dietary fiber (RGDF) is a residue of the processed ginseng product but still contains bioactive constituents that can be applied as prebiotics. In this study, we evaluated changes in fermentation profiles and probiotic properties of strains belonging to the family Lactobacillaceae with RGDF supplementation. Metabolomic analyses were performed to understand the specific mechanisms of the metabolic alteration by RGDF and to discover novel bioactive compounds secreted by the RGDF-supplemented probiotic strain. RGDF supplementation promoted short-chain fatty acid (SCFA) production, carbon source utilization, and gut epithelial adhesion of Lactiplantibacillus plantarum and inhibited attachment of enteropathogens. Intracellular and extracellular metabolome analyses revealed that RGDF induced metabolic alteration, especially associated with central carbon metabolism, and produced RGDF-specific metabolites secreted by L. plantarum, respectively. Specifically, L. plantarum showed decreases in the intracellular metabolites oleic acid, nicotinic acid, uracil, and glyceric acid, while extracellular secretion of several metabolites, including oleic acid, 2-hydroxybutanoic acid, hexanol, and butyl acetate, increased. RGDF supplementation had distinct effects on L. plantarum metabolism compared with fructooligosaccharide supplementation. These findings present potential applications of RGDF as a prebiotic and of bioactive compounds produced by RGDF-supplemented L. plantarum as novel postbiotic metabolites for human disease prevention and treatment.

Introduction

Ginseng is the root of plants in the genus Panax and has been widely used as an herbal medicine in Eastern Asia (So et al., 2018). It is typically characterized by the presence of ginsenosides, which are the main bioactive components with antioxidant, antiproliferative, and neuroprotective properties (Tam et al., 2018). In recent years, the biological properties of ginseng have been extensively demonstrated; these include enhanced immune system performance and memory, and improved blood circulation (Geng et al., 2010; Cho et al., 2018; Su et al., 2022). Korean red ginseng (Panax ginseng C.A. Meyer) is a processed product made by the repetitive steaming and drying of fresh ginseng to extend shelf life, reduce toxic effects, and enhance biological benefits (He et al., 2018). Red ginseng is traditionally consumed as a water extract containing a high concentration of ginsenosides. The residues are usually discarded, but they still contain bioactive constituents, such as unextracted ginsenosides, acidic polysaccharides, mineral elements, and dietary fiber (Yu et al., 2020). Many attempts have been made to make use of these residues, including pharmaceutical, health functional food, and cosmetics applications (Truong and Jeong, 2022). Dietary fibers are carbohydrate polymers from plant-derived foods that are not digested by human enzymes or absorbed in the gut. These polymers contribute to human gut health by increasing stool weight and regularity, thickening the contents of the intestinal tract, and promoting the growth of gut microbes (Makki et al., 2018).
In particular, dietary fiber can be a good fermentable source for bacteria within the large intestine and influences the composition of bacterial communities as well as the microbial metabolic activities producing fermentative end products, such as short-chain fatty acids (SCFAs). These prebiotic fermentable fibers promote metabolic interactions among bacterial communities that cross-feed probiotics and inhibit the proliferation of pathogens (Holscher, 2017).

Lactobacillaceae (including the newly defined Lactobacillus-associated genera arising from taxonomic changes, such as Lactiplantibacillus and Limosilactobacillus) and Bifidobacteria are the most well-known genera of probiotic organisms that normally reside in human gastrointestinal tracts. Probiotics are live microorganisms which benefit the host by producing useful physiologically bioactive compounds. These compounds have immunomodulatory, anticarcinogenic, anti-aging, and antimicrobial effects in hosts. However, the use of these compounds is currently limited by a lack of knowledge of their molecular mechanisms, strain-specific behaviors, and safety (Bourebaba et al., 2022). To address these limitations, recent studies have focused on elucidating microbial metabolism and discovering postbiotic molecules, which are defined as metabolic products secreted by probiotics in cell-free supernatants (Nataraj et al., 2020).

Metabolomics is the systematic study of unique chemical molecules, termed metabolites, generated by specific cellular processes (Jordan et al., 2009). Metabolomic data are used for phenotyping molecular interactions, identifying potential biomarkers, and discovering new therapeutic targets. In this study, we aimed to find an effective strategy for utilizing processed red ginseng residue as a prebiotic dietary fiber source and evaluated its prebiotic properties through the changes in growth, metabolism, and epithelial attachment ability of probiotic Lactobacillaceae strains. Comprehensive metabolomic analyses were performed to investigate the effects of red ginseng dietary fiber (RGDF) on bacterial metabolism and to discover novel bioactive compounds secreted by the RGDF-supplemented probiotic strain.

Bacterial strains and media

Limosilactobacillus reuteri KCTC 3594 and Lactiplantibacillus plantarum KCTC 3108 were obtained from the Korean Collection for Type Cultures (KCTC, Jeongeup, Republic of Korea). The strains were pre-cultured in 50 ml of MRS broth (BD Difco, Franklin Lakes, NJ, USA) in 50 ml conical tubes and incubated at 37 °C without shaking (Biofree, Seoul, Republic of Korea) overnight. Cultures of the probiotic strains were then grown at 37 °C in 50 ml of MRS broth supplemented with 0.5, 1, or 2% RGDF (Korea Ginseng Corporation, Daejeon, Republic of Korea). The composition of MRS broth is as follows: 10 g/L proteose peptone, 10 g/L beef extract, 5 g/L yeast extract, 20 g/L dextrose, 1 g/L polysorbate 80, 2 g/L ammonium citrate, 5 g/L sodium acetate, 0.1 g/L magnesium sulfate, 0.05 g/L manganese sulfate, and 2 g/L dipotassium phosphate.

Preparation of RGDF

The residue remaining after water extraction of red ginseng at 87 °C for 24 h was provided by Korea Ginseng Corporation (Daejeon, Republic of Korea). RGDF was prepared from the residue by drying it at 115 °C and pulverizing it to 50 mesh. The physicochemical characteristics of RGDF were analyzed as previously reported, and the same RGDF material was used in this study.
Measurement of bacterial growth and cell mass

Fermentation end products in the culture media were analyzed by high-performance liquid chromatography (HPLC) using the LC-6000 system (FUTECS, Daejeon, Republic of Korea). Each 1.5 ml sample of culture medium was collected by centrifugation (Eppendorf, Hamburg, Germany) at 13,000 rpm for 5 min at 4 °C and filtered through a 0.45 µm nylon membrane filter. HPLC analysis was performed using an Aminex HPX-87H organic acid column (Bio-Rad, Hercules, CA, USA) with 0.005 M H2SO4 as the mobile phase, at a constant elution flow of 0.5 ml/min at 55 °C.

Carbon source utilization analysis

An API kit (BioMérieux, Marcy l'Étoile, France) was used to compare the ability of the probiotic strains to utilize particular carbon sources. Inoculation samples were prepared by collecting cultured strains from each medium at a turbidity greater than McFarland standard 4. One hundred microliters of sample were inoculated into the API strip and incubated at 37 °C for 4 h. After incubation, the reagents were added for reading, incubated for 10 min, and exposed to strong light at 1,000 W for 10 s to decolorize any excess reagent. Identification and interpretation were performed using the numerical profiles.

Analysis of bacterial attachment to intestinal epithelial cells

The Caco-2 cell line was procured from the American Type Culture Collection (ATCC, Manassas, VA, USA) and cultured in Minimum Essential Medium (MEM) supplemented with 10% fetal bovine serum, 100 U/ml penicillin, and 100 µg/ml streptomycin at 37 °C in a 5% CO2 atmosphere. Escherichia coli, purchased from ATCC, was grown in Luria Broth (LB) overnight. E. coli and the RGDF-pretreated probiotic strains were harvested by centrifugation at 5,000 rpm for 10 min, washed twice with sterile PBS, and resuspended in serum- and antibiotic-free MEM. For the adhesion assay, the Caco-2 monolayer was inoculated with approximately 10^8 CFU/ml of L. reuteri or L. plantarum and incubated for 2 h in a 5% CO2 incubator. After incubation, the monolayers were washed three times with sterile PBS to remove non-adherent bacteria. The Caco-2 cells with adherent bacteria were detached using trypsin-EDTA solution. Bacterial counts were performed by the colony counting method on MRS agar plates. The adhesion result was expressed as the percentage of adhered bacteria relative to the initial count of bacteria added. For the competition assay, approximately 10^8 CFU/ml of each probiotic strain and E. coli were co-incubated with the Caco-2 monolayer for 1 h in a 5% CO2 incubator. Non-bound bacteria were then washed away three times with sterile PBS, and the Caco-2 cells with adherent bacteria were detached using trypsin-EDTA solution. The number of viable adherent E. coli was determined using the colony counting method on LB agar plates. The competition index was expressed as the percentage inhibition of E. coli adhesion in the presence of each probiotic strain relative to the adhesion of the bacteria in the absence of probiotic strains.
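The two read-outs defined above are simple ratios of colony counts. A minimal sketch of the calculations follows, with hypothetical CFU numbers (the study reports only the derived percentages):

```python
def adhesion_percent(adhered_cfu, initial_cfu):
    """Adhesion: adhered bacteria as a percentage of the inoculum."""
    return adhered_cfu / initial_cfu * 100.0

def competition_index(ecoli_adhered_with_probiotic, ecoli_adhered_alone):
    """Percentage inhibition of E. coli adhesion caused by the probiotic."""
    return (1.0 - ecoli_adhered_with_probiotic / ecoli_adhered_alone) * 100.0

# Hypothetical counts (CFU/ml) for illustration only.
print(adhesion_percent(2.1e6, 1.0e8))     # ~2.1%, an L. plantarum-like control level
print(competition_index(4.0e5, 1.0e6))    # 60% inhibition of E. coli adhesion
```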
Metabolome analysis

GC-MS has the advantage of greater chromatographic resolution compared to LC-MS and of large spectral libraries, although its chemical range of metabolome coverage is narrower (Aretz and Meierhofer, 2016). Recently, more studies have used LC-MS to detect more peaks, but most of the metabolites identified by LC-MS overlap considerably with those identified by GC-MS, except for lipid molecules with large molecular weights. GC-MS has been the most commonly used technique for metabolite profiling because of its hard ionization method, which is highly reproducible and straightforward for metabolite annotation, and it is still widely applied for metabolite profiling and identification (Baiges-Gaya et al., 2023; Kurbatov et al., 2023; Neag et al., 2023).

For metabolome analysis, intracellular and extracellular metabolites were measured in L. plantarum and L. reuteri grown in MRS medium supplemented with RGDF or fructooligosaccharides, or without supplementation. To extract intracellular and extracellular metabolites from the probiotic strains, each strain was cultured in 15 ml of medium until the mid-exponential phase, determined by measuring its growth curve. Fifteen milliliters of each probiotic culture was centrifuged at 4,000 rpm for 15 min at 4 °C. The supernatant was filtered through a 0.2 µm polyvinylidene fluoride syringe filter for the extraction of extracellular metabolites. Aliquots (750 µL) of the filtered supernatants were mixed with 2.25 ml of 4 °C methanol (GC-grade, 100%; Sigma-Aldrich, St. Louis, MO, USA) and vortexed for 1 min. The mixtures were centrifuged at 13,000 rpm and 4 °C for 10 min, and 0.1 ml of each supernatant was collected and completely dried using a Spin Driver Lite VC-36R (TAITEC Corporation, Koshigaya City, Saitama, Japan) at 2,000 rpm for 24 h. To extract intracellular metabolites from the cell pellet, 1 ml of cold 0.9% NaCl (w/v) was added to the pellet and filtered through a 0.2 µm syringe filter. The pellet was then transferred to a 15 ml conical tube and washed twice with 10 ml of cold 0.9% NaCl (w/v). The final washed pellet was mixed with 2 ml of methanol, vortexed for 10 min, and sonicated for 1 min on ice. The material was mixed with 2 ml of chloroform, vortexed for 10 min, and sonicated for 1 min on ice. Water (1.8 ml) was added, and the vortexing and sonication were repeated. The final mixtures were centrifuged at 13,000 rpm and 4 °C for 10 min, and 0.1 ml of the upper supernatant layer of each was collected and completely dried using the aforementioned Spin Driver Lite VC-36R under the same conditions.

Methoximation and silylation were performed for the derivatization of intracellular and extracellular metabolites. For methoximation, 10 µL of 20,000 ppm methoxyamine hydrochloride in pyridine was mixed with each dried sample and incubated at 30 °C for 90 min. Next, 45 µL of N-methyl-N-trimethylsilyl-trifluoroacetamide (Fluka, Buchs, Switzerland) and 30 µL of fluoranthene as internal standard were added, vortexed for silylation, and incubated at 37 °C for 30 min. Each derivatized sample was transferred to a gas chromatography (GC) vial with an insert. Gas chromatography was performed using a Crystal 9000 chromatograph (Chromatotec, Val-de-Virvée, France) coupled with a Chromatotec-crystal mass spectrometer (photomultiplier detector) for the analysis of untargeted metabolites. One microliter of the derivatized sample was injected into a VF-5MS GC column (Agilent, Santa Clara, CA, USA). The oven temperature was initially held at 50 °C for 2 min, then increased to 320 °C at a rate of 5 °C/min, and held at 320 °C for 10 min. The helium carrier gas flowed at a rate of 1.5 ml/min.

Statistical analysis

For the deconvolution of the mass spectrometry (MS) data and identification of metabolites, MS-DIAL ver. 4.70 was used. All records of the Fiehn RI Library were used to identify metabolites by matching the MS peaks. Based on an n-alkane mixture, the retention index was calculated using the Kovats retention index formula:

RI = 100 × [n + (m - n) × (tri - trn) / (trm - trn)]

where RI is the retention index of a metabolite i; n is the carbon number of the alkane which elutes before i; m is the carbon number of the alkane which elutes after i; tri is the retention time of i; trn is the retention time of the alkane which elutes before i; and trm is the retention time of the alkane which elutes after i.
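A minimal sketch of this calculation in code; the alkane ladder and metabolite retention times are hypothetical values, not data from this study:

```python
import bisect

def kovats_ri(rt_metabolite, alkane_ladder):
    """Linear (temperature-programmed) Kovats retention index.

    alkane_ladder: sorted list of (carbon_number, retention_time_min)
    for the n-alkane standards bracketing the metabolite peak.
    """
    times = [rt for _, rt in alkane_ladder]
    idx = bisect.bisect_right(times, rt_metabolite)
    (n, tr_n), (m, tr_m) = alkane_ladder[idx - 1], alkane_ladder[idx]
    return 100 * (n + (m - n) * (rt_metabolite - tr_n) / (tr_m - tr_n))

# Hypothetical alkane retention times (min) for C10, C12, C14.
ladder = [(10, 12.4), (12, 17.9), (14, 23.1)]
print(kovats_ri(20.5, ladder))  # 1300.0: elutes halfway between C12 and C14
```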
The retention index of each metabolite was compared with the value of standards registered in the NIST 2020 Mass Spectral Library (NIST, Gaithersburg, MD, USA), and metabolites were identified based on their retention indices and mass fragmentation profiles. Uni- and multivariate analyses, principal component analysis (PCA), hierarchical clustering analysis, and metabolite set enrichment analysis (MSEA) were performed using MetaboAnalyst (ver. 5.0). Network analysis, such as MetaMapp, was performed using Cytoscape software.
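The same kind of screen (PCA plus per-metabolite significance testing) can be reproduced outside MetaboAnalyst. A minimal sketch with NumPy, SciPy, and scikit-learn, assuming a samples × metabolites intensity matrix and two groups; all names and the simulated data are illustrative, not the study's pipeline:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.lognormal(size=(12, 106))      # 12 samples x 106 metabolites (simulated intensities)
group = np.array([0] * 6 + [1] * 6)    # e.g., control vs. RGDF supplementation

# PCA on autoscaled log intensities, as is typical for metabolomics
Z = StandardScaler().fit_transform(np.log(X))
scores = PCA(n_components=2).fit_transform(Z)

# Per-metabolite Welch t-tests between the two groups (column-wise)
t, p = stats.ttest_ind(X[group == 0], X[group == 1], equal_var=False)
significant = np.where(p < 0.05)[0]    # indices of candidate discriminating metabolites
print(scores.shape, significant[:5])
```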
Results
RGDF supplementation promotes SCFA production and carbon source utilization in L. plantarum
Red ginseng contains ginsenosides that have important pharmacological roles in cancer, diabetes, and aging (Yuan et al., 2012; Yu et al., 2020; Hong et al., 2022). The by-products of red ginseng processing still contain several types of bioactive components, such as acidic polysaccharides and dietary fiber, as well as the remaining ginsenosides (Park and Kim, 2006). RGDF is a by-product composed of approximately 31% dietary fiber (314.3 mg/g) and 0.66% ginsenosides (6.63 mg/g total ginsenosides). Since dietary fibers are well-known prebiotic ingredients for bacterial growth promotion and probiotic functionality, we first screened the effects of RGDF on the metabolic profiles of probiotic strains, including L. reuteri, L. plantarum, Lactobacillus acidophilus, Lacticaseibacillus casei, and Lactococcus lactis (Supplementary Table 1). We selected the two probiotic strains, L. plantarum and L. reuteri, that were most positively and most negatively affected, respectively, by RGDF supplementation. Although RGDF supplementation slightly enhanced the growth of both probiotic Lactobacillaceae strains, the difference was not significant compared with the control (Figures 1A, B). The pH change of the cultured media also did not differ between the control and RGDF supplementation. To reveal possible associations between RGDF and probiotic functionality, we next measured SCFA production and carbon source utilization profiles with RGDF. L. plantarum enhanced the production of SCFAs, specifically lactate and acetate, with RGDF supplementation in a dose-dependent manner, whereas L. reuteri reduced the production of these metabolites (Figures 1C, D). RGDF also improved the carbon source utilization ability of L. plantarum but had no effect on L. reuteri (Figure 1E). Thus, RGDF supplementation can promote the production of beneficial metabolites (lactate and acetate) and carbon source utilization by L. plantarum.
RGDF supplementation promotes gut epithelial adhesion of L. plantarum and protects against enteropathogens
Dietary fibers help maintain intestinal homeostasis by promoting probiotics, limiting the growth and adhesion of pathogenic microbes, and stimulating fiber-derived SCFA production (Cai et al., 2020). RGDF supplementation significantly increased the adhesion of L. plantarum to gut epithelial cells compared to the control, and the adhesion was most pronounced in the presence of 0.5% RGDF (Figure 2A). In contrast, adhesion of L. reuteri to the gut epithelium was decreased by adding RGDF (Figure 2B). L. reuteri is a probiotic with a well-documented adhesive ability (approximately 30% in the control) (Gao et al., 2016). This behavior was confirmed in the present study; a high percentage of adhesion in the control was evident compared with L. plantarum (approximately 2% in the control). To evaluate the competitive inhibitory effects of RGDF-supplemented strains on the binding of enteropathogenic bacteria to the host epithelium, E. coli and RGDF-pretreated probiotic strains were co-incubated with Caco-2 monolayers (Figures 2C, D). Supplementation with RGDF increasingly reduced E. coli attachment in the presence of both L. plantarum and L. reuteri, with greater differences observed for L. plantarum. Similar to its epithelial adhesion, L. reuteri showed a higher basal level of competitiveness against pathogen attachment than L. plantarum. However, the addition of RGDF significantly improved the gut epithelial adhesion of L. plantarum and its protection against E. coli attachment, which can broaden the applicability of the strain as a probiotic. It should be noted that several factors can affect the epithelial adhesion of the strains, including the presence of surface proteins, auto-aggregation, and bacterial surface hydrophobicity. Bacterial adhesion is based on non-specific physical interactions and aggregation abilities that also form a barrier preventing colonization by pathogens (Kos et al., 2003). Dell'Anno et al. (2021) showed that both L. plantarum and L. reuteri exhibit auto-aggregation and epithelial adhesion; L. plantarum and L. reuteri had higher hydrophobicity and greater auto-aggregation, respectively, reflecting their different colonizing abilities. The collective findings indicate that RGDF supplementation promoted gut epithelial adhesion and had a protective role against enteropathogens in the case of L. plantarum.
RGDF supplementation alters intracellular metabolic profiles of L. plantarum, but not L. reuteri
Although both L. plantarum and L. reuteri utilize dietary fibers as prebiotics, our results indicate that RGDF supplementation was effective in L. plantarum, but not in L. reuteri. To identify the effects of RGDF on bacterial metabolism, we first determined the intracellular metabolome changes between RGDF supplementation and the control in L. plantarum and L. reuteri. A total of 106 metabolites were identified, including sugars, amino acids, fatty acids, organic acids, and polyamines (Supplementary Table 2). The PCA results clearly showed metabolic alterations with 0.5% (w/v) RGDF supplementation in L. plantarum, while the metabolic profile of L. reuteri with RGDF was not different (Figure 3A). The loadings of PC1 and PC2 indicated that fumaric acid (−0.834 at PC1), uracil (−0.924 at PC1), picolinic acid (0.791 at PC1), and 2-hydroxybutanoic acid (0.763 at PC1) were important metabolites determining the metabolic differences between L. plantarum and L. reuteri. MetaMapp, a network graph of metabolites based on biochemical pathways and chemical and mass spectral similarities, displayed significantly altered metabolites (p < 0.05) with RGDF compared to the control in L. plantarum (Figure 3C). MSEA also supported significantly altered bacterial metabolism, especially sugar (galactose, starch, and sucrose) metabolism and unsaturated fatty acid biosynthesis (Figure 3D). Considering the significant increase in lactate and acetate production and carbohydrate utilization in L. plantarum with RGDF (Figure 1), we suggest that glycolytic metabolic flow and membrane flexibility, respectively, can be affected by RGDF supplementation.
In addition, we compared the effect of RGDF on the intensity of each metabolite with that of the control using a volcano plot (Figure 3B). The intensities of four metabolites (oleic acid, nicotinic acid, uracil, and glyceric acid) decreased after RGDF supplementation in L. plantarum (Figure 3E). The relative abundance of these metabolites was also significantly reduced by RGDF, verifying that the metabolic processes associated with these four metabolites were specifically altered by RGDF (Supplementary Figure 1). Together, these findings suggest that L. plantarum, but not L. reuteri, is specifically affected by RGDF supplementation via central carbon metabolism.
RGDF supplementation promotes biosynthesis of specific metabolites in L. plantarum
Postbiotics are nonviable bacterial metabolic products with biological activity in the host (Nataraj et al., 2020). These molecules have several advantages over probiotics with respect to safety and effectiveness, such as triggering only targeted responses by a defined mechanism, better accessibility of microbe-associated molecular patterns, and ease of production and storage (Nataraj et al., 2020). To systematically characterize postbiotic metabolites specifically produced under RGDF supplementation in L. plantarum, we further analyzed the extracellular metabolome of L. plantarum and L. reuteri grown with 0.5% RGDF, defined as the metabolite intensity in spent medium from the bacterial culture relative to the metabolite intensity in the baseline medium (Jain et al., 2012). As shown in the PCA results, the exometabolome profiles were clearly separated between the bacterial strains, as well as between the RGDF supplement and the control (Figure 4A). RGDF-specific, bacteria-derived metabolites were distinguished from media components based on three criteria: (1) the averaged metabolite intensity in the spent medium minus its intensity in the uncultured medium should be positive; (2) the difference between RGDF and control should be statistically significant at the 95% confidence level; and (3) the absolute fold change in metabolite intensity with RGDF compared to the control should be >2. Based on these criteria, we identified four L. plantarum metabolites (oleic acid, 2-hydroxybutanoic acid, hexanol, and sec-butyl acetate) biosynthesized specifically in response to the RGDF supplement (Figures 4B-E).
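The three selection criteria map directly onto a small filtering routine. A sketch with pandas, assuming per-metabolite mean intensities for uncultured medium, control spent medium, and RGDF spent medium, plus a per-metabolite p-value; the column names and toy values are illustrative:

```python
import pandas as pd

def rgdf_specific(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the three criteria for RGDF-specific, bacteria-derived metabolites.
    Expects columns: spent_rgdf, spent_ctrl, medium (mean intensities), p_value."""
    secreted = df["spent_rgdf"] - df["medium"] > 0      # (1) net secretion into the medium
    significant = df["p_value"] < 0.05                  # (2) RGDF vs. control, 95% level
    fold = df["spent_rgdf"] / df["spent_ctrl"]
    changed = (fold > 2) | (fold < 0.5)                 # (3) |fold change| > 2
    return df[secreted & significant & changed]

# Illustrative toy table
df = pd.DataFrame(
    {"spent_rgdf": [9.0, 1.2], "spent_ctrl": [3.0, 1.1], "medium": [2.0, 1.5],
     "p_value": [0.01, 0.40]},
    index=["oleic acid", "example metabolite"],
)
print(rgdf_specific(df))  # keeps only "oleic acid"
```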
The collective findings indicate that RGDF supplementation promoted the biosynthesis of specific metabolites, namely oleic acid, 2-hydroxybutanoic acid, hexanol, and butyl acetate, in L. plantarum.
RGDF supplementation has distinct effects on L. plantarum metabolism compared with fructooligosaccharide supplementation
Dietary fiber, a plant-derived component that cannot be completely digested by human enzymes, consists of non-starch polysaccharides, including cellulose and oligosaccharides (Veronese et al., 2018). Fructooligosaccharides (FOS) are dietary fibers composed of linear chains of fructose units linked by β-(2,1) bonds (Sabater-Molina et al., 2009). They occur naturally in plants, such as onion, chicory, and banana, and are increasingly used in food products because of their prebiotic effect, which stimulates the growth of probiotic gut microbiota (Sabater-Molina et al., 2009). To compare the effects of different types of dietary fibers on metabolic alterations in L. plantarum, we cultured L. plantarum on control MRS, MRS with 0.5% RGDF, and MRS with 0.5% FOS. Similar to the growth results for RGDF shown in Figure 1, supplementation with either RGDF or FOS had no effect on bacterial growth (Supplementary Table 3). In contrast to the lack of observable differences in bacterial growth, the metabolome profile of L. plantarum supplemented with RGDF was intermediate between those of MRS and FOS in both the intracellular and extracellular states (Figures 5A, B). Similar to the effect of RGDF shown in Figures 3C, D, FOS also decreased the abundance of specific metabolites in sugar and central carbon metabolism, while the abundance of leucine specifically increased with FOS supplementation compared to the control (Figure 5C). MSEA indicated that the citrate cycle and its associated pathways, such as alanine, aspartate, and glutamate metabolism, as well as sugar metabolism, were altered by FOS treatment (Figure 5D). Comparison of the intracellular metabolite abundances of RGDF with FOS revealed that RGDF supplementation resulted in a decreased abundance of palmitic acid and stearic acid, while uracil, raffinose, ascorbic acid, and 2-hydroxybutanoic acid were comparatively increased with RGDF (Figures 5E, F). Next, we compared the extracellular metabolites differentially produced by FOS treatment relative to the control, applying the same criteria used for RGDF treatment (Figure 6). As expected, the culture supernatant of cells grown with FOS contained a significantly higher abundance of sugars and sugar derivatives than that of cells grown with RGDF, including raffinose, D-glucosamine, and pinitol. Production of the RGDF-specific metabolites, including oleic acid, 2-hydroxybutanoic acid, hexanol, and sec-butyl acetate, was not significantly induced by FOS, suggesting that the metabolism of these molecules is RGDF-specific. Thus, RGDF supplementation had distinct effects on L. plantarum metabolism compared with FOS supplementation.
Discussion
Dietary fibers and the associated phytochemicals in ginseng-derived products provide various functional and health benefits. In this study, we evaluated the effects of RGDF as a prebiotic constituent on the physiological and metabolic alterations of probiotics. With RGDF supplementation of the growth media, L. plantarum showed the highest production of SCFAs, specifically lactate and acetate, and the greatest increase in carbohydrate-fermenting capability compared with other probiotic Lactobacillaceae species, especially L. reuteri. In addition, RGDF improved gut epithelial adhesion of L. plantarum and protected against enteropathogens. Analysis of the intracellular metabolome of L. plantarum indicated decreases in metabolites of sugars and unsaturated fatty acids, with significant decreases in the abundance of oleic acid, nicotinic acid, uracil, and glyceric acid. RGDF supplementation also promoted the secretion of specific metabolites, such as oleic acid, 2-hydroxybutanoic acid, hexanol, and butyl acetate, in L. plantarum. Comparison of the metabolic alterations induced by red ginseng-derived dietary fiber with those of a representative dietary fiber, FOS, showed distinguishable effects between the two types of fibers in L. plantarum. Although dietary fibers generally promote probiotic growth, their effects are strain-specific. Our results consistently revealed that RGDF supplementation improved the probiotic properties of L. plantarum, but not of L. reuteri.
L. plantarum, unlike most probiotic Lactobacillaceae species, exhibits ecological and metabolic flexibility and thus maintains a diverse functional genome that facilitates colonization of a variety of environments (Fidanza et al., 2021). For example, L. plantarum strains exhibit acid tolerance by altering the fatty acid composition of the bacterial membrane upon exposure to low-pH conditions (Huang et al., 2016). Genome analysis of 165 L. plantarum strains revealed a large number of carbohydrate-metabolizing genes as well as two-component and signal transduction systems regulating physiological processes, facilitating the adaptability of the species to various environments compared to other lactic acid bacteria and even among probiotic Lactobacillaceae strains (Cui et al., 2021). In addition, L. plantarum produces bacteriocins termed plantaricins, which can effectively inhibit enteropathogenic bacteria, such as E. coli, under specific circumstances (Pal and Srivastava, 2014). These diverse functional genetic characteristics support our results that L. plantarum greatly modulates and improves its metabolic functions, including acid production, carbohydrate utilization, and inhibition of pathogen growth, in the presence of RGDF. To explain how RGDF promotes bacterial metabolic alterations in L. plantarum, but not in L. reuteri, and how the effects of RGDF differ from those of other dietary fibers, beyond the genetic flexibility of L. plantarum, we interpreted the intracellular metabolic changes of L. plantarum and L. reuteri when supplied with RGDF and FOS. RGDF supplementation resulted in a significant decrease in the abundance of oleic acid, nicotinic acid, uracil, and glyceric acid. Oleic acid [cis-9-octadecenoic acid; 18:1(9c)] is the most common monounsaturated fatty acid in animals and vegetables. It is incorporated into the membranes of lactic acid bacteria grown in a medium, but is not synthesized by them (Johnsson et al., 1995). In L. plantarum, our metabolomic analysis indicated that the intracellular abundance of oleic acid decreased while its extracellular abundance increased with RGDF supplementation. These findings suggest that oleic acid might be incorporated from the medium to a lesser extent, possibly owing to membrane rigidity modified by RGDF. Nicotinic acid, also known as niacin, is a form of vitamin B3 and an essential human nutrient that can be supplied by plants and bacteria. Several cellular processes require the compound as a component of the coenzymes nicotinamide adenine dinucleotide (NAD) and NAD phosphate (NADP). In probiotic Lactobacillaceae spp., free nicotinic acid decreases with increasing cellular activity, as it is largely incorporated into these cofactors (McIlwain et al., 1949). Nicotinic acid is also an important cofactor for lactate dehydrogenase, acting as a limiting factor for lactate production during fermentation, which might be associated with the reduced intracellular abundance and improved lactate production under RGDF (Colombié and Sablayrolles, 2004). Glyceric acid is a precursor of several phosphate derivatives that are important biochemical intermediates in glycolysis. 3-Phosphoglyceric acid is one such derivative and is especially important for serine and cysteine biosynthesis. A recent study demonstrated that L. plantarum supplemented with 2% RGDF upregulates the expression of genes involved in serine (sdhA, sdhB, and sdaC) and cysteine (cysE) metabolism.
Although further verification of the changes in specific metabolic and physiological mechanisms is required, our results support the view that RGDF supplementation alters cellular and metabolic processes. Lactobacilli are recognized for their ability to secrete many beneficial metabolites, such as SCFAs, indole derivatives, and vitamins (Thompson et al., 2020; Sugimura et al., 2022). Our exometabolomic analysis revealed 2-hydroxybutanoic acid, hexanol, and butyl acetate as metabolites that were secreted specifically in response to RGDF supplementation. These compounds are generally excreted as end products during propanoate biosynthesis and butanol metabolism. In mammalian tissues, 2-hydroxybutanoic acid, also known as α-hydroxybutyrate, is released as a byproduct when cystathionine is cleaved to cysteine for detoxification against oxidative stress. Although it has been used as a biomarker of type 2 diabetes and lactic acidosis, novel roles of 2-hydroxybutanoic acid have been suggested in protection against acetaminophen-induced liver injury and in immune modulation against viral infection (Zheng et al., 2020; Shi et al., 2021). For example, the level of serum 2-hydroxybutanoic acid was reportedly enriched in patients with viral infections, including human papilloma virus or SARS-CoV-2, compared to healthy controls (Shi et al., 2021). This could be a result of the activation of antioxidant responses and the control of cellular redox balance. Hexanol is an organic alcohol used in the perfume industry; its odor is that of freshly mown grass with a hint of strawberries. Its health-related functions are unclear, but it reportedly modulates the function of the actomyosin motor (Komatsu et al., 2004). Similar to hexanol, butyl acetate possesses characteristic flavors and a sweet odor of bananas or apples (Holland et al., 2005). It also has antimicrobial activity against undesirable microorganisms in cosmetic products, such as Staphylococcus aureus and E. coli (Lens et al., 2016). The specific secretion of these metabolites upon stimulation by RGDF supplementation, together with the comparative results for FOS, provides some evidence that this metabolite production is highly specific to RGDF rather than to carbohydrate polymer-based dietary fiber in general. Further genetic investigations are required to elucidate the underlying mechanism.
[Figure 5. Metabolomic analysis of L. plantarum cultured with 0.5% (w/v) RGDF or 0.5% (w/v) FOS. Principal component analysis (PCA) score and loading plots of the intracellular (A) and extracellular (B) metabolomes. MetaMapp of L. plantarum cultured with FOS compared to the control MRS broth (C) and cultured with RGDF compared to FOS (E); each node is a structurally identified metabolite (blue, decreased; red, increased; yellow, unchanged), with node and label sizes reflecting fold changes and t-test p-values, respectively. MSEA of L. plantarum cultured with FOS compared to the control MRS broth (D) and cultured with RGDF compared to FOS (F).]
[Figure 6. Normalized abundance of extracellular metabolites of L. plantarum cultured with 0.5% (w/v) RGDF or 0.5% (w/v) FOS. Data are expressed as violin plots of six determinations. Differences are indicated at significance levels of 95% (*) and 99% (**), as determined by one-way ANOVA with Tukey's post-hoc analysis.]
Conclusion
Red ginseng dietary fiber supplementation promoted the probiotic properties of L. plantarum, including production of SCFAs (lactate and acetate), carbohydrate utilization, epithelial attachment, and pathogen inhibition. Comparative metabolomic analyses suggested RGDF-related modification of cellular and metabolic processes, including membrane biology and central carbon metabolism. In addition, bioactive compounds produced by RGDF-supplemented L. plantarum were proposed as potential novel postbiotic metabolites.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Author contributions
S-HY, YJ, and MS designed the study, and drafted and revised the manuscript. HJ, V-LT, J-HB, Y-JB, and RR performed the experiments, analyzed and interpreted the data, and collected the samples. EN, S-KK, and W-SJ revised the manuscript and obtained the funding. All authors read and approved the final manuscript.
2023-03-08T16:15:09.549Z
2023-03-06T00:00:00.000
{ "year": 2023, "sha1": "3749c027b5df0e78f1589b9096b5c403d2c3a331", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1139386/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "75f30a4d996ac143022a9b754fde32687d9cd6b7", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203625050
pes2o/s2orc
v3-fos-license
Autism spectrum disorder polygenic scores are associated with every day executive function in children admitted for clinical assessment
Autism spectrum disorder (ASD) and other neurodevelopmental disorders (NDs) are behaviorally defined disorders with overlapping clinical features that are often associated with higher-order cognitive dysfunction, particularly executive dysfunction. Our aim was to determine whether the polygenic score (PGS) for ASD is associated with parent-reported executive dysfunction in everyday life using the Behavior Rating Inventory of Executive Function (BRIEF). Furthermore, we investigated whether PGS for general intelligence (INT) and attention deficit/hyperactivity disorder (ADHD) also correlate with BRIEF. We included 176 children, adolescents and young adults aged 5-22 years with full-scale intelligence quotient (IQ) above 70. All were admitted for clinical assessment of ASD symptoms and 68% obtained an ASD diagnosis. We found a significant difference between the low and high ASD PGS groups in the BRIEF behavior regulation index (BRI) (P = 0.015, Cohen's d = 0.69). A linear regression model accounting for age, sex, full-scale IQ, Social Responsiveness Scale (SRS) total score, ASD, ADHD and INT PGS groups as well as genetic principal components significantly predicted the BRI score; F(11,130) = 8.142, P < 0.001, R² = 0.41 (unadjusted). Only SRS total (P < 0.001), the ASD PGS 0.1 group (P = 0.018), and sex (P = 0.022) made a significant contribution to the model. This suggests that common ASD risk gene variants have a stronger association with the behavioral regulation aspects of executive dysfunction than ADHD risk or INT variants in a clinical sample with ASD symptoms. Autism Res 2020, 13: 207-220. © 2019 International Society for Autism Research, Wiley Periodicals, Inc.
Lay Summary
People with autism spectrum disorder (ASD) often have difficulties with the higher-order cognitive processes that regulate thoughts and actions during goal-directed behavior, also known as executive function (EF). We studied the association between genetics related to ASD and EF and found a relation between a high polygenic score (PGS) for ASD and difficulties with the behavior regulation aspects of EF in children and adolescents under assessment for ASD. Furthermore, a high PGS for general intelligence was related to social problems.
Introduction
Autism spectrum disorder (ASD) is a neurodevelopmental disorder (ND) characterized by deficits in social communication and interaction, and restricted and repetitive behaviors and interests [American Psychiatric Association, 2013]. The heritability of ASD is high, with estimates ranging from 70 to 90%, and the disorder is regarded as a complex genetic disorder with multifactorial causes [Lai, Lombardo, & Baron-Cohen, 2014]. Several rare genetic variants conferring high risk of ASD have been identified, and these rare variants, for example, copy-number variations (CNVs), are estimated to explain about 5-10% of the genetic risk for ASD [Ramaswami & Geschwind, 2018]. However, most of the heritability of ASD seems to be caused by common genetic variants [Gaugler et al., 2014]. ASD, like attention deficit/hyperactivity disorder (ADHD), schizophrenia, and bipolar disorder, has been shown to be a polygenic disorder, in which each single risk variant has a small effect on the disease phenotype [Demontis et al., 2018; Grove et al., 2019; International Schizophrenia Consortium et al., 2009; Schizophrenia Working Group of the Psychiatric Genomics Consortium, 2014].
Based on genome-wide association study (GWAS) summary statistics for a given complex disorder, it is possible to derive measures of the cumulative genetic load for the disorder inherent in an individual's genotype. Such measures are often called polygenic scores (PGS) [International Schizophrenia Consortium et al., 2009]. Recently, Grove et al. identified five risk loci for ASD based on a genome-wide association meta-analysis of 18,381 ASD cases and 27,969 controls [Grove et al., 2019]. Even though these PGS are getting better at separating ASD cases from controls [Grove et al., 2019], their sensitivity and specificity are not yet high enough for clinical use in diagnostics or treatment planning. However, exploring the potential clinical utility of PGS is central in the pursuit of precision medicine for NDs [Editorial, 2010; Torkamani, Wineinger, & Topol, 2018]. PGS often explain only a few percent of the variation in disease status, meaning that they have limited predictive power. For many people with "average risk," quantifying the exact risk might be of limited value. However, identifying those with particularly low or high risk may be of some value. A viable approach is to look at "extreme" PGS, as done by others [Andersson et al., 2018; Khera et al., 2018; Schizophrenia Working Group of the Psychiatric Genomics Consortium, 2014]. Individuals with a high PGS for a given disorder might also be at increased risk for known comorbid traits and disorders. Thus, "extreme" PGS analysis can be a promising tool to investigate the genetic contribution to symptom severity and disease characteristics in NDs. The key characteristics of ASD, deficits in social skills and abnormal behaviors, strongly affect everyday functioning [de Vries & Geurts, 2015], which is often the primary reason families with ASD children seek professional help. Cognitive processes play an important role in the development of social skills and in moderating behavioral responses [Gardiner & Iarocci, 2018], in particular executive function [Rommelse, Geurts, Franke, Buitelaar, & Hartman, 2011]. These abilities depend on response inhibition, interference control, working memory, and flexibility [Friedman & Miyake, 2017], which enable regulation of thought and goal-directed behavior [Miyake et al., 2000]. Executive dysfunction is suggested to be involved in the development of key symptoms and behaviors in ASD [Demetriou et al., 2017; Hill, 2004] and in other psychiatric disorders characterized by social dysfunction and abnormal behavior, such as schizophrenia [Amann et al., 2012]. Importantly, social difficulties are associated with executive dysfunction in everyday life in ASD [Leung, Vogan, Powell, Anagnostou, & Taylor, 2015; Torske, Naerland, Oie, Stenberg, & Andreassen, 2017]. ASD is a spectrum diagnosis with large variation in clinical characteristics and functioning. Children and adolescents with subthreshold symptoms who do not receive an ASD diagnosis might still have profound social difficulties and executive dysfunction [American Psychiatric Association, 2013]. Comorbidity is common in ASD, and about 30% also meet the diagnostic criteria for ADHD [Lord, Elsabbagh, Baird, & Veenstra-Vanderweele, 2018]. Cognitive difficulties are also observed in ADHD [Craig et al., 2016], and executive dysfunctions in particular are present across diagnostic categories such as ASD and ADHD as well as many other psychiatric disorders [Dajani, Llabre, Nebel, Mostofsky, & Uddin, 2016; Snyder, Miyake, & Hankin, 2015].
Executive function is a heritable cognitive domain, and deficits are found in unaffected ASD family members at higher rates than in the general population [Benca et al., 2017]. Furthermore, in typically developing children, intelligence and executive function correlate [Diamond, 2013]. Still, several aspects remain unclear regarding the cross-diagnostic features of NDs, which are behaviorally defined, and their association with cognitive dysfunction [Gillberg, 2010]. Building on the recent progress in GWAS of different traits and disorders, the PGS has emerged as a tool that enables investigation of the polygenic components of different disorders and exploration of the associations between genes, symptoms, and functioning. Furthermore, genetic underpinnings of general cognitive ability (intelligence) can be used to identify differences in cognitive factors between NDs [Savage et al., 2018], including executive function. This could provide a novel understanding of the underlying disease mechanisms, as outlined in the Research Domain Criteria initiative [Cuthbert, 2014]. Approaches to dissect social and cognitive traits are also of clinical importance in the ASD field, since children and adolescents are admitted to specialist health care due to functional deficits, mainly related to social and/or behavioral impairment. For people with ASD, structured situations with clear expectations are easier than unstructured everyday situations [Kenworthy, Yerys, Anthony, & Wallace, 2008]. Thus, standardized and structured neuropsychological tests might not capture cognitive deficits important for everyday life functioning [Kenworthy et al., 2008]. Indeed, questionnaires assessing executive functions have been shown to have higher ecological validity than neuropsychological laboratory tests and might provide important information about how executive dysfunction affects everyday functioning. This finding applies to both clinical and nonclinical samples, as well as specifically to children and adults with ASD [Demetriou et al., 2017; Tillmann et al., 2019; Vriezen & Pigott, 2002]. The Behavior Rating Inventory of Executive Function (BRIEF) is one of the most widely used clinical tools to measure executive functions in everyday life [Gioia, Isquith, Guy, & Kenworthy, 2015]. Furthermore, recent evidence suggests that the Social Responsiveness Scale (SRS) is a valid measure of social impairments in ASD [Constantino & Gruber, 2005]. Because the BRIEF and the SRS are both continuous scales, they can be used to study difficulties beyond diagnostic categories [Geschwind & State, 2015]. Here, we used the BRIEF to measure executive functions and the SRS to measure social skills as they occur in natural social settings. The primary aim of the present study was to determine whether ASD PGS is associated with executive dysfunction in everyday life in a sample of children and adolescents admitted for clinical assessment of ASD. Second, we aimed to disentangle the polygenic components of NDs and cognitive traits in a clinically relevant setting by investigating how the PGS for ASD, ADHD, and general intelligence (INT) are related to executive function. We hypothesized that a high ASD PGS would be associated with worse BRIEF scores beyond ASD diagnosis in a sample referred for ASD assessment. Furthermore, since the participants in our sample have an ASD diagnosis and/or ASD symptomatology, we expected the ASD PGS to be more strongly associated with BRIEF scores than the PGS for ADHD and INT.
We also investigated the association between the PGS and social skills in an everyday setting, and we hypothesized that the PGS for ASD would be positively associated with social difficulties in everyday life.
Participants
The total sample consisted of 176 children, adolescents, and young adults who were referred to a specialized hospital unit for NDs for evaluation of ASD. The current sample was part of the national BUPGEN network and was recruited between 2013 and 2018. For a detailed description of the recruitment procedure, see the supplementary material of Grove et al. [2019]. Inclusion criteria were ASD-related difficulties and suspected ASD diagnosis, independent of the final ASD diagnosis. The participants were assessed by a team of experienced clinicians (clinical psychologists and child psychiatrists) based on anamnestic information and descriptions of difficulties in everyday life activities, using the gold standard tools, the Autism Diagnostic Observation Schedule (ADOS) and/or the Autism Diagnostic Interview-Revised (ADI-R). The exclusion criterion was a full-scale intelligence quotient (IQ) below 70. No participants had any significant sensory losses (vision and/or hearing) that could interfere with testing. The participants were in the age range 5-22 years and spoke Norwegian fluently. Of the total sample of 176 participants, 120 fulfilled the diagnostic criteria for an ASD diagnosis, and the remaining non-ASD diagnostic group (n = 56) was named the subthreshold ASD group. The subthreshold ASD group included participants with ASD symptomatology not reaching the criteria for an ASD diagnosis, who may also have other diagnoses. Of the 120 children and adolescents with ASD (91 boys and 29 girls), 18 were diagnosed with childhood autism (F84.0), five with atypical autism (F84.1), 56 with Asperger syndrome (F84.5), and 41 with pervasive developmental disorder-not otherwise specified (PDD-NOS) (F84.9). Of the 120 participants with ASD, 42 had comorbid ADHD. In the subthreshold ASD group, 26 had an ADHD diagnosis (see Table 1).
[Table 1 abbreviations: ADHD, attention deficit/hyperactivity disorder; ASD, autism spectrum disorder; IQ, intelligence quotient; SD, standard deviation; PGS, polygenic scores. ASD polygenic groups Low and High are based on the ASD PGS at a P-value threshold of 0.1.]
Of the participants without ASD or ADHD (n = 30), one had chronic motor or vocal tic disorder, four had Tourette syndrome, two had mixed specific developmental disorders, five had specific developmental disorders of speech and language, two had specific developmental disorders of scholastic skills, two had other specified behavioral and emotional disorders, two had disorder of psychological development, one had another childhood disorder of social functioning, and one had obsessive compulsive disorder. In addition, we included 29 typically developed controls (10 girls) with genetic and behavioral data available. The controls were recruited from local schools through invitations/bulletins to all students/parents, and had no history of learning disabilities or psychiatric problems. For a more detailed description of the controls, see Hoyland et al. [2019]. We divided the clinical participants (across the ASD and the subthreshold ASD groups) into three groups (Low, Moderate, High) based on each of their PGS for ASD, ADHD, and INT. The Low, Moderate, and High PGS groups were identified based on the normal distribution curve of the standardized PGS scores: the Moderate group consisted of all participants with PGS within one standard deviation of the mean, and the Low and High groups respectively comprised participants with PGS in the lower and upper tails of the distribution. This subdivision roughly corresponds to the participants with the 15% lowest scores constituting the Low group, those with the 15% highest scores the High group, and everyone in between the Moderate group.
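A minimal sketch of this grouping rule in Python (the PGS vector is simulated; the cut points follow the ±1 SD rule described above):

```python
import numpy as np

def pgs_groups(pgs: np.ndarray) -> np.ndarray:
    """Standardize PGS and assign Low / Moderate / High groups at ±1 SD from the mean."""
    z = (pgs - pgs.mean()) / pgs.std()
    return np.where(z < -1, "Low", np.where(z > 1, "High", "Moderate"))

pgs = np.random.default_rng(1).normal(size=176)  # simulated scores, not study data
labels, counts = np.unique(pgs_groups(pgs), return_counts=True)
print(dict(zip(labels, counts)))  # roughly 16% Low, 68% Moderate, 16% High
```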
In the linear regressions, we used the Low, Moderate, and High PGS groups. The discovery samples on which our PGS are based consisted of 18,381 ASD cases and 27,969 controls for the ASD PGS [Grove et al., 2019], 20,183 ADHD cases and 35,191 controls for the ADHD PGS [Demontis et al., 2018], and 269,867 individuals for the INT PGS [Savage et al., 2018].
Clinical Measures
The participants underwent a thorough clinical evaluation for ASD using the gold standard tools, the ADOS and/or the ADI-R, administered by an experienced clinician (psychiatrist and/or psychologist) [Lord et al., 2000; Lord, Rutter, & Le Couteur, 1994]. In addition, a comprehensive diagnostic assessment was done based on questionnaires, clinical interviews, naturalistic observations, and formal testing of cognitive function. ADOS and/or ADI-R were performed in connection with the clinical assessment for ASD. IQ was measured using the fourth version of Wechsler's Intelligence Scale for Children [Wechsler, 2003] or another of Wechsler's tests for the appropriate age group [Wechsler, 2002, 2008]. The diagnostic assessments of ADHD were done by a clinician (psychologist and/or psychiatrist) specialized in child and adolescent psychology/psychiatry or by a pediatrician, based on all previously mentioned measures and according to formal diagnostic criteria.
Social responsiveness scale. The SRS is a 65-item questionnaire designed to identify the presence and severity of social impairment within the autism spectrum in natural social settings. A parent who is familiar with the individual's current behavior and developmental history responds to each question on a Likert scale: (a) not true, (b) sometimes true, (c) often true, or (d) almost always true. The SRS consists of five treatment subscales, Social Awareness, Social Cognition, Social Communication, Social Motivation, and Autistic Mannerisms, which together form the total score [Constantino & Gruber, 2005]. We used the parent rater scale for children/adolescents aged 4-18 years in our study [Constantino & Gruber, 2005]. For the few participants who were over the age of 18, we knew that they lived at home with their parents, so their parents had a good basis for assessing their everyday function. We used continuous t-scores in our analyses; a higher score means more difficulties related to social function. A total score in the range 60-75 indicates clinically significant deficits in social reciprocal interaction and mild to moderate interference in everyday interactions. A t-score ≥76 represents severe interference in daily social interactions and is strongly associated with a clinical diagnosis of ASD [Constantino & Gruber, 2005]. Internal consistency is found to be high in both population-based and clinical samples (Cronbach's α = 0.93-0.97) [Constantino & Gruber, 2005].
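These published cutoffs translate directly into a simple banding rule; a small sketch (the band labels paraphrase the manual's interpretation and are illustrative, including the sub-cutoff label):

```python
def srs_band(t_score: float) -> str:
    """Interpret an SRS total t-score using the published cutoffs (60-75, >=76)."""
    if t_score >= 76:
        return "severe interference in daily social interactions"
    if t_score >= 60:
        return "clinically significant, mild to moderate interference"
    return "below the clinical range"

print(srs_band(71))  # clinically significant, mild to moderate interference
```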
Behavior rating inventory of executive function. The BRIEF is an 86-item questionnaire designed to assess executive function in everyday life, rated by parents, teachers, and/or self-report [Gioia, Isquith, Guy, & Kenworthy, 2000]. In this study, we used the t-scores from the parents' report for the age group 5-18 years. For the few participants who were over the age of 18, we knew that they lived at home with their parents, so their parents had a good basis for assessing their everyday function. The parents indicate whether the described behavior has been a problem for the child/adolescent over the past 6 months by circling: (a) never, (b) sometimes, or (c) often. The raw scores were entered into the Software Portfolio (BRIEF SP), which has separate norms based on respondent and on the age and gender of the child/adolescent [Gioia et al., 2000]. The BRIEF consists of three indexes and eight nonoverlapping subscales. The Behavior Regulation Index (BRI) incorporates three subscales: inhibit, shift, and emotional control. The Metacognition Index (MI) consists of five subscales: initiate, working memory, plan/organize, organization of materials, and monitor. Together these eight subscales (three from the BRI and five from the MI) form the Global Executive Composite Index (GEC). The items that make up the BRI are intended to measure the ability to modulate both behavioral and emotional control and to move flexibly from one activity to another. The MI, on the other hand, is related to active problem solving and the ability to initiate, organize, and monitor one's own actions [Gioia et al., 2000]. A higher score is associated with more problems related to executive function, and t-scores ≥65 are considered to represent clinically significant levels. In our analyses, we used the continuous t-scores. The internal consistency is reported to be high (Cronbach's α = 0.80-0.98) [Gioia et al., 2000].
Polygenic Scores
Genotyping. DNA was extracted with standard methods either from blood samples or from saliva collected in the clinic. The genotypes for the study were obtained with the Human Omni Express-24 v.1.1 array (Illumina Inc., San Diego, CA) at deCODE Genetics (Reykjavik, Iceland). Quality control was performed using PLINK 1.9 [Purcell et al., 2007]. Briefly, variants were excluded if they had low coverage (<95%), had low minor allele frequency (MAF < 0.01), deviated from Hardy-Weinberg equilibrium (P < 10⁻⁴), or occurred at significantly different frequencies in different genotyping batches (FDR < 0.5). Whole individual genotypes were excluded if they had low coverage (<95%) or a high likelihood of contamination (heterozygosity above mean + 5 SD). MaCH software [Li, Willer, Ding, Scheet, & Abecasis, 2010] was used to obtain variant pseudodosages (sums of imputation probabilities for the two haplotypes) for all participants based on reference haplotypes derived from the samples of European ancestry in the 1000 Genomes Project. We had no challenges with kinship in our sample (no relatedness above pi-hat 0.1). All our participants (controls and clinical) were collected in the same time period, genotyped on the same array, and imputed and processed together. Any variants imputed with MAF lower than 0.05 or an information score lower than 0.8 were excluded from the subsequent PGS calculations.
Polygenic score. The variants' effects were estimated from an inverse-variance weighted meta-analysis of the GWAS summary statistics for all original sub-studies except any sub-studies our own participants were drawn from. All summary statistics were quality controlled by removing variants that met any of the following conditions: minor allele frequency <0.05; imputation quality INFO <0.8; not present in more than half of the sub-studies. The remaining variants were clumped into independent regions on the basis of the linkage disequilibrium structure of the 1,000 Genomes Phase III European population. PLINK v1.9 was used with the following parameters: --clump-p1 1.0 --clump-p2 1.0 --clump-r2 0.2 --clump-kb 500. The allelic dosage coefficient (the logarithm of the odds ratio) of the variant with the minimum P-value in each independent region was taken as a weight in constructing the PGS. We obtained GWAS summary statistics for ASD from Grove et al. [2019], ADHD from Demontis et al. [2018], and INT from Savage et al. [2018]. The summary statistics were used to generate PGS and to test the association with specific phenotypes in the current sample, a procedure referred to as "genetic risk profiling" [Martin, Daly, Robinson, Hyman, & Neale, 2018]. We used a GWAS P-value threshold for inclusion of 0.1 for all PGS (ASD, ADHD, and INT). This threshold was chosen because it resulted in the highest explained variance in ASD case/control status in the study our PGS was based on [Grove et al., 2019]. The scores included contributions from about 32,000 variants. The participants in our subsample and in Grove et al. [2019], Demontis et al. [2018], and Savage et al. [2018] are all of European ancestry and are therefore genetically relatively homogeneous.
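Conceptually, each PGS is a weighted sum of allele dosages over the clumped variants passing the P < 0.1 threshold. A schematic sketch of that sum (the dosage matrix, weights, and P-values are simulated; this is not the pipeline's actual code):

```python
import numpy as np

rng = np.random.default_rng(2)
n_individuals, n_variants = 176, 32_000
dosages = rng.uniform(0, 2, size=(n_individuals, n_variants))  # imputed pseudodosages in [0, 2]
log_or = rng.normal(0, 0.01, size=n_variants)                  # per-variant log odds ratios (weights)
pvals = rng.uniform(size=n_variants)                           # GWAS P-values per variant

include = pvals < 0.1                         # P-value threshold for inclusion
pgs = dosages[:, include] @ log_or[include]   # one cumulative score per individual
print(pgs.shape)  # (176,)
```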
Copy-number variations. CNVs have been identified in molecular studies as risk factors for ASD [Geschwind, 2011]. Information about the CNVs in our sample was obtained from the chip genotypes, following standard CNV calling methodology [Sonderby et al., 2018].
Statistical Analyses
Data analyses were conducted using the statistical package IBM SPSS Statistics for Windows, version 25.0 (SPSS, Inc., Chicago, IL). Figure 1 was made using jamovi (version 0.9). We used independent samples t-tests to investigate the mean differences in all clinical variables (MI, BRI, SRS, full-scale IQ, and age) between the Low and the High PGS groups. Chi-square tests for crosstabs were used to investigate differences in the distribution of sex and of ASD and ADHD diagnoses between the Low and the High ASD PGS groups. We also used independent samples t-tests to investigate the differences in PGS between controls and the clinical group to validate the ASD PGS in our sample. Separate linear regression models were conducted to explain variance in BRIEF scores (BRI, MI, and GEC) across the ASD, ADHD, and INT PGS groups. Age, sex, SRS scores, full-scale IQ, and PCs were entered as covariates. We also used linear regression with the total SRS score as the dependent variable and the ASD, ADHD, and INT PGS groups, age, sex, full-scale IQ, and BRIEF scores (BRI and MI) as independent variables. To describe the distribution of the data, we added scatterplots of age by BRIEF scores and of ASD PGS by BRIEF scores (see Supplementary Figs. S1A-B and S2A-B) and Q-Q plots of the BRI and MI scores (see Supplementary Figs. S3A-B).
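For readers who want to mirror the main model outside SPSS, an equivalent ordinary least squares specification can be written with statsmodels. This is a sketch assuming a per-participant data frame; the column names are illustrative, not variable names from the study's data set:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per participant with illustrative columns:
# BRI, age, sex, IQ, SRS_total, pgs_asd_group, pgs_adhd_group, pgs_int_group, PC01, PC02, ...
def fit_bri_model(df: pd.DataFrame):
    """OLS of BRIEF BRI on covariates and categorical PGS groups, mirroring the design above."""
    formula = ("BRI ~ age + C(sex) + IQ + SRS_total"
               " + C(pgs_asd_group) + C(pgs_adhd_group) + C(pgs_int_group)"
               " + PC01 + PC02 + PC04 + PC06")
    return smf.ols(formula, data=df).fit()

# model = fit_bri_model(df); print(model.summary())  # R-squared, F, per-term P-values
```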
The genetic principal components (PCs) that entered the linear regression models as covariates of no interest were the ones with the highest correlation with the ASD, ADHD, and INT PGS (highest and unique correlations: PGS ASD = PC02, PGS ADHD = PC01, and PGS INT = PC04) and with the dependent variables (BRIEF/GEC = PC06 and SRS = PC08, respectively). We used standard procedures to control for population stratification, and we were unable to identify clear population subgroups in our sample.
[Figure 1. The ASD PGS scores for typically developed controls, the subthreshold ASD group, and the ASD group (ASD PGS at P < 0.1). The dots represent the mean scores, and the bands inside the boxes are the medians. ASD, autism spectrum disorder; PGS, polygenic score.]
For scatterplots of the PCs, see Supplementary Figures S4A-D. The coordinates of the study participants in the relevant genetic PCs (PC1, 2, 3, 4, and 6) are shown in the context of the European reference population from the 1,000 Genomes Project in Supplementary Figures S5A-J. We also checked whether outliers had any substantial effect on the analysis by removing the seven participants with the highest PC01 values (outliers). The main findings were unchanged: the difference in BRI score between the High and Low ASD PGS groups was still significant (P = 0.036), and in the regression analysis predicting BRI the same three covariates were significant (sex P = 0.026, SRS P < 0.001, and ASD PGS group P = 0.020; R² = 0.390). Furthermore, we also fitted the regression models with continuous PGS instead of PGS groups (see Supplementary Tables S1A-C). Due to the moderate sample size, we did not adjust alpha levels for multiple testing because of the risk of type 2 errors. All tests were considered significant at a two-sided P-value lower than 0.05. We report unadjusted P-values and are cautious in drawing our conclusions.
Ethical Considerations
The study was approved by the Regional Ethical Committee and the Norwegian Data Inspectorate (REK #2012/1967), and was conducted in accordance with the Helsinki Declaration of the World Medical Association Assembly. Written informed consent was given by parents or legal guardians for participants under 18 years. Participants over 18 years gave written consent.
Results
Clinical characteristics. In the total sample, there was a significant difference between those who had an ASD diagnosis (n = 120) and the subthreshold ASD group (n = 56) on: total SRS (P < 0.001), total t-score Global Executive Composite (GEC) from the BRIEF (P = 0.003), t-score BRI from the BRIEF (P < 0.001), and age (P = 0.001). There were no significant differences between the groups on total IQ (P = 0.841) or t-score MI from the BRIEF (P = 0.068). There were no significant differences between those who had an ASD diagnosis and the subthreshold ASD group on any of the PGS: ASD (P = 0.578), ADHD (P = 0.810), and INT (P = 0.842) (all at the P < 0.1 threshold). Those who had a comorbid diagnosis of ASD and ADHD (n = 42) had significantly higher scores (were more impaired) on both the total t-score GEC (P = 0.001) and the t-score MI (P = 0.001) from the BRIEF than those who only had an ASD diagnosis. There was no significant difference in total SRS, total IQ, or BRI scores between those with an ASD diagnosis only and those with comorbid ASD and ADHD. In the whole sample (n = 176), there was a small, positive correlation between ASD PGS and full-scale IQ, r = 0.033, but this was not significant (P = 0.673).
Characteristics of the ASD PGS groups.
In the total sample (n = 176), there were no significant group differences between the Low and High ASD PGS groups on age (P = 0.059), full-scale IQ (P = 0.439), SRS total (P = 0.921), or sex distribution (Pearson chi-square 1.113, P = 0.573). The Low and High ASD PGS groups did not reflect the diagnostic groups, and there was no significant difference in the distribution of ASD or ADHD diagnoses (Pearson chi-square 0.962, P = 0.618 and Pearson chi-square 0.107, P = 0.585) between the Low and High PGS groups (see Table 1). We found no significant differences in parental education level, which is highly related to income and socioeconomic status, between the Low and the High ASD PGS groups (fathers' education level P = 0.784, mothers' education level P = 0.798).
Validation of the PGS in the clinical group compared to typically developed controls. To validate the ASD PGS, we compared the PGS of the ASD group (n = 120), the subthreshold ASD group (n = 56), and the typically developed controls (TDC) (n = 29) using independent samples t-tests. The PGS for ASD differentiated between the TDC and the clinical group (ASD and subthreshold ASD, n = 176): t(203) = 2.61, 95% CI [1.2 × 10⁻⁵, 8.7 × 10⁻⁵], P = 0.010. The PGS for ASD also differentiated between the TDC and both the subthreshold ASD group, t(83) = −2.07, 95% CI [−8.6 × 10⁻⁵, −1.7 × 10⁻⁶], P = 0.042, and the ASD group, t(147) = −2.63, 95% CI [−9.1 × 10⁻⁵, −1.3 × 10⁻⁵], P = 0.009. The PGS for ASD did not significantly differ between the ASD and subthreshold ASD groups (P = 0.578). There was no significant difference between the TDC and the clinical group (ASD and subthreshold ASD) on the PGS for ADHD (P = 0.205) or INT (P = 0.944). The TDC group had significantly lower scores (less impairment/dysfunction) than the clinical group on both the SRS and the BRIEF (P < 0.001), and was significantly older than the clinical group (mean control group = 16.3 years; mean clinical group (ASD and subthreshold ASD) = 11.7 years; P < 0.001). Altogether, the ASD PGS differentiated between the ASD group and the TDC, and also between the subthreshold ASD group and the TDC (Fig. 1).
PGS of ASD, ADHD, and INT and their association with executive function (BRIEF). There was a significant group difference between the Low and the High ASD PGS groups in the BRI scores from the BRIEF: BRI t = 62.9 in the Low PGS group and t = 71.5 in the High PGS group; t(49) = −2.51, 95% CI [−15.45, −1.71], P = 0.015, Cohen's d = 0.69. This means that the group with the highest PGS for ASD had significantly more executive dysfunction on the BRI than the group with the lowest PGS for ASD. We did not find a significant difference between the Low and High ASD PGS groups in the MI scores from the BRIEF. We also found no significant group differences between the Low and High ADHD or INT PGS groups in either the BRI or the MI scores from the BRIEF (see Tables 2 and 3).
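The reported effect size is a standard two-group Cohen's d with a pooled standard deviation. A sketch of the computation with NumPy and SciPy (the group vectors are simulated, not the study's data):

```python
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation of the two groups."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (b.mean() - a.mean()) / pooled_sd

rng = np.random.default_rng(3)
low = rng.normal(62.9, 12, size=26)   # simulated BRI t-scores, Low ASD PGS group
high = rng.normal(71.5, 12, size=25)  # simulated BRI t-scores, High ASD PGS group
print(stats.ttest_ind(low, high), cohens_d(low, high))
```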
Furthermore, we investigated the presence of CNVs in the Low and the High ASD PGS groups. Three of the participants in the Low ASD PGS group had CNVs, while no CNVs were observed in the High ASD PGS group.
Regression analyses with BRIEF as the dependent variable. A linear regression model accounting for age, sex, full-scale IQ, SRS total score, the ASD, ADHD, and INT PGS groups, and genetic PCs explained 41% of the variance in the BRI score; F(11, 130) = 8.142, P < 0.001, R² = 0.41 (unadjusted). Only SRS total (P < 0.001), the ASD PGS 0.1 group (P = 0.018), and sex (P = 0.022) made a significant contribution to the model (see Table 4). This suggests that the ASD PGS is more important than the ADHD PGS and the INT PGS in explaining executive dysfunction with behavior regulation (BRI) in our sample. The same model explained 36% of the variance in the MI score; F(11, 134) = 6.836, P < 0.001, R² = 0.36 (unadjusted). In this case, only SRS total (P < 0.001) made a significant contribution to the model. None of the PGS (ASD, ADHD, or INT) significantly contributed to this model (see Table 5). These results indicate that the PGS did not have a significant role in explaining the metacognitive aspects of executive function in our sample. For the regression model with GEC as the dependent variable, see Table 6. We also fitted regression models with each PGS separately to investigate their relationship to BRI and MI, including sex, age, full-scale IQ, and total SRS as covariates. The contribution of the ASD PGS was mainly the same as when all three PGS were fitted in the same model. Fitting the regression model with BRI as the dependent variable resulted in F(9, 132) = 9.753, P < 0.001, R² = 0.40 (unadjusted); SRS total (P < 0.001), sex (P = 0.016), and ASD PGS (P = 0.021) made a significant contribution to the model. None of the PGS (ASD, ADHD, or INT) had a significant contribution to the regression models, either with BRI or MI as the dependent variable, when we used the continuous PGS (BRI as dependent variable P = 0.234-0.488, MI as dependent variable P = 0.175-0.983) (see Tables S1A-C).
[Notes to Tables 2-6: Low, Moderate, and High PGS subgroups are based on the ASD, ADHD (attention deficit/hyperactivity disorder), and INT (general intelligence) polygenic scores at P < 0.1. *P < 0.05 (two-tailed); **P < 0.01 (two-tailed). Where equal variances were not assumed, Welch's t-test was used. n corresponds to participants without any missing outcomes or covariates (in the total sample of n = 176, 19 were missing SRS total and 11 were missing full-scale IQ). Covariates are included in the tables to show their contribution to the models; for the MI model (Table 5), R² = 0.359 (unadjusted), model P < 0.001. Abbreviations: B, unstandardized regression coefficient; β, standardized regression coefficient; SE, standard error; BRIEF-BRI/MI/GEC, Behavior Rating Inventory of Executive Function Behavior Regulation Index/Metacognition Index/Global Executive Composite; IQ, intelligence quotient; SRS, Social Responsiveness Scale.]
The PGS association with social function (SRS). We also investigated the association between the PGS and social function in everyday life measured with the SRS. In an independent samples t-test, we found a significant group difference in total SRS scores between the Low and the High INT PGS groups: total t-score t = 66.0 in the Low PGS group and t = 76.2 in the High PGS group; t(45) = −2.27, 95% CI [−19.11, −1.14], P = 0.028, Cohen's d = 0.66. This means that the group with the highest PGS for INT had significantly more problems related to social function in everyday life than the group with the low PGS for INT. We did not find any significant differences between the Low and High ASD PGS groups or the Low and High ADHD PGS groups on the SRS total score (P = 0.921 and P = 0.865). Furthermore, we performed a regression analysis and controlled for age, sex, full-scale IQ, MI from the BRIEF, BRI from the BRIEF, the ASD, ADHD, and INT PGS groups, and PCs. The linear regression model was significant and explained 48% of the variance in the total SRS score; F(12, 128) = 10.010, P < 0.001, R² = 0.48 (unadjusted). MI from the BRIEF (P = 0.001), BRI from the BRIEF (P < 0.001), and sex (P = 0.019) made significant contributions to the model. None of the PGS made a significant contribution to the total SRS score (P = 0.101-0.742) in the regression model.
Discussion
Our results showed a significant association between the ASD PGS, representing the polygenic components of ASD, and executive function deficits in everyday life in a clinical group seeking specialist health care due to autistic symptomatology. In our study, the BRI from the BRIEF differed significantly between individuals with the highest and the lowest PGS for ASD. The participants in the High PGS group had on average a t-score of 71.5 on the BRI, which is in the clinical range of the scale. In comparison, the participants in the Low PGS group had an average t-score of 62.9, which is under the clinical cutoff (Cohen's d = 0.69). We did not observe any significant BRI difference between the Low and High PGS groups for ADHD or INT. This finding was in line with our hypothesis that, in a sample consisting of participants with an ASD diagnosis and/or ASD symptomatology, the ASD PGS would be more strongly associated with BRIEF scores than the PGS for ADHD and INT. No significant differences in the MI from the BRIEF were detected between the Low and High PGS groups for ASD, ADHD, or INT. Since ASD characteristics are quantitative traits, we included participants under the diagnostic threshold for ASD. Furthermore, because of the large degree of comorbidity in ASD, we included PGS of other diagnostic groups. This enabled us to investigate the polygenic component of core ASD phenomena beyond diagnostic categories.
The use of extreme PGSs may already have clinical relevance, for example for cancer and cardiovascular diseases [Seibert et al., 2018; Torkamani et al., 2018]. In our study, we found a clinically meaningful and significant difference in the BRI scores from the BRIEF, where participants in the High ASD PGS group had clinical/pathological t-scores and the Low ASD PGS group had nonclinical scores. Thus the use of extreme scores seems to have potential for identifying clinically relevant levels of cognitive problems, although further studies are needed before clinical relevance can be established. It is possible that polygenic factors in ASD could be related to behavior differences, or to other phenotypes not related to executive function. Furthermore, executive function deficits are not specific to one disorder, but are characteristic of several NDs, with few deficits in typically developing controls. This can explain why the PGS for ASD in an ASD diagnostic group differs significantly from that in controls, but not significantly from that in a non-ASD diagnostic group with ASD symptomatology. Thus, in the current clinical sample referred for autism symptomatology, it is expected that the PGS for ASD will be higher in the clinical group without ASD than in controls. The ASD PGS had a stronger relationship to executive function than the PGSs for ADHD and INT in our clinical sample under assessment for ASD. This finding might imply that the common genetic variance associated with ASD is of greater importance for executive dysfunction than the genetic variance associated with ADHD or INT. Even though the ADHD PGS and INT PGS were not significant in the regression analysis with the BRIEF as dependent variable, they may still have an association with the BRIEF, and the findings must be interpreted with caution since the regression analyses are based on null-hypothesis testing. However, based on the actual t-scores in the Low and High groups, we found a significant difference between the Low and High ASD PGS groups in BRI, but not between the Low and High ADHD or INT PGS groups. Furthermore, our results indicate that the behavioral regulation aspects of executive function are more strongly related to the polygenic nature of ASD than the metacognitive aspects. The Behavior Regulation Index from the BRIEF contains the subscales inhibition, flexibility, and emotional control. Flexibility is a hallmark difficulty in ASD, and it is within this area that those with ASD have the most pronounced difficulties, also compared with other clinical groups with executive function deficits [Hovik et al., 2014]. Therefore, the executive function deficits most specifically related to ASD may also have the closest link to the polygenic components of ASD.
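The excerpt does not reproduce how the Low/Moderate/High subgroups were formed; one common approach, shown here purely as an assumption rather than the authors' procedure, is to cut the continuous score into quantile bins and contrast the two tails:

```python
# Sketch of tail-based subgrouping of a continuous polygenic score.
# Tertiles are used purely for illustration; the excerpt does not state
# the exact cut points used in the study. Scores here are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({"asd_pgs": rng.normal(size=176)})

df["asd_pgs_group"] = pd.qcut(
    df["asd_pgs"], q=3, labels=["low", "moderate", "high"]
)

# Extreme-group contrast: keep only the two tails
tails = df[df["asd_pgs_group"].isin(["low", "high"])]
print(tails["asd_pgs_group"].value_counts())
```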
Our finding that difficulties with the behavior-regulatory aspects of executive function are positively associated with the ASD PGS contrasts with Schork et al.'s finding that the ASD PGS is associated with better performance on a cognitive flexibility task [Schork et al., 2018]. This might be explained by differences in sample and method: Schork et al. investigated typically developing children with neuropsychological tests, while we investigated children referred for clinical assessment and measured their executive functioning in everyday life. Furthermore, our finding is in line with studies of clinical populations in which executive function deficits are associated with an ASD diagnosis [Demetriou et al., 2017]. The correlations between rating measures like the BRIEF and performance-based measures of executive function are typically reported to be quite poor; in a review incorporating both clinical and nonclinical samples, and both children and adults, the mean correlation between scores on performance-based measures and behavioral ratings with the BRIEF was reported to be 0.15 [Toplak, West, & Stanovich, 2013]. Furthermore, it is known that genetic factors can contribute in complex ways even to performance-based tests intended to measure the same underlying construct, such as memory [Kremen et al., 2014]. In a recent meta-analysis of executive function in ASD, Demetriou et al. found that most measures of executive function did not achieve clinical utility in differentiating between ASD and typical controls [Demetriou et al., 2017]. However, informant-based measures based on the BRIEF achieved absolute clinical marker criteria. They conclude that the BRIEF draws on more representative everyday situations and therefore has higher ecological validity than many performance-based measures, and may thus be more appropriate in clinical practice. This is also in line with recurrent findings of a pronounced discrepancy in ASD between structured performance-based measures of general intelligence and adaptive functioning (measured with, e.g., the Vineland-II) [Tillmann et al., 2019]. We therefore think that our finding that a higher ASD risk (higher ASD PGS) is associated with more executive function difficulties is novel and interesting, and consistent with the clinical picture of more executive function difficulties in the ASD population. Our findings illustrate how the polygenic component of NDs and its association with executive dysfunction can help disentangle psychological constructs, and may be used to explore possible underlying biological explanatory models of executive dysfunction. In a GWAS, Sun et al. found that children with ADHD had genetic variants related to behavioral regulation impairments measured with the BRIEF [Sun et al., 2018]. In children with NDs such as ASD and ADHD, there are indications that the BRI from the BRIEF is more strongly associated with genetic factors than the metacognitive aspects of executive function [Sun et al., 2018]. This is in line with our finding of an association between the ASD PGS and the BRI, but not the metacognitive scale, of the BRIEF. Further, we did not find any significant association between the PGS for ADHD and BRI. This seems to suggest a stronger genetic component of executive function in ASD in our sample, as the effect size was bigger for ASD (d = 0.69) than for ADHD (d = 0.20).
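For reference, the effect size quoted throughout is Cohen's d; assuming the conventional pooled-standard-deviation form (the excerpt does not state which variant was used):

```latex
d = \frac{\bar{x}_{\mathrm{high}} - \bar{x}_{\mathrm{low}}}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

By common rule-of-thumb benchmarks, d = 0.20 is a small and d = 0.69 a medium-to-large standardized mean difference.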
The lack of a significant association between the ADHD PGS and BRIEF scores may also be due to the smaller sample of ADHD participants; however, the effect sizes for the extreme-score groups (ADHD PGS Low vs. High, BRI: d = 0.20; MI: d = 0.27) support the notion that the ADHD PGS has less influence on the BRIEF than the ASD PGS. Another explanation could be that ASD and ADHD involve different genetic mechanisms. ASD and ADHD are both NDs and often manifest as comorbid conditions, and both are characterized by executive function deficits. In our study, the PGS-for-ASD groups did not reproduce the diagnostic categories, as both the Low and High PGS groups consisted of children/adolescents with ASD and/or ADHD. Dajani et al. [2016] studied a sample with ASD, ADHD, or comorbid ASD and ADHD, and found that executive function performance did not match the diagnostic categories. Sun et al. [2018] argue that it is more important to evaluate executive function than diagnosis for targeting executive function interventions. Taken together, this suggests that the Low and High PGS groups are not only an indirect measurement of ASD, but rather reflect a polygenic disposition for ASD that is associated with the behavior regulation part of executive function. Furthermore, it is important to be aware of potential factors that can influence the PGSs. In Grove et al.'s ASD sample it is likely that some of the participants had comorbid ADHD [Grove et al., 2019], while the ADHD risk score calculated by Demontis et al. may be more specific to ADHD [Demontis et al., 2018]. However, it is likely that there is some overlap between all the different phenotypes, especially at P-value thresholds as high as 0.1.

The INT PGS was significantly related to the degree of social problems in this sample, and we found a moderate effect size in the comparison of Low vs. High INT PGS on the SRS (P = 0.028, d = 0.66). The PGSs for ASD and ADHD did not significantly influence the SRS score. This is in line with other studies linking common variants associated with high IQ to social problems [Clarke et al., 2016; Grove et al., 2019]. However, the contribution of the INT PGS was not significant in our regression model. The SRS measures autistic social impairments as a quantitative score, and can also be viewed as an impairment score. However, we did not find a significant difference in total SRS score between the Low and High ASD PGS groups. Our sample may be too small and heterogeneous, grouping together different ASD diagnoses, to detect a relationship between the ASD PGS and the SRS. Grove et al. found the Asperger diagnosis to be more strongly correlated with a high ASD PGS than classic autism (childhood autism) [Grove et al., 2019]. We did not find a significant difference in full-scale IQ between the Low and High ASD PGS groups in our sample (P = 0.439). In the whole sample, there was a small, positive correlation between the ASD PGS and full-scale IQ, r = 0.033, but this was not significant (P = 0.673). Therefore, our finding is not consistent with reports showing that high polygenic risk for ASD is associated with higher IQ in the general population [Clarke et al., 2016]. The reason for this is probably the size and heterogeneity of our sample, which groups together different ASD diagnoses. The Asperger diagnosis has previously been shown to be the ASD diagnosis with the highest positive correlation with high IQ [Grove et al., 2019]. We found a significant contribution of sex to the behavior regulation aspect of executive function (BRI) in the regression model.
This indicates that the relationship between genetic dispositions for ASD and executive function may differ between boys and girls. The low number of girls in our study makes these findings uncertain, albeit in line with the "female protective model," which proposes that more severe genetic mutations are required for a girl to develop ASD than for a boy [Levy et al., 2011]. Thus, common gene variants may also confer different vulnerabilities depending on sex.

Strengths and limitations of the study. One of the strengths of the current study is that it includes a clinically relevant sample of participants (n = 176) who were referred for an ASD evaluation and were thoroughly clinically assessed for ASD. Furthermore, the PGSs are based on large discovery samples with considerable statistical power. Even though we had relatively few participants, we found a moderate effect size for the difference in executive function (BRI) between the Low and High ASD PGS groups. The PGS for ASD is relatively new and based on a smaller discovery cohort than the PGSs for ADHD and INT; yet only the PGS for ASD was significantly associated with executive function deficits (BRI). However, because of the relatively small sample size, the uncorrected P-values, and the absence of an effect when using the PGSs as continuous variables, the results should be interpreted with caution, and the findings need replication in larger samples. We also had information about the presence of possible CNVs in all participants. We did not find more CNVs in the High than in the Low ASD PGS group, so it is unlikely that CNVs explain the greater executive dysfunction in the High ASD PGS group. Another strength of the study is that we had no challenges with kinship or ethnicity as confounding factors. Even though the participants were thoroughly assessed for ASD, we did not apply specific ADHD measures to compare the degree of ADHD symptoms; we only know whether participants had a clinical diagnosis of ADHD. Although we controlled for age in the analyses, nonlinear age effects could bias the results.

Clinical implications. It has been stated that "the end game of PGS […] is personalized medicine" [Zheutlin & Ross, 2018]. Children and adolescents with a high PGS for ASD may be particularly vulnerable and have difficulties with executive function. Despite the currently small effect sizes and predictive power, the findings suggest that PGSs may become a clinically useful tool for children with NDs. If children at risk can be identified, it might be of clinical relevance to initiate preventive interventions aimed at the executive difficulties, or to stratify more general ASD treatment by PGS.

Conclusions

We report a significant relationship between the PGS for ASD and executive function deficits, in terms of behavior regulation, in a clinical sample under evaluation for ASD symptomatology. Furthermore, we find that the PGS for ASD, but not for ADHD or INT, contributes significantly to executive function deficits when controlling for confounders. To our knowledge, this is the first study to find an association between the PGS for ASD and executive function in everyday life, and it shows how information from PGSs can be used in a clinical neurodevelopmental sample.
Consulting communities on feedback of genetic findings in international health research: sharing sickle cell disease and carrier information in coastal Kenya

Background: International health research in malaria-endemic settings may include screening for sickle cell disease, given the relationship between this important genetic condition and resistance to malaria, generating questions about whether and how findings should be disclosed. The literature on disclosing genetic findings in the context of research highlights the role of community consultation in understanding and balancing ethically important issues from participants' perspectives, including social forms of benefit and harm, and the influence of access to care. To inform research practice locally, and contribute to policy more widely, this study aimed to explore the views of local residents in Kilifi County in coastal Kenya on how researchers should manage study-generated information on sickle cell disease and carrier status.

Methods: Between June 2010 and July 2011, we consulted 62 purposively selected Kilifi residents on how researchers should manage study-generated sickle cell disease findings. Methods drew on a series of deliberative informed small group discussions. Data were analysed thematically, using charts, to describe participants' perceptions of the importance of disclosing findings, including reasoning, difference and underlying values. Themes were derived from the underlying research questions and from issues emerging from discussions. Data interpretation drew on relevant areas of social science and bioethics literature.

Results: Perceived health and social benefits generated strong support for disclosing findings on sickle cell disease, but the balance of social benefits and harms was less clear for sickle cell trait. Many forms of health and social benefits and harms of information-sharing were identified, with important underlying values related to family interests and the importance of openness. The influence of micro and macro level contextual features and prioritization of values led to marked diversity of opinion.

Conclusions: The approach demonstrates a high ethical importance in many malaria endemic low-to-middle income country settings of disclosing sickle cell disease findings generated during research, alongside provision of effective care and locally-informed counselling. Since these services are central to the benefits of disclosure, health researchers whose studies include screening for sickle cell disease should actively promote the development of health policy and services for this condition in situations of unmet need, including through the prior development of collaborative partnerships with government health managers and providers. Community consultation can importantly enrich ethical debate on research practice where in-depth exploration of informed views and the potential for difference are taken into account.
Keywords: Kenya, Africa, Sickle cell disease, Community consultation, Genetic findings, Genetic and genomics research, Deliberative methods, Empirical ethics

Background

Sickle cell (SC) disease is a serious single gene disorder common in many malaria endemic parts of Africa; areas that account for three quarters of an estimated 300,000 to 500,000 children born with SC disease worldwide every year [1]. The high prevalence stems from an evolutionary link between the SC gene and resistance to malaria, a feature that also underpins the common inclusion of SC screening in health research in malaria endemic settings where the gene may act as a risk factor. For example, in the setting for this paper in Kilifi, where around 1% of children under one year of age have SC disease and 18% carry SC trait, assessment of SC status has been included in descriptive and intervention studies on malaria, pneumonia, Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS) and malnutrition in young children, as well as in studies on SC disease. An ethical question may then arise about the importance of sharing findings on sickle cell status generated during studies; an issue paradigmatic of more general debates in the literature on researchers' responsibilities for disclosing study-generated genetic findings to participants, including the additional challenges presented where services for health problems related to findings are not widely available.

SC disease, an autosomal recessive condition, is an inherited abnormality of red blood cells. Affected children inherit two copies of an abnormal haemoglobin gene, one from each parent. For couples where both individuals carry one copy of the abnormal gene, described as having SC trait or being a carrier for SC disease, there is a 1 in 4 chance of future children being affected by the disease. From a biomedical perspective, a high potential for benefit from sharing research-generated SC disease findings stems from the positive health impact of comprehensive forms of health care. Without care, symptoms can be very severe and life threatening, mainly resulting from obstruction of small blood vessels, chronic anaemia, acute breakdown of blood cells and increased risk of serious infection [2,3]. Although environmental and genetic factors influence severity, without care many children in malaria endemic settings are likely to die in their first few years of life [4,5]. In contrast, quality of life is significantly improved where comprehensive care programmes are in place, typically in high-income settings [6,7], leading to a median adult survival of 48 years [8].
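The 1-in-4 recurrence risk quoted above follows from Mendelian segregation, and the Kilifi prevalence figures are mutually consistent under a simple random-mating (Hardy-Weinberg) approximation with a sickle allele frequency of about 0.1. This is a back-of-envelope check only, since malaria selection and differential survival make the approximation rough:

```latex
P(\text{affected child} \mid \text{both parents carriers})
  = \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4};
\qquad
q = 0.1 \;\Rightarrow\;
2pq = 2(0.9)(0.1) = 0.18 \ \text{(trait)},\quad
q^2 = 0.01 \ \text{(disease)}.
```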
SC trait is generally seen as a benign condition [9,10] whose main implication is an increased future reproductive risk for the disease [11]. From the literature, key features of the debate on the general importance of sharing study-generated genetic information with participants are: the benefits to participants in practice, including social benefits such as knowing about future reproductive risks [12]; the need to include a wide range of views in understanding the nature and importance of potential benefits [13]; and the validity of testing processes. Over time, consensus in guidelines on disclosure has moved towards recommending greater openness [13], mainly drawing on the ethical principles of respect for the autonomy of, and maximising benefits for, participants [12]. For SC trait, in addition to generating participants' awareness of future reproductive risks, importance has also been seen in alerting the wider family to this risk and in respecting their rights to ownership of genetic information [14]. Challenges to disclosure of SC disease findings arise where the services needed to generate benefits (such as specialist knowledge, clinical care and counselling) are not available [15] and, for both disease and trait, from the risks of research being seen as a form of clinical service and from issues of research resource prioritisation [12]. Additional risks for SC trait findings are that the health implications will be misinterpreted, stigmatisation and, for children screened, the undermining of their autonomy [16]. While the potential for benefit in sharing study-generated SC findings is likely to be closely linked to the presence of effective health services for the condition, these are not widely available in many malaria-endemic settings, with some localised exceptions [17,18]. The potential benefits of sharing study-generated SC disease findings may then depend on forms of ancillary care provided by researchers [19].

Against this background, and building on on-going community engagement activities within the research programme [20,21], this paper reports on a study aiming to inform research practice through exploring the views of a diverse group of Kilifi residents on how researchers should manage study-generated information on sickle cell disease and carrier status. Community engagement has been seen as essential to supporting the ethical conduct of research, particularly for international collaborative research conducted in low-income settings [22], enabling research to be conducted in a way that is respectful to individuals and communities and that maximizes social value [23]. Towards these aims, community consultation, as a form of community engagement, aims to include 'community voices' in planning research, including research questions and activities. At the same time, there is unresolved debate on how consultation should be undertaken, including how a community should be defined, who might reasonably represent a community, and how their voices might be listened to and taken into account in practice [24-26]. The consultation methods described in this study used an information-sharing, deliberative approach. Although it is not within the scope of this paper to assess the contribution of this method to the literature on community engagement, we aim to illustrate its contribution in supporting policy decision making in research.

Study site

A detailed description of the KEMRI Wellcome Trust research programme and its setting in Kilifi on the coast of Kenya has been given elsewhere [27].
In summary, the county includes rural and semi-urban populations of around 1 million; subsistence farming is the primary livelihood, and between 55% and 65% of households live below a poverty line defined in relation to the costs of meeting basic needs [28]. The study was conducted within the population of 260,000 people resident within the programme's Health and Demographic Surveillance System (KHDSS), which accounts for around 80% of admissions to the County Referral Hospital [29]. The main population group are Mijikenda [30]; 47% describe Christianity, 13% Islam and 24% traditional beliefs as their faith system. During community engagement planning surveys in 2005, 45% of adults reported inability to read a newspaper or letter, although free primary school education was introduced nationally in 2003.

In the research setting for this paper in Kenya, there is a close collaboration between researchers at the KEMRI Wellcome Trust research programme and government health providers at Kilifi County Referral Hospital that includes the provision of a dedicated weekly clinic for people affected by SC disease, part of wider systematic long-term research support provided to the community through the County Health Management Team [31]. Given this availability of care, researchers in Kilifi generally disclose study-generated findings on SC disease to affected families, but not on SC trait [32]. At the same time, challenges have been experienced with low use of the SC clinic and in providing the resources needed to support disclosure, particularly for large scale studies. As reported elsewhere [27], a particular challenge in research-based SC disease screening in healthy children was the likelihood of 'diagnostic misconceptions', where the test is seen as a health check for the benefit of the individual, similar to the more commonly described 'therapeutic misconception' in research [33].

Study population, sampling and data collection

63 residents participated in this study, detailed in Table 1. A priori purposive sampling aimed to explore diversity within the population, drawing on features of role, gender and geographic residence (rural and urban). Participants included:
i) residents working full time within the research programme (n=20), including Community Facilitators, Field Workers (staff supporting studies through undertaking informed consent processes, interviews and some sample-taking), Data Entry Clerks and a Scientist in training;
ii) District Health Managers (n=4);
iii) administrative leaders, Chiefs and Assistant Chiefs (n=18);
iv) KEMRI Community Representatives (KCRs) (n=18), who are 'typical' residents selected by local communities to support consultation on research-related issues through regular or ad hoc meetings with community liaison staff and/or researchers; and
v) mothers of affected children (n=3), not belonging to the above groups.

All groups included people of different ages, religions and educational status. Participants were invited to attend one or two small group discussions (3 to 6 people) to support an informed deliberative process assessing elements of good practice for information sharing on SC disease and trait [34-36] across two stages. Stage one and two discussions were held one week apart at local venues near to participants' homes or, for staff members, at the research centre, using the participants' language of choice (English, Kiswahili or local dialect).
The first stage, based on participatory processes, aimed to share information on:
- prevalence, health implications and forms of management for SC disease;
- SC disease inheritance, particularly transmission across generations by healthy people with SC trait;
- risks of future children being affected where both parents have SC trait.

The second stage built on understandings shared in the first to explore perceptions of the importance of sharing SC disease and SC trait findings, using two scenarios addressing these different forms of the condition. Stage two discussions aimed to:
- describe the perspectives of all participants as far as possible;
- explore reasoning, using non-judgmental probes to support reflection, including any emerging morally relevant issues;
- encourage expression of diversity of views;
- pay attention to the voices of the most vulnerable within the population, taken here as parents and families of children with SC disease.

Data management and analysis

Field notes were made during and immediately after meetings. Discussions were recorded, transcribed and translated into English, comprising a total of 48.5 hours of recordings. All translations were undertaken by note takers present at the meetings - experienced staff in the social science research group with fluency in local languages and English - and checked by FK. Debriefings were held within the study team after each discussion, and the findings used to inform the on-going development of the interview guide. Data were managed using Microsoft Word applications, anonymised through coded identities.

The analysis used a modified form of framework analysis [37]. In this study, following in-depth reading of transcripts, we developed a series of analysis charts to capture individual and group level data under themes related to views on the importance of sharing SC disease and SC trait information, and the underlying reasoning. Charts reflected the flow of debates, including changing views, to identify individual positions emerging from deliberative discussions. Some themes used in charting were derived from the underlying research questions, based on probes used in discussions. These included perceptions of the likely influence of information sharing on health seeking practices, family relationships and the interests and rights of different stakeholders, and the responsibilities of researchers. Themes were also identified from issues emerging during discussions, including common underlying values, such as openness, truth telling and family stability, and the conflation of SC disease with HIV/AIDS. Analysis was primarily conducted by VM, with support from the other authors, including cross-checking and discussions around the coding of data within analysis charts. The varied backgrounds of the authors, including bioethics, social science, public health and community engagement, were drawn upon to inform the analysis. VM, SM and FK have lived and worked in the research programme in Kilifi for more than 15 years.

Ethical review

The study was approved by the Scientific Steering and Ethical Review Committees in Kenya, and the OXTREC committee at Oxford University. Approval included verbal consent for participation, which was given by all participants. The paper is published with the permission of the director, KEMRI.

Results

In this section, the first two sub-sections describe residents' views on whether, why and how study-generated information on SC disease and SC trait should be shared with the parents of affected children.
The remaining sub-sections describe emerging findings on the influence of context and underlying values.

Sharing information on sickle cell disease

The findings of this study confirmed those of earlier research with directly affected families [32]: low levels of biomedical awareness of SC disease in the community, and sometimes extreme levels of distress among families struggling to understand what was happening to their child and how best to help them. Confusion was often compounded by the nature of early symptoms of SC disease, which are typically fleeting and varied, including episodes of severe unexplained pain and crying, painful swelling of the hands, feet or limbs, yellowness of the eyes, fever and abdominal swelling. From this and the earlier study, the main forms of harm of 'not knowing' about SC disease were therefore seen as:
- the exacerbation of ill health and emotional distress for the affected child where relatively ineffective treatment was used, including traditional or faith-based forms of healing;
- emotional and economic costs to the family associated with looking after the child, including loss of income-generating activities for mothers;
- forms of blame and conflict within families, particularly affecting mothers blamed for causing the condition in their child.

An underlying feature was the tendency for fathers to deny responsibility for ill health in their children, instead placing blame on mothers. Maternal blaming could take the form of fault seen through direct mother-child inheritance, curses or bewitchment; or through claims of a wife's sexual unfaithfulness and denial of paternity.

Reasons for sharing SC disease information: limiting harms of 'not knowing'

Against this background, participants in this study saw sharing SC disease information during research as of central importance in limiting future serious and avoidable harms for affected children and their families, by optimising care-seeking and reducing the impact of existing harmful 'misconceptions', rumours and suspicions. In sharing information, participants saw the need to give a full and convincing explanation of the cause of SC disease, in addition to advice on management. Given the risks of gendered blaming in families, and a commonly reported belief that effective biomedical care should be curative, many participants particularly emphasised the importance of sharing information on the lifelong nature of the condition and its inheritance from both sides of the family over many generations:

"When I went to the doctor, the child was tested and…we were told that there was a condition from the father and me that caused the child to get that thing. So when we came back home, other people were saying that it was witchcraft…but, the two of us, we knew because the doctor had explained to us, and so we were not worried. So when the parents get information, it removes the fear." (KCR mother of an affected child, KCR02/P2)

Some participants, in this and the earlier study [32], described a specific risk that SC disease would be confused with HIV/AIDS, prompted by the need for long term treatment and the typical slim build of people affected by both conditions.
In this case, sharing information on SC disease would help to reduce confusion:

"When they understand [about SC disease], they can be able to remove that fear from their hearts because some may misunderstand it, they take it this is a disease like HIV/AIDS" (KCR01/P6)

These positive effects of information sharing were seen for the families of affected children included in research. They were also seen as likely to be helpful for other affected family members, including those in whom the condition was currently unrecognised. In addition, residents felt information was likely to be more widely shared, given its perceived importance, and that positive effects would reach others in the community similarly affected.

Reasons not to share SC disease information: the risks of generating harm

Whilst there was broad agreement that information should be shared, a number of arguments were proposed against this, and for caution. First, explaining the occurrence of an inherited condition in the family might generate high, and sometimes unwarranted, levels of worry and hopelessness:

"So I think it [sharing information] is important though I have said it's sensitive because…once you are told that, then there is that feeling that we are all sick, because it is inherited, and so we are all going to have sick children, you know, there is that traumatising effect the parent might get" (Male community facilitator, IDI09/P1)

Anxiety about the condition was also seen as potentially undermining the parents' emotional relationship with their affected child and with each other. At an extreme, anxiety about the risks to future children was seen as prompting parents to consider separation:

"Okay now, for me, I can say that there will be many separations…when it's explained to them in detail the way this is inherited…they can separate because they will not want to have another child with a problem like that." (Female KCR, KCR01/P2)

Parental separation was seen as a particularly serious outcome for mothers who lacked independent economic resources, as would be likely for many in rural Kilifi. In this traditionally patrilineal culture [38], mothers without independent resources would be likely to have to move back to their maternal homes, where economic and social support might not be forthcoming [32]. The position of a chronically sick child accompanying her mother to the maternal home was seen as particularly fragile, given the costs of health care. All the mothers of children with SC disease in this consultation were greatly concerned about the impact of parental separation on a mother and child. At the same time, a few residents saw a potential benefit of separation where both parents had independent livelihoods:

"…and if the worst comes to the worst and the family breaks up…if this man decides to marry another wife maybe he can get one who is not a carrier, and the lady can also get a man who is not a carrier, and they start very fresh lives." (Female field worker, IDI07/P4)

Although explaining parents' individual genetic roles was seen as important to address paternal denial of responsibility, paradoxically this information also carried a risk of generating or increasing gendered blame in families. This risk was sometimes very strongly articulated, and was seen as occurring where fathers continued to deny their roles, where information was misinterpreted, or where information generated doubts about the paternity of the child. As before, the consequences of maternal blame were seen as potentially serious for the mother and child.
This point remained controversial across all groups, although most perceived the risks of gendered blaming to be greater in the absence of a good explanation of both parents' roles than with one. All the women with an affected child in these discussions supported this view:

"Because even if you don't tell them, if they are going to divorce, they will divorce…[for example thinking] 'this woman is evil, every time she gives birth it's a sick child, maybe they [family] have evil spirits, so no! You go to your home, you can even take the child, I don't need the child!' But…if they will have been told, they will understand." (KCR with an affected child, KCR01/P5)

Sharing information on sickle cell trait

As for SC disease, many residents felt that study-generated information on SC trait should be shared with families, but with greater differences and shifting of opinions as new points of view were put forward and considered. Reasons not to share information on SC trait were more commonly and strongly voiced than any raised in support of withholding any type of information on SC disease. A major influence was recognition of the current lack of public access to SC disease screening for healthy individuals, as described in the following sections.

Reasons to share information on SC trait

There was almost complete agreement that individual knowledge of SC trait would be very important in allowing families' choice in reducing a child's future reproductive risks if screening for this condition was widely available. In the absence of this service, many felt sharing SC trait information would be less valuable. However, it was often seen - sometimes strongly - that information would help families to be prepared for this eventuality, including understanding how to seek effective care:

"At least, s/he will be prepared [Kiswahili: amejiset]. At the time s/he comes to marry and if s/he has a child like this, s/he will remember 'eeh I was told, in the past I was told.'" (Male KCR, KCR01/P4)

This benefit to families was often linked to a perceived public health importance of creating wider awareness of this genetic risk:

"He [the child] is fine, yes, he is fine, but…we have to enlighten the parents that this problem may come up in future…if this information is not known and the condition ends up affecting those born in future, how will this problem be solved? So I think they [parents] need to be given the full information." (KCR, KCR02/P4)

In addition, some saw that sharing information on SC trait could empower families to create a demand for more and better SC disease services, at both a policy and an individual level.

Reasons not to share information on SC trait

As for SC disease, disadvantages related to anxieties thought likely to occur, including through misinterpretations. The key difference was that, for SC trait, these worries were seen as being generated without 'good reason', given the inability to manage future reproductive risks by screening a partner. At times, this view was strongly held, including as a right 'not to know':

"I mean you will explain as well as you can, but later you will leave that particular person worried. When he [the child] comes to marry later, he will say that it would have been better if I didn't agree for him to be tested…" (KCR, KCR02/P6)

"I just don't think it's worth knowing…if there is no structure in place to say test the other person … I would rather just stay the way I am [not be told]."
(Community facilitator, IDI09/P4)

Given the anxieties involved, some felt that SC trait screening should target young adults, including before marriage, but not infants or children. Others disagreed, seeing that young people would be particularly vulnerable to emotional upset from genetic screening:

"When you tell him directly that you have this then he will be thinking a lot, and even consider committing suicide because even when I get married things won't be good." (Male KCR, KCR03/P4)

Concerns about unnecessary worry were compounded by views that SC trait information could easily be misinterpreted as having a direct impact on the child's health, either in the short or the long term:

"…so they will remember all through that my child is sick, not my child has a condition which needs a person to make a good decision during marriage, but they will just remember my child is sick…that thing cannot move out of their mind" (Male community facilitator, IDI09/P1)

One feature of the risk of misinterpreting the term 'carrier' was the common use of this term in referring both to SC trait and to carrier status in HIV/AIDS, leading to conflation between these types of 'carriers' or between the conditions themselves. A further local form of medicalisation of SC trait was described as the potential for anxiety to lead some people to seek traditional treatment to 'remove' the risk.

These views on the potential risks of sharing SC trait information were not universally held. Some felt that anxiety about SC trait was unlikely to be important in practice; the experience of normal health in affected children would lead parents to accept the condition as non-harmful or to forget the information in time. Acceptance would be helped by parents' reflection on their own health, since at least one parent must have SC trait, and by the much greater priority likely to be given to hardships confronting many people in the community on a daily basis.

A different and particularly key challenge in sharing SC trait information - raised by relatively few residents, but strongly influencing views in their groups - was the low likelihood that parents would be able to accurately and sensitively pass SC trait information on to their children in the future, particularly given that this might happen in 10 or more years' time. A chief compared this situation to HIV/AIDS control policy:

"Me I still disagree, they should not be told…we have a programme in my area on HIV/AIDs infected children. When a child reaches seven or eight years you are supposed to disclose the news to the child, but there is no parent who discloses the news to the child…So it will be the same thing with sickle cell…" (Chiefs 01/P3)

Within these discussions, others continued to feel that parents were the right people to pass this sensitive information on to children:

"They can help to prepare and explain this to the child over time. They can also explain that being a carrier is not an illness, particularly since they themselves are also carriers." (IDI01)

Finally, some individuals in some groups saw risks of stigmatisation for children with SC trait, including difficulty in finding marriage partners in future. Countering this, others saw that children and families had the option to keep carrier information confidential, and that carrier knowledge and greater openness could work towards reducing stigmatisation. The latter point was made particularly strongly by all mothers of affected children.
As a community facilitator said:

"I think if you borrow a leaf from what HIV/AIDS campaign have been doing, the kind of education which is being provided is fighting that stigma, yeah, it has spread very much…" (IDI09/P3)

Additionally, in relation to risks of gendered forms of blame and stigma in affected families, knowledge and acceptance of carrier status before marriage was seen to support mutual trust and to help parents take care of an affected child in the future, if this were an outcome.

The influence of context on perceptions of outcomes of information sharing

Diversity of opinion featured throughout the discussions described so far, to a large extent related to recognition of multiple influences on the way information sharing might work out for different families. This context-dependent feature of the impact of sharing genetic information has been described in the literature as situated processes of co-construction [39], illustrated here through the interdependence seen between structural features of context, family dynamics and the nature of the information shared.

Macro level influences: cultural and socioeconomic circumstances

Economic status was seen to have a particularly important influence on the benefits of sharing information, particularly for SC disease. For either parent, having independent economic resources was thought to reduce the risks of being 'abandoned', but this was particularly important for mothers. For example, chiefs described that mothers able to bring an income into the family would be more likely to influence family decision making, and less vulnerable to blaming by their husbands. Women in the urban KCR group, particularly individuals with high levels of education, talked positively of their ability to manage their lives even if separation from a partner occurred. Overall, mothers without an independent livelihood were seen as vulnerable to harm in situations of both 'not knowing' and 'knowing' about SC disease. Illustratively, a member of the District Health Management Team spoke of this as a 'pathetic situation'.

The cultural practice of polygamy also influenced the way some families might resolve anxieties about future reproductive risks, in Islamic and traditional faiths where the practice is supported and amongst followers of other religions. In these situations, the existence of strong emotional bonds within a family was seen as a reason to choose polygamy in preference to separation.

Many residents also raised a role for formal education as an influence on the benefits of sharing information, often based on an assumption that greater exposure to schooling would reduce risks of misunderstanding genetic information, a controversial point:

"The highly educated people are the most difficult to explain to, but the ordinary people will listen and think, because education is different to intelligence." (Male KCR, KCR01/P6)

In general, increased schooling in women was related to greater potential for economic independence, and reduced vulnerability to blame and the harms of family breakup, serving as a marker for valued forms of development:

"For one, I think we are moving from the older days…I think to be sincere more people have gone to school, they can understand even the genetic part of this information as opposed to long time ago, and I think people who blame maybe the wife for the genetic makeup - though there's still a gap - but I don't think that's so big" (Community facilitator, IDI09/P1)
Micro level influences: individuals and family relationships

Alongside descriptions of the influence of culture and socioeconomic factors, residents agreed that there was considerable variation within families in the way that individuals relate to each other, seen as a very important influence on the outcome of family disputes [40]. The most commonly described and important positive concepts were those of trust and emotional commitment, with jealousy as a counter. The likelihood of fathers accepting their genetic role in SC disease, including having SC trait and being the parent of an affected child, was seen as dependent on the emotional bonds and levels of trust already in place in the family. Where these existed, paternal denial, gendered blaming and separation were much less likely.

To some extent, observed patterns of SC disease within the family were thought to play into these interpretations, since a common response to recognition of an unknown illness would be to scrutinise extended families on both sides over several generations to see whether any patterns could be established suggesting the origins of the problem [32]. Similar practices have been described for genetic conditions within families in high-income country settings [41]. However, there was little agreement on the implications of particular patterns. Explaining reproductive risks to young couples with one affected child might increase the risk of family breakup, given that no other children would be involved; but parents in this situation might not take a theoretical future reproductive risk seriously. It seems likely that patterns may be less important than other contextual features, particularly the attitudes of individual parents and the existence of trust, as described previously.

Emerging underlying values

While differences of opinion were often based on the influence of macro and micro level features in different families, tension was also associated with controversy in the way two important underlying values in these discussions were prioritised. A generally dominant value was the protection of vulnerable children and family stability, seen as inter-dependent, and described here as a value of 'family interests'. In addition, openness - often as a form of empowerment - was highly valued by many participants. While these values are likely to have gained prominence from the focus of these discussions on children affected by a serious genetic condition, they played an important role in strengthening convictions and concerns through their alignment or tension, respectively, within debates.

Family interests: the interests of vulnerable children, their mothers and families

In weighing up the potential benefits and disadvantages of sharing SC information described so far, many residents placed the interests of children with SC disease and their mothers, linked to stability and harmony in families, as their main priority. A lesser degree of vulnerability for children with SC trait was generally reflected in a less heated defence of their individual interests; instead, more controversy drew on the public health importance of SC trait. In the literature, the concept of family is complex. A distinction has been made between two broad meanings: the household, or 'aggregate or group of actual members who are closely associated by living arrangement or by commitment, for better or worse'; and the family in abstract, including 'the family line…whose boundaries extend over space and time' [42] (p. 35).
The value of family stability in these discussions seems to relate to the former, and often referenced structural economic arrangements focused on the parents, affected child, siblings and other dependent family members. At the same time, intrinsic values of the 'actual family' and its stabilising role in wider family society were also described:

"What I'm saying is…you cannot make this conflict rise in that home, because by all, either right or wrong…you have to make them live together." (Chiefs 02/P2)

Openness and empowerment

A positive idea of the value of openness underpinned many views on the benefits of sharing SC information, as a form of empowerment through knowledge for individuals and the wider community. For example, individual families would be able to take more control over their lives (impacting care of an affected child), experience less blame and stigmatisation, and be better able to manage future reproductive risks:

"Information is power…so if they get the knowledge [about SC disease], they will continue managing the kids very well, ignoring the others, whatever the other people will be saying." (Male field worker, IDI07/P3)

At a community level, openness about SC disease and SC trait was also linked to the potential for advocacy and reducing stigmatisation, the latter often linked to HIV/AIDS control policy:

"Ok…back in 1984, 1982, when the first AIDs cases were discovered.... I think in this same case [of SC disease]…many people don't know or don't understand about it…Now we are living with HIV patients.... it's no longer an issue as it used to be. [For SC disease] as time goes by…a few members of the family will accept, and then later a few members of the village accept, and then later on a few members of the location accept…" (IDI07/P3)

Similarly, a field worker told the story of a mother whose child had been found to have SC disease during research, and whose attitude had changed from an initial one of denial and anger to one of positively seeking to influence community knowledge and attitudes by becoming a local advocate for SC disease in her village. The suggestion that knowledge could empower affected families or communities to lobby for better SC disease services in the future also reflects a community level benefit. At the scale of a specific project, this may be difficult to imagine, but patient advocacy and public pressure have in fact been the driving force for SC disease programmes in many parts of Africa where these exist, in the form of charitable foundations [18].

Finally, several residents described a potential negative impact of a lack of openness about SC trait information, reflecting on policies at that time for the genomics study in Kilifi. The risk was of loss of trust in researchers (and the research institution) by individuals or the wider community if it became known that SC trait information had been withheld. Examples of ways this might happen included future SC disease screening with a conflicting result, or the birth of an affected child in a future generation of the family:

"Maybe we may just say that we don't want to displease the parents, but then after that, yes, the kid will be a carrier and nothing will change that…when he or she marries another carrier, ok, 'we were told that the kid was negative, how did this come about? Or was there something which was being hidden as to..?' Ok, it will bring some question marks."
(Field worker, IDI06/P1)

Discussion

In this study, we aimed to explore the informed views of a range of residents in Kilifi County in Kenya on how researchers should manage information generated on SC disease and trait during studies, as a form of community consultation. We sought to understand views on what information should be shared and how, and with what potential benefits, risks and other ethical implications, to contribute to the development of policy. In this discussion, we highlight key findings and their implications, and lessons learned for methodology in community consultation in this and similar settings.

Sharing study-generated information on SC disease and SC trait

Overall, the consultation indicated a high ethical importance of sharing study-generated information on SC disease in children with their families. Community perceptions of benefit make the case for disclosure compelling, based on strong and widely shared views on the likelihood of limiting probable and severe harms for many participant children, their mothers and families. This conclusion is underlined by research guidelines on the disclosure of genetic findings that reference the importance of taking account of the clinical and social benefits (or utility) of sharing information [13]. The study also illustrated the sensitivities involved in sharing SC disease information, with risks of generating unnecessarily high levels of anxiety in families and putting sometimes extreme strain on parental relationships. Any resulting parental separation was seen as engendering very severe hardship for some mothers and affected children. In addition, an aim of sharing information on parental roles in SC disease to reduce risks of paternal denial of responsibility for their child's condition could paradoxically increase these risks, depending on the influence of existing relationships, socioeconomic context and the way in which information is shared. A specific challenge was the risk of generating paternal requests for individual screening to 'test' for paternity. Community views on what researchers' responsibilities might be in this situation were mixed, but we have argued [43] that researchers should not test parents, since the harms likely to be caused by showing misattributed paternity, even if only in a few cases, outweigh responsibilities to counter paternal denial in this way.

In the case of SC trait, where the perceived harms were seen as less immediate and severe, there was also greater disagreement on the benefits of sharing information. Disagreement and ambivalence were strongly underpinned by the lack of public access to screening for SC trait in healthy adults in Kenya, and by a series of concerns, including: parents' ability and willingness to pass on genetic information to their children in future; the psychological effect of this information on affected individuals; and risks of stigmatisation, including where the word 'carrier' was conflated with the use of this term in HIV/AIDS. In this way, a few participants supported a right not to know about carrier status in these circumstances. At the same time, different forms of benefit from sharing SC trait information were seen, including the ability to create wider public awareness of SC disease, such that public advocacy could apply pressure for wider screening services to be set up. A central benefit seen was that of 'preparedness', with the potential to limit future harms through better understanding of the condition.
Relatedly, awareness of SC trait status within a partnership before marriage was seen to have the potential to limit risks of parental separation and gendered blame if an affected child was later conceived. In any case, participants were concerned that failing to share SC trait information, where this was known by researchers, could undermine relationships between residents and researchers where this was seen as 'withholding' important information. One implication of this diversity of opinion is that the concept of healthy carrier status in SC disease - and the potential for this to be disclosed by research - should be shared during the consent process, and participants given a choice about accessing information on SC trait where this is seen as helpful. A similar approach has been taken for carrier findings during newborn SC disease screening programmes in Ghana [9], and is recommended by the European Society of Human Genetics [44]. An important challenge to views that study-generated SC disease and trait information should be shared, and one that was not considered in depth during the consultation, comes from considering the resources, and appropriateness, of researchers taking responsibility for supporting counselling and clinical services for a lifelong health condition of public health importance. Apart from arguments about resource prioritisation in research, research funding cycles are often incompatible with providing lifelong services; and researchers taking on this role may undermine that of Ministry of Health partners [15]. Resource arguments are particularly challenging for disclosure of SC trait findings, where the perceived benefits are less urgent and certain than for SC disease, and the resource requirements to validate tests and provide counselling are greater, given its higher prevalence (18% SC trait vs. 1% SC disease prevalence in infants in Kilifi). In this way, the findings highlight the challenge, described in the literature on disclosing genetic findings, of balancing the benefits of disclosure to individual study participants against the resources needed to support realisation of these benefits [15,45], in this case through ensuring provision of SC services, including clinical care and counselling. On this basis, while the absence of effective public services may seem to suggest limits to researchers' responsibilities for disclosure in some instances, our findings highlight the moral challenges of failing to share study-generated information on SC disease. Rather, the ethical importance of limiting harm in this situation, together with the public health nature of SC disease, underlines the importance of researchers working in prior partnerships with government health authorities to ensure that - as far as possible - disclosure and services support the long term interests of study participants.
Learning about community consultation
Community consultation is widely recognised as important in supporting ethical practice in many types of research, particularly to take account of potentially conflicting principles emerging from theory or practice [46,47]. There is less published experience or guidance on approaches to community consultation, including on ways of consulting on technical aspects of research that are not necessarily familiar to potential participants [25].
Our experiences suggest that building information-sharing activities into the consultation was essential, given the complexity of the discussions and low awareness in the community of many ethically important biomedical aspects of SC disease and its inheritance. In addition, using deliberative methods - including re-visiting issues over time - facilitated a strong and reflective engagement, and generated diverse and detailed accounts of the views and values of residents. Diversity was largely generated in two ways: firstly, by participants recognising inter-related individual and contextual influences on the likely impact of information sharing; and, secondly, by participants' different ways of prioritising underlying values. The depth of exploration gave insights into the central importance attached to family interests and policies of openness; alignment of these values often underpinned strength of opinion and levels of agreement. Similarly, tension between these values or their prioritisation in specific situations lay behind much disagreement. The ethical implications of both forms of diversity suggest that community consultation should be based on carefully accounting for difference, including in the voices listened to, the sharing of information to support debate and the use of methods that explore reasoning and reflection over time, without resort to consensus building. At the same time, further research on these and other methods for consultation is important to strengthen understanding of the potential contribution of these findings to policy, including how typical these 'community voices' are, whether and how wider community accountability should be sought and whether views based on the assimilation of relatively new understandings might importantly continue to develop over time. In particular, an iterative approach involving feeding back of the findings to community and other national research stakeholders in Kenya would be important in developing wider policy [48].
Conclusions
This study has shown the high ethical importance and the sensitivities of sharing study-generated information on SC disease in children in Kilifi and other similar settings, including the potential to limit harms that may otherwise be very severe for affected children and their families. In this consultation, arguments for sharing SC trait information created greater controversy, and were less compelling in terms of the nature and probability of the harms potentially limited. The extent of diversity in views on sharing carrier status suggests that disclosure policies that support individual choice would importantly maximize the possibility of individual benefit. The public health nature of SC disease and the ethical importance of limiting harm in these situations emphasise the importance of researchers whose studies include SC screening working in prior partnerships with government health authorities to ensure that - as far as possible - disclosure and services support the long term interests of study participants. Theoretical questions on approaches to normative analysis of empirical findings from community consultation on health research practice, and the role of in situ empirical debates, remain far-reaching [49] and have not been addressed in this paper. In fact, there remains a gap in understanding which data collection and analysis methods would be most effective in community consultation on health research, particularly for technical and unfamiliar aspects of studies [25].
Within these limitations we conclude that approaches using deliberative discussion and drawing on shared information, normative reflection and an exploration of the potential for difference can importantly enrich ethical debates on good practice.
2017-06-17T01:54:22.037Z
2013-10-14T00:00:00.000
{ "year": 2013, "sha1": "7c955286985a28573325535b2c63c3f59a9c4a40", "oa_license": "CCBY", "oa_url": "https://bmcmedethics.biomedcentral.com/track/pdf/10.1186/1472-6939-14-41", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "50eb71d2844d5e5a09052f0873c844d865a283e4", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
252700948
pes2o/s2orc
v3-fos-license
Assessment of Surface and Build-up Doses for a 6 MV Photon Beam using Parallel Plate Chamber, EBT3 Gafchromic Films, and PRIMO Monte Carlo Simulation Code
Background: Accurate assessment of surface and build-up doses has a key role in radiotherapy, especially for the superficial lesions with uncertainties involved while performing measurements in the build-up region. Objective: This study aimed to assess surface and build-up doses for 6 MV photon beam from linear accelerator using parallel plate ionization chamber, EBT3 Gafchromic films, and PRIMO Monte Carlo (MC) simulation code. Material and Methods: In this experimental study, parallel plate chamber (PPC05) and EBT3 Gafchromic films were used to measure doses in a build-up region for 6 MV beam from the linear accelerator for different field sizes at various depths ranging from 0 to 2 cm from the surface with 100 cm source to surface distance (SSD) in a solid water phantom. Measured results were compared with Monte Carlo simulated results using PENELOPE-based PRIMO simulation code for the same setup conditions. Effect of gantry angle incidence and SSD were also analyzed for depth doses at the surface and build-up regions using PPC05 ion chamber and EBT3 Gafchromic films. Results: Doses measured at the surface were 14.78%, 19.87%, 25.83%, and 31.54% for field sizes of 5×5, 10×10, 15×15, and 20×20 cm2, respectively for a 6 MV photon beam with a parallel plate chamber and 14.20%, 19.14%, 25.149%, and 30.90%, respectively for EBT3 Gafchromic films. Both measurement sets were in good agreement with corresponding simulated results from the PRIMO MC simulation code; doses increase with the increase in field sizes. Conclusion: Good agreement was observed between the measured depth doses using parallel plate ionization chamber, EBT3 Gafchromic films, and the simulated depth doses using PRIMO Monte Carlo simulation code.
Introduction
In radiotherapy, the accuracy of measurement in surface doses is important to achieve the desired outcome of radiation treatment, especially when treating superficial tumors, due to the secondary charged particles, such as electrons; these charged particles are mainly produced during the interaction of photons with air, in the linear accelerator beam defining system, i.e. the collimator, and the scattering materials in the path of the beam [1][2][3].
According to the International Commission on Radiological Protection report 59 (ICRP-59) [4] and the International Commission on Radiation Units and Measurements report 39 (ICRU-39) [5], the skin depth is considered to be 0.07 mm, corresponding to the depth of the interface between the dermis and epidermis layers of the skin [6]. Accurate and precise measurements at such depths are both difficult and challenging due to the high dose gradient in the superficial region and the absence of charged particle equilibrium [2,6]. Although extrapolation chambers are the best choice for the measurement of surface dose [7], their unavailability at most clinical facilities and the time-consuming measurement procedure make them impractical in clinical setups [6]. A parallel plate chamber (PPC) with fixed electrode separation has been used as an alternative for the measurement of the surface dose and build-up region dose. However, factors such as cavity perturbation must be accounted for to improve the accuracy of such measurements [7][8][9]. The Monte Carlo (MC) simulation method is widely regarded as a benchmark for dose estimation in radiotherapy [10,11]. Various authors have evaluated the build-up region doses with the EGSnrc and BEAMnrc MC simulation codes [6,12,13]; however, these simulations require long computational times and substantial computational resources. This study aimed to measure the surface and build-up region doses using the available parallel plate chamber (PPC) and EBT3 Gafchromic film and to compare the results with MC simulations using the PENELOPE-based PRIMO MC code.
Material and Methods
The solid water phantom used for the measurements had properties similar to those of water, such as relative electron density, effective atomic number, and similar absorption and scattering interaction properties. The PPC was embedded in a custom drilled slot in a 30×30 cm2 piece of solid water phantom slab. A minimum of 10 cm of backscatter thickness was used to provide full phantom scatter equilibrium for these measurements for a 6 MV photon beam. Measurements were performed at depths of 0, 1, 3, 5, 7, 10, 12, 15, and 20 mm in the solid water phantom at a source to surface distance (SSD) of 100 cm for field sizes of 5×5, 10×10, 15×15, and 20×20 cm2 and gantry angles of 0°, 30°, and 60°. The charge collected by the ion chamber at both polarities, i.e. +300 V and −300 V, was recorded, and the average value, Mavg = (|M+| + |M−|)/2, where M+ and M− are the electrometer readings obtained for accumulated charges at positive and negative polarity, was then normalized to the dose maximum value. The readings obtained from the PPC were corrected for over-response by applying Gerbi and Khan's method, which introduced a modified version of the correction factors of Velkley et al. to account for the effect of the collector edge sidewall distance of the parallel plate chamber [7,9]. In this correction, P(d, E) is the measured percentage depth dose, P'(d, E) is the corrected percentage depth dose at depth 'd', 'E' is the energy of the photon beam, 'l' is the plate separation, 'α' is a constant with a value of 5.5, and ξ(0, E) is the energy-dependent chamber factor giving the over-response per mm of chamber plate separation at the surface of the phantom.
'IR' represents the ionization ratio measured at depths of 20 cm and 10 cm for a field size of 10 cm × 10 cm at a fixed source-detector distance of 100 cm; 'C' is the sidewall-collector distance in mm, and 'd' is the depth of the chamber front window (d=0 for the surface).
Gafchromic film measurements
Gafchromic film is a well-established dosimeter for the measurement of surface dose due to its high spatial resolution and low spectral sensitivity over a broad range of doses [6]. Doses from Gafchromic EBT films and parallel plate chambers were compared by Bilge et al. [14], who showed differences within 5% for 6 MV and 3% for 18 MV photon beams. In the current study, Gafchromic EBT3 film (International Specialty Products, NJ, US) was used, consisting of a 28 μm thick active layer sandwiched between two 125 μm matte-polyester substrates [15]. The active layer of the film contains the active component, a marker dye, stabilizers, and other components giving the film its near energy-independent response. The effective point of measurement was assumed to be at the geometric center along the thickness of the exposed film. All films used for measurements were obtained from the same lot (packet) (#09061602) and cut into squares of 4 cm × 4 cm. After irradiation, the films were scanned using an Epson 10000 XL flatbed scanner (Epson America, Inc., Long Beach, CA). A 24-hour gap was allowed between irradiation and scanning to accommodate post-irradiation color changes. For scanning, transmission mode at 72 dpi resolution and 48-bit RGB format was used. As the optical properties of the Gafchromic film are sensitive to the scanning orientation of the film on the scanner bed, all the irradiated film pieces were scanned in the same orientation, which was portrait mode [16]. Images were saved and analyzed using Film QA Pro software (National Institute of Health, USA). For calibration purposes, a set of EBT3 films from the same lot used for the actual measurements was irradiated to establish the calibration curve. For irradiation, the film was placed at a depth of 5 cm in a solid water phantom and irradiated to doses ranging from 0 to 600 cGy at 100 cm SSD for a field size of 10×10 cm2. After scanning, the optical density of the film was obtained from the red component of the RGB (red, green, and blue) images using Film QA Pro software. For build-up dose measurements, the film was placed at depths of 0, 1, 3, 5, 7, 10, 12, 15, and 20 mm in the solid water phantom; 200 MUs were delivered for each irradiation.
PRIMO MC simulation
The MC simulation technique involves the use of known probability distributions for the interaction of beam particles in various materials and the simulation of random trajectories of individual particles. PRIMO is a new MC simulation system (computer software) used for the effortless simulation of most Varian and Elekta linear accelerators, estimating dose distributions in phantoms and computed tomography (CT) image sets; it can also use phase-space files in International Atomic Energy Agency (IAEA) format and import structures in the standard DICOM-RT Structure format [17][18][19]. The PRIMO simulation system includes: (a) accurate physics from the PENELOPE code, (b) variance reduction techniques that significantly reduce the computation time, and (c) a user-friendly graphical interface with tools for analyzing the generated data [18,19].
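Before moving on to the simulation setup, it may help to sketch what the red-channel film calibration described above looks like computationally. The snippet below is a minimal illustration, not the FilmQA Pro workflow used in this study: it assumes a common empirical net-optical-density model, D = a·netOD + b·netOD^n, and every pixel value and calibration point is an invented placeholder.

```python
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_unexposed, pv_exposed):
    # Net optical density from red-channel pixel values of a
    # transmission scan (16 bits per channel in a 48-bit RGB image).
    return np.log10(pv_unexposed / pv_exposed)

def dose_model(nod, a, b, n):
    # Common empirical calibration form: D = a*netOD + b*netOD**n.
    return a * nod + b * nod ** n

# Hypothetical calibration data: delivered doses (cGy) and net ODs
# read from films irradiated at 5 cm depth, 10x10 cm2, 100 cm SSD.
doses = np.array([0, 50, 100, 200, 300, 400, 500, 600], dtype=float)
nods = np.array([0.0, 0.055, 0.100, 0.180, 0.245, 0.300, 0.350, 0.395])

params, _ = curve_fit(dose_model, nods, doses, p0=(1000.0, 2000.0, 2.5))

# Convert a measurement film's net OD to dose with the fitted curve.
nod_meas = net_od(42000.0, 25500.0)  # placeholder pixel values
print("netOD = %.3f -> dose = %.1f cGy" % (nod_meas, dose_model(nod_meas, *params)))
```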
In the PRIMO MC code, the simulation of the linear accelerator (linac) and the phantom or CT image set can be performed in 3 segments (Figure 1): the first segment 's1' simulates the field-independent part of the linear accelerator, from the electron beam source to just above the moveable collimators; the second segment 's2' simulates the field-defining part of the linac, i.e. the moveable jaws and multileaf collimator (MLC); and the last segment 's3' simulates the dose distribution in the phantom or CT image set. In PRIMO, the primary electrons reaching the target are defined by a Gaussian distribution [18]. The latest released version (0.3.1.1772) of PRIMO was used in this study on a desktop computer with 32 GB RAM and an Intel® CPU E5-2695 running a 64-bit operating system. In the PRIMO MC code, the default parameters of the 6 MV photon beam for the Varian Clinac 600 C model are an electron beam with an initial energy of 5.4 mega electron volts (MeV), an energy FWHM of 0 MeV, a focal spot size of 0 mm, and a beam divergence of 0°.
Tuning of initial beam parameters in PRIMO
The primary beam parameters were adjusted to reach an acceptable difference with the measured data [19]. For tuning the 6 MV beam, the whole linear accelerator geometry was simulated at once to produce phase-space files in IAEA format. The initial beam energy was varied from 5.4 MeV to 6.2 MeV in steps of 0.1 MeV until good agreement between measured and calculated percentage depth dose (PDD) was obtained at 5.8 MeV for a simulated field size of 10 × 10 cm2 in a water phantom at 100 cm SSD. A similar iterative approach was applied to the energy FWHM and the focal spot size, varying the initial values and repeating the simulation process to find the closest match between measured and calculated PDDs and profiles. The focal spot size was varied from 1.0 mm to 1.5 mm in steps of 0.1 mm, the energy FWHM from 0.1 to 0.2 MeV, and finally the beam divergence from 0.1° to 1°, to determine the configuration giving the highest gamma index passing rate using 1%/1 mm criteria with the inbuilt PRIMO analysis tool for comparing experimental and simulated data. When simulating the linear accelerator parts (s1 and s2), splitting roulette was selected; according to the authors of the PRIMO code, splitting-roulette is recommended for nominal energies below 15 MV, while rotational splitting is usually more efficient for nominal energies above 15 MV [17,18,20,21]. These applied variance reduction techniques and the geometry files used in the simulation have been tested extensively by many researchers in the past [21][22][23][24][25]. The final beam parameter values selected for the simulation of the Varian Clinac 600 C linear accelerator after tuning in PRIMO were: an initial energy of 5.8 MeV, an energy FWHM of 0.18 MeV, a focal spot size of 1.2 mm, and a beam divergence of 0.2°. For the defined beam parameters, further simulations for 5×5, 10×10, 15×15, and 20×20 cm2 field sizes were performed, tallying the dose in a homogeneous water phantom of size 30×30×30 cm3 with a bin size of 2×2×1 mm3 at an SSD of 100 cm. More than 1×10^8 histories were simulated in PRIMO for each field size to keep the dose uncertainty below 1%. Simulated depth dose curves were saved in .txt format and normalized to the depth of maximum dose. The percentage difference between measured and simulated PDD values was evaluated statistically for each field size.
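As a concrete illustration of that last step, the sketch below normalizes a measured and a simulated depth-dose curve to their respective maxima and prints the per-depth difference in percentage points. The depth grid matches the measurement depths quoted above, but the dose values are invented placeholders, not data from this study.

```python
import numpy as np

def to_pdd(dose):
    # Express a depth-dose curve as percent of its dose maximum.
    return 100.0 * dose / dose.max()

# Depths used in this study (mm) and placeholder readings (arbitrary
# units) standing in for a PPC measurement and a PRIMO-simulated curve.
depths = np.array([0, 1, 3, 5, 7, 10, 12, 15, 20], dtype=float)
measured = np.array([19.9, 46.0, 74.0, 86.0, 92.0, 97.0, 98.5, 100.0, 99.0])
simulated = np.array([18.0, 44.0, 73.0, 85.5, 91.5, 96.8, 98.3, 100.0, 99.2])

diff = to_pdd(measured) - to_pdd(simulated)
for d, x in zip(depths, diff):
    print(f"depth {d:4.0f} mm: difference {x:+.2f} percentage points")
```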
Results
In this paper, the build-up doses measured using PPC and Gafchromic films were compared and validated against the MC simulated results from the PENELOPE-based PRIMO code for the 6 MV photon beam from the Varian Clinac 600 C. Figure 2 shows the PDD values in the build-up region measured with PPC as well as film and estimated with MC simulation for different field sizes. The dose values measured with PPC and EBT3 film and the PRIMO MC simulated results for different field sizes at different depths are shown in Table 1.
Discussion
Based on the results, PDD values measured with PPC and EBT3 films and simulated using PRIMO MC agreed within 10% of the dose at 0.0 mm depth, within 5% for the first 4 mm of depth, and within 2% for measurements at depths beyond 4 mm. The maximum variation among the measured and simulated results was observed at 0 mm and 1 mm depth. Figure 2 shows that the PDD increased with an increase in field size, as expected [26]. The measurements with EBT3 film showed more coherence with the PRIMO MC simulated data. A maximum variation of 8.1% was observed at the surface and 4.3% at 1 mm depth between the PRIMO and EBT3 film results, while a maximum variation of 10.1% at the surface and 5.0% at 1 mm depth was observed between the PPC and PRIMO MC simulated results. Between PPC and EBT3 films, the maximum variation observed was 3.9% at the surface. An earlier study [27] reported surface doses measured with a Markus PPC for 6 and 10 MV photon beams from a Varian Clinac 2100 C/D for a 10×10 cm2 field size of 15.8% and 11.8%, respectively. Qi et al. [28] measured PDD for 6 MV in a water-equivalent phantom with an Attix PPC on a Varian Clinac 600 C linear accelerator and reported values of 12.9%, 18.9%, 29.1%, and 37.9% for 5×5, 10×10, 20×20, and 30×30 cm2 field sizes at 100 cm SSD. Yu et al. [29] reported surface doses of 16% and 13% for a 10×10 cm2 field size for 6 MV and 18 MV beams, respectively, on a Varian Clinac 2100 C with an Attix model 449 PPC.
Effect of variation in the angle of incidence and SSD on surface doses
Figure 3 shows the PDD values at the surface for 5×5, 10×10, 15×15, and 20×20 cm2 field sizes for gantry angle incidences of 0°, 30°, and 60°. For oblique incidence of the 6 MV photon beam, i.e. for gantry angles of 30 and 60 degrees, the surface dose increased from 14.33% to 30.67%, from 21.32% to 36.8%, from 26.53% to 42.89%, and from 31.01% to 46.95% for the 5×5, 10×10, 15×15, and 20×20 cm2 field sizes, respectively. The surface dose increased with an increase in beam incidence angle due to the shift of charged particle equilibrium towards the surface. As the angle of the incident beam increased, the depth of dose maximum shifted towards the surface due to increased electron contamination and more photon interactions along the oblique path of the beam [30]. Figure 4 (effect of source to surface distance (SSD) on surface dose for field sizes of 5×5, 10×10, 15×15, and 20×20 cm2 at 0° gantry angle) shows that the surface dose decreases as SSD increases; however, the effect is not significant. The dose deposited at the surface is due not only to the primary photon beam but also to the contaminant electrons generated in the air and in the collimator head that reach the surface. The contribution of these contaminant electrons to the surface dose changes little with SSD, however, because the electrons produced in the accelerator head have relatively high energy: the range of these electrons does not change significantly in the phantom when they have to travel 10 cm more or less in air.
A similar situation is expected for photons when there is a change in the traveling distance in air, as no considerable change in the spectral components of the photons is expected [27,31,32].
Conclusion
In the present study, surface doses were analyzed for different field sizes at different gantry angles using three different tools: PPC, Gafchromic EBT3 films, and the PRIMO MC simulation code. The simulated MC results from the PENELOPE-based PRIMO software for the 6 MV photon beam were in good agreement with previously reported data for similar machines. The differences between the measurements by PPC and EBT3 films and the simulated results by the PRIMO MC code were within 10% at the surface and within 5% for the first 5 mm of depth. This shows that the PRIMO MC simulated results are in good agreement with the doses measured using PPC and EBT3 films and provide an accurate estimation of doses at the surface and in the build-up region. This accurate estimation of surface and build-up doses may also help in managing radiation-induced late skin toxicities.
2022-10-05T15:06:37.131Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "f2f658b9872c64751e1d028db384929831a27071", "oa_license": "CCBYNC", "oa_url": "https://jbpe.sums.ac.ir/article_48398_46fe64bd031a97811abd197f0579a45e.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c011a7994858cc9884ae46671ffcfa2f316e24e4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
151448674
pes2o/s2orc
v3-fos-license
ICT Use in Elementary Schools and the Future Teachers' Perception
The aim of this article is to explore the student teachers' perception of ICT use in elementary schools in Shkodra city. This research presents the perception that students have of ICT use in several elementary grades during their practice in four different schools. The paper presents an interpretative analysis of the perceptions of future teachers enrolled in the teaching programmes of Shkodra's university. In this study, the students were asked to analyze the situation in some elementary classes in Shkodra's schools regarding the use of ICT tools during different classes they attended during their teaching practices. The students also reflected on the association between the use of ICTs in the elementary classes and the reshaping of class teaching and learning, as well as on the modification of the learning and teaching process itself. The paper finishes by identifying some of the perceptions and needs the students will face and would like to change in their future teaching.
Introduction
ICTs have the potential to innovate, accelerate, enrich, and deepen skills, to motivate and engage students, to help relate school experience to work practices, create economic viability for tomorrow's workers, as well as strengthening teaching and helping schools change (Davis and Tearle, 1999; Lemke and Coughlin, 1998; cited by Yusuf, 2005). As Jhurree (2005) states, much has been said and reported about the impact of technology, especially computers, in education (Noor-Ul-Amin). Students using ICTs for learning purposes become immersed in the process of learning, and as more and more students use computers as information sources and cognitive tools (Reeves & Jonassen, 1996: 693-719), the influence of the technology on supporting how students learn will continue to increase (Noor-Ul-Amin). The use of technologies in the teaching and learning process is very important in our schools in Albania. Even though it is progressing, it is not at the level the students and teachers need, nor at the level of ICT development in general. In addition, reforms are trying to improve the teaching and learning process across the educational system. The universities support this progress, especially for future teachers, through the subjects and teaching models they offer using ICT tools. The perceptual process is a sequence of processes that work together to determine our experience of, and reaction to, stimuli in the environment. Perception does not just happen; it is the end result of complex behind-the-scenes processes, many of which are not available to our awareness. The process is divided into four categories: stimulus (all the elements in our environment that we can potentially perceive), electricity, experience, and knowledge (all the information that a person brings to a situation) (Goldstein, 2009: 5).
In Shkodra University the curriculum for future teachers has changed over the years. These changes reflected how educational technologies were viewed in relation to the context and the needs. The implementation of the new concepts on technology started during the years 1998-2003. The subject was first named "Computer", because this was the period when computers began to become part of the Albanian lifestyle. So the first need for the students was to learn how to use a computer, for themselves in the future or to prepare their class work. During this period people in general learnt to use the computer, and computers became part of their everyday life at home and at work. Through the years 2003-2008 there was no subject related to computers or other technologies in the future teacher programmes. In 2008 the subject returned to the curricula of future teachers as "The use of the new technologies", and it has now been renamed "Learning through new technologies", programmed in one term per year. The students attend this subject 4 hours per week over the 15 weeks of a term (semester): 2 hours of lectures and 2 of seminars. During these classes the students receive theoretical information combined with practical lessons from elementary school teachers about the way ICT is used in classes. So this course is not only about knowing how to use computers or other ICT items, but also about gaining a sense of technology integration and a new pedagogical thinking with ICTs. The students also prepare lesson plans, in groups and individually, using ICT tools. At the same time, the students are in the last semester of university. They have the active practice in the elementary schools of the city, to observe and to teach practically before they graduate as teachers of elementary schools. This study is based exactly on their practice during this period of time.
Methodology
This is a qualitative study of a focus group of 30 third-year students of "Luigj Gurakuqi" University. They attend the third year (Bachelor) at the Faculty of Education, in the programme for teachers of elementary schools. These students follow the active practice and passive practice in 4 schools of Shkodra city ("Branko Kadia", "Skënderbeu", "Mati Logoreci" and "Pashko Vasa"). They attend this practice for one year, in two different terms (15 weeks + 14 weeks). In the first term they do the passive practice (they follow the normal lessons of a teacher in a class) and in the second term the active one (they teach in the elementary classes). The questions asked the students what they had really seen in these classes, the ICT tools used by the teachers, the teachers' needs for ICT support, and, on the other hand, their perception of their own ability to use ICTs in the future and how they see themselves using ICT. In the students' opinion, ICT tools are lacking in our schools, even though they feel almost ready to prepare different classes using ICT in their future teaching.
Results and Discussions
The focus group of 30 students, third year, at Bachelor level in the education programme, follows the active practice (they teach in the elementary classes) and passive practice (they follow the normal lessons of a teacher in a class) in 4 schools of the city: "Branko Kadia", "Skënderbeu", "Mati Logoreci" and "Pashko Vasa". The focus group answered 5 open-ended questions to reflect the situation of the schools and their points of view about their perspective in front of the context and their future teaching. Q1. "How do you assess the context (school and grade) where you achieve professional practice related to ICT tools and their use?" About this question, the students gave different answers depending on the schools where they followed the practice. In Branko Kadia school the students mostly admit: No tools to be applied; Only a tape recorder for the music lessons; There are no tools; There are no audio-visual tools in the classes where I have done the practice; There are no tools; No tools in this school; No conditions and no tools; Two teachers used the English room with overhead projector and computers during all the lessons; All the teachers use a tape recorder during the music lessons; Only in the English classroom do they use computers and an overhead projector. In Mati Logoreci school they often admit: Never seen ICT tools applied; Never seen, neither in the passive nor in the active practice, a tool being used; There are no tools; There are no labs or ICT tools. The students who followed the practice in Pashko Vasa school mostly confirm that: There are no tools; No lessons with an ICT tool; No tools at all; There is only a tape recorder during the music lessons and the personal cell phone of the teacher to show pictures or to listen to the songs; The absence of tools is a very disappointing situation; There is only a tape recorder for all the elementary classes and the computer lab for the secondary school. And the students who had the practice in Skënderbeu school mostly stated: There are no ICT tools, and no lessons using ICT tools; There are no other tools, only a tape recorder used every Monday morning for the Anthem; There are no tools, only a tape recorder in the corner of the class, never used during the practice; No ICT tools in the classes where I have had practice during this year in this school; No tools, only a computer lab, but never used by the elementary school teachers; Only the teacher with whom we had the active practice has her own computer and uses it, but in general there is no electricity, so it is impossible to use it according to the lessons. Based on the students' perceptions we can highlight that in the schools there are generally computer labs only for the secondary classes, not for the elementary school. On the other hand, the elementary school teachers do not try to challenge themselves by using computers in their lessons. We can admit that there are sporadic efforts by the teachers to use ICT tools, but they do not have the right conditions to use them in their classes, nor the right software in the mother tongue, which makes teaching difficult, as does finding resources to use in their lessons. Q2. "What are the current needs of an elementary teacher associated with the use of technology?"
There were different points of view. The most frequent answers the students gave about the needs were: To have the tools and the necessary environments; To have the ability to use computers and other ICT tools; There are no tools and no attempt by the teachers to use ICT tools; The teachers need trainings; The teachers need courses and the right conditions to use these tools; The teachers have never been asked to use ICT tools; They have no computers or overhead projectors; The teachers need tools (tape recorders, PCs or overhead projectors), software in the Albanian language, and training on their use during lessons; Knowledge and trainings on computer use; Training and software available for the subjects and ages of the pupils; The right conditions and tools according to the subjects and themes of the lessons. The students see above all the lack of ICT tools and the right environment, and to a lesser degree the need for trainings, because they think the teachers know how to use some tools, especially the basic computer programmes. Their answers to the question (Q3) "How do you assess your level of knowledge about the use of technological tools in the future with regard to the learning process? How prepared do you feel about your lessons of applied learning technologies?" were almost the same: They have a good knowledge, but need more practice in using the tools; They have good abilities in using computers, but need more tools than these to try out before their future teaching; They are ready; They have a fair knowledge; They have good knowledge of computers, but need more practice before their real class; Not very good knowledge, but with specific training and the right conditions they would do their best; They feel able because they have had the chance to practice their use; They need frequent use of these tools. In general the students are able to use the main ICT tools, but they need real time in real classes to use them and to practice their ideas through these tools. This is understandable, because they do not have the chance to practice their knowledge in our classes and they are in the first year of their experience as teachers. To Q4, "Are the pupils or children of a primary level willing to engage in ICT use during their lessons?", the students gave almost the same statements. In general they think that the pupils are prepared to use ICT tools; others think that the pupils are prepared because they use different tools at home (computers, cameras, and cell phones), and only one of them admits that they need more preparation. The students see their own future problems in using ICTs as bigger than those of their pupils, because they see that the pupils are interested in and fond of ICT tools as a form of entertainment and part of their free time. Q5. "Do you see the introduction of learning technologies as an obstacle or a relief in your work in the future?"
This was the last question in our focus group. In general the students see the introduction of the technologies as a facilitator for the teachers and for the pupils too; others think that it is a relief for the pupils, but its use should be well programmed; a small group thinks that the use of these tools may be an obstacle or a relief depending on the subject, so in some subjects they are a relief and in others they think they are obstacles; others think that it is a relief especially in reducing the time needed for understanding and learning and in raising concentration and attention; and a small group thinks that it is an important relief, but this process needs good management, because otherwise it will be an obstacle. We can highlight the fact that the introduction of ICT in future classes is more a relief than an obstacle, and in relation to activities and practices during lessons ICT is a necessity for better future teaching.
Conclusions and Recommendations
Based on their perception, the students admit that they:
• Have a low level of computer skills for specific programmes, owing to theoretical computer knowledge that has not been put into practice.
• Are optimistic about their possibility and ability to use ICT tools during their classes in the future, but need various trainings, though not as a necessity.
• See a big problem in the interaction between ICT tools and the conditions of the classrooms.
• Need help and support to change the conditions in schools with the aim of improving the quality of teaching with their future pupils (tools, environment, trainings, the right software, etc.).
• Need to practice not only with computers, but also with overhead projectors etc., before going into real classrooms.
• See the introduction of ICTs in the teaching and learning process more as a relief, and the pupils as good collaborators.
2018-12-15T05:04:01.086Z
2016-12-30T00:00:00.000
{ "year": 2016, "sha1": "f66a23513e59215893bf1c1063f8489b78196c5b", "oa_license": "CCBY", "oa_url": "https://www.mcser.org/journal/index.php/ajis/article/download/9767/9405", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "f66a23513e59215893bf1c1063f8489b78196c5b", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Psychology" ] }
42994276
pes2o/s2orc
v3-fos-license
Inhibition and inactivation of bovine mammary and liver UDP-galactose-4-epimerases. Bovine liver and mammary UDP-galactose-4-epimerases were investigated with respect to various inhibitors and inactivators. Uridine nucleotides and NADH are potent inhibitors with Ki values in the low micromolar range. The NAD+/NADH ratio may be an important physiological control mechanism, for it affects markedly the activity of the enzyme, with 50% inhibition occurring at a ratio of 20:1. In the presence of uridine nucleotides, binding of NADH to the epimerases is enhanced. Consequently, the effect of changes in the NAD+/NADH ratio in vivo would not be immediately apparent, as uridine nucleotides would slow down the displacement of NADH by NAD+. Neither uridine nor galactose 1-phosphate inhibits the purified enzymes, as previously reported with the impure liver enzyme. Uridine nucleotides provide almost total protection against the apparent first order inactivation of the epimerases by trypsin and allow determination of dissociation constants. NAD+ partially protects against trypsin inactivation. Inactivation with various sulfhydryl reagents is complex, and the results indicate that at least three sulfhydryl groups may be modified before total inactivation occurs. Partial inactivation occurs upon modification of the epimerases with 2-hydroxy-5-nitrobenzyl bromide. Some protection against this modification is provided by the combination of NAD+ and UDP. UMP, UDP, UTP, NAD+, NADH, ADP, galactose-1-P, galactose-6-P, glucose-1-P, glucose-6-P, trypsin, N-ethylmaleimide, showdomycin, cysteine, p-hydroxymercuribenzoate, 2-mercaptoethanol, dithiothreitol, 2-hydroxy-5-nitrobenzyl bromide type II, sodium pyruvate, and type III lactic dehydrogenase (EC 1.1.1.28) were purchased from Sigma Chemical Co. All other reagents were as described previously (8). Methods - UDP-galactose-4-epimerase was isolated from bovine liver and mammary tissue as previously described (8). For the inhibition studies, DEAE-cellulose chromatography (8) was used to remove UMP remaining from the final purification step.
The epimerases used in all of these experiments were in a buffer consisting of 100 mM potassium phosphate, pH 7.6, containing 1 mM mercaptoethanol and 20% glycerol. UDP-galactose-4-epimerases used in the sulfhydryl inactivation studies were eluted from DEAE-cellulose (8) with buffer containing no mercaptoethanol and used immediately. Epimerase activity was determined by both Assays I and II (8). The inhibition by uridine nucleotides was examined with Assay I. The coupling enzyme, UDP-glucose dehydrogenase (EC 1.1.1.22), was not appreciably affected by the concentrations of nucleotides employed, especially since a large excess of dehydrogenase was employed. Assay II was used to examine the inhibition by NADH. To investigate the effect of the NAD+/NADH ratio on activity, the NAD+ concentration was held constant (10 μM) and the NADH concentration was varied. With the higher NADH concentrations, significant inhibition of the UDP-glucose dehydrogenase was apparent in Assay II, necessitating oxidation of the NADH to NAD+ before addition of this coupling enzyme. This was done by adding sodium pyruvate to a concentration of 5 mM and 10 μg/ml of lactic dehydrogenase after stopping the epimerase reaction with heat. After a 30-min incubation at 24°, the lactic dehydrogenase was inactivated by heating for 3 min at 100°. NAD+ was then added to a final concentration of 1 mM, and the UDP-glucose dehydrogenase was added to determine the amount of UDP-glucose produced by measuring total NADH production. Sulfhydryl modification of the epimerases by either N-ethylmaleimide or showdomycin was stopped by the addition of excess cysteine (20:1 molar ratio of cysteine to inactivating agent) (2, 9) before determination of residual epimerase activity by Assay I. Enzyme treated with p-hydroxymercuribenzoate was assayed essentially by Assay II, with one modification: at least a 200-fold molar excess of mercaptoethanol was added to each assay after the epimerase reaction was stopped by heating but before addition of the UDP-glucose dehydrogenase. Trypsin inactivation studies were carried out at 24° in the potassium phosphate, mercaptoethanol, glycerol stabilizing buffer (8). Aliquots were removed at appropriate times and UDP-galactose-4-epimerase activity was determined by Assay I. Stock solutions of 2-hydroxy-5-nitrobenzyl bromide were prepared in absolute methanol immediately before use. Nine volumes of phosphate-buffered epimerase solution, pH 7.6, were treated with 1 volume of 2-hydroxy-5-nitrobenzyl bromide solution, shaken vigorously for 1 min, and allowed to incubate at 4° for 5 additional min to effectively hydrolyze the unreacted reagent. The activity of the modified enzyme as determined by Assay I was compared with that of a control which had been treated in the same manner with only absolute methanol. p-Hydroxymercuribenzoate, NAD+, NADH, and uridine nucleotide concentrations were determined using the appropriate extinction coefficients. UDP-galactose concentrations were determined enzymatically with excess UDP-galactose-4-epimerase and UDP-glucose dehydrogenase. Only the dehydrogenase was used to determine UDP-glucose concentrations. Data for determination of Km and Ki values were fitted by linear regression (12) using a Texas Instruments SR-52 programmable calculator. Circular dichroism spectra were measured at 24° in a Jasco J-20 Circular Dichroism Spectrometer with 2-mm path length cuvettes. All spectra were run in duplicate.
The buffer for all determinations was 100 mM potassium phosphate containing 20% glycerol and 1 mM mercaptoethanol, pH 7.6. A mean residue molecular weight of 118 was assumed, and protein concentrations were estimated from enzymatic activity using a specific activity of 66 units/mg for the liver epimerase (8). All spectra represent the difference between enzyme, buffer, and added substrates and solutions of buffer and added substrates. Sodium dodecyl sulfate polyacrylamide gel electrophoresis was by the procedure of Weber and Osborn (13), while discontinuous polyacrylamide gel electrophoresis was by the procedure of Davis (14).
RESULTS
Data regarding the inhibition of the epimerase by the uridine nucleotides, NADH, and 2-hydroxy-5-nitrobenzyl bromide, as well as the circular dichroism spectrum in the absence and presence of UDP-galactose, are presented in a supplement following this paper. (Some of the data are presented as a miniprint supplement immediately following this paper; Figs. 1, 2, 7, and 8 are found on p. 2094.) In view of the report by Ray and Bhaduri (4), galactose-1-P, galactose-6-P, glucose-1-P and glucose-6-P were tested as possible effectors with both enzymes. Five concentrations of the carbohydrate phosphates had no effect on epimerase activity in either Assay I or Assay II. There was also no effect when UDP-glucose was used as the substrate. These carbohydrate phosphates did, however, appear to be substrates for activities present in the partially purified UDP-glucose dehydrogenase preparation, particularly glucose-1-P and glucose-6-P. NADH is an excellent inhibitor of both the bovine liver and mammary epimerases. The extent of inhibition is dependent upon the NAD+/NADH ratio, as shown in Fig. 3. A NAD+/NADH ratio of 20:1 results in approximately 50% inhibition, which is independent of the initial NAD+ concentration, since similar results to those presented in Fig. 3 were obtained when the initial NAD+ concentration was 500 μM.
Legend to Fig. 4: A, effect of preincubation of liver epimerases with NADH, and with NADH and UDP-galactose. The enzyme was preincubated for 5 min at 24° in 100 mM glycylglycine, pH 8.5, with no additions, with 1 μM NADH, or with 1 μM NADH and 500 μM UDP-galactose (symbols as in the original figure). At zero time all solutions were made the same in concentrations of UDP-galactose and NADH, and 500 μM NAD+ was added. All of these were added as a single addition. Aliquots were removed at the indicated times and UDP-glucose was determined by Assay II. B, displacement of NAD+ from the liver epimerase by NADH. Enzyme was incubated with 100 mM glycylglycine, pH 8.5, 500 μM UDP-galactose and 1 μM NAD+. At 4.5 min, 1 μM NADH was added to half. Assay II was used to determine UDP-glucose production. No UDP-glucose was produced when enzyme was incubated with UDP-galactose and NADH. C, displacement of UDP-galactose from liver epimerase by UDP. Enzyme was incubated with 1 mM NAD+, and 100 μM UDP was added.
The lower curve represents activity in the presence of 500 μM UDP-galactose added at zero time. UDP-glucose production was determined by Assay I with a 1-h incubation to negate the inhibitory effects of UDP on UDP-glucose dehydrogenase. The same types of data were obtained for the mammary UDP-galactose-4-epimerase in all three cases. With 3 mM showdomycin, the residual level was reached in 5 min, whereas in its absence it took 20 h. Plots of log (V - V∞)/(V0 - V∞) versus time were nonlinear, and in all cases a very rapid initial loss of activity was observed. No substantial protection against inactivation by 3 mM showdomycin was observed with either 1 mM NAD+ or 0.05 mM UDP-galactose. Very similar results were obtained using N-ethylmaleimide as the inactivating agent, with two exceptions: the very rapid initial decreases in activity were more pronounced than with showdomycin, and the enzyme was totally inactivated by concentrations of N-ethylmaleimide greater than 5 mM. Neither 0.05 mM UDP-galactose nor 1 mM NAD+ offered substantial protection against inactivation by 0.5 mM N-ethylmaleimide. With low concentrations of p-hydroxymercuribenzoate as the inactivating agent, the time course of the initial loss of activity could be followed, whereas this was impractical with both N-ethylmaleimide and showdomycin. With concentrations of p-hydroxymercuribenzoate greater than 1 μM, no protection was afforded by substrates. Since the reaction with p-hydroxymercuribenzoate was somewhat temperature-dependent, protection by substrates was examined at 4° with 0.05 μM of the inactivating reagent. As shown in Fig. 6, the substrates NAD+ and UDP-galactose and the inhibitor UDP caused the semilogarithmic inactivation plots to become linear. Almost total protection was observed when both NAD+ and UDP were incubated with the enzyme. Enzyme totally inactivated by concentrations of p-hydroxymercuribenzoate greater than 1 μM was only partially reactivated (30 to 50%) by incubation with 50 μM mercaptoethanol or 50 μM dithiothreitol. Enzyme only partially inactivated (less than 50%) was totally reactivated by the same procedure. Inactivation studies with N-ethylmaleimide and p-hydroxymercuribenzoate on the highly purified mammary epimerase gave data very similar to those obtained for the liver enzyme.
DISCUSSION
Uridine nucleotides are very effective inhibitors of the electrophoretically homogeneous bovine mammary and liver UDP-galactose-4-epimerases as determined by Assay I. This inhibition is competitive with respect to UDP-galactose, and the Ki values obtained are comparable to those reported by Tsai et al. (2) for the partially purified mammary enzyme. The absence of inhibition by uridine and galactose 1-phosphate and the competitive inhibition by UMP indicate the involvement of both the uridine and phosphate moieties for effective interaction with the enzymes. The lower Ki values observed for UDP as compared with those for UMP and UTP are consistent with the closer similarity of this nucleotide to the substrates UDP-glucose and UDP-galactose. ADP was not an inhibitor of either enzyme. NADH is a much more effective inhibitor than the uridine nucleotides. The Ki values for UDP for the two enzymes are approximately twice the apparent Km values for UDP-galactose (8), while the Ki values for NADH are approximately one-tenth the apparent Km value for NAD+ (8). The effect of the NAD+/NADH ratio on epimerase activity is very pronounced, as shown in Fig.
3, and provides evidence for the importance of this ratio in controlling the metabolic flux of UDP-glucose and UDP-galactose. Robinson et al. (16) have shown that the NAD+/NADH ratio and pH are very important in controlling the expression of epimerase activity in cell cultures and tumor cells, with the effect of NADH being more pronounced at higher hydrogen ion concentrations. As illustrated in Fig. 4A, the effect of increasing the NAD+/NADH ratio would not be immediately apparent because, in the presence of uridine nucleotides, NADH is not rapidly displaced by NAD+. Experiments very similar to those shown in Fig. 4, A and B, were reported earlier with partially purified bovine liver UDP-galactose-4-epimerase by Langer and Glaser (17). These workers reported that NAD+ did not dissociate from the enzyme with each catalytic event, as several minutes were required for complete inhibition by NADH. Data presented in this study indicate that the effect of NADH is immediate. Langer and Glaser (17) also observed no displacement of NADH by NAD+ in an experiment similar to that described in Fig. 4A, but this was due to the experimental procedure, since they used only 5.7 times as much NAD+ as NADH. This ratio of NAD+/NADH is sufficient to cause 70 to 80% inhibition, so displacement would not be readily apparent. Langer and Glaser (17) also observed a 340 nm absorbance increase with time with NADH, epimerase preparation, and UDP-galactose under conditions similar to Assay I. This was never observed with either of the highly purified epimerases. The apparent loss of activity caused by preincubation with only NADH (Fig. 4A) may reflect a slow conversion of the enzyme to a less active form. An alternate explanation is that a small amount of an inactive enzyme-NADH complex is formed which dissociates slowly. The change caused by NADH required 4 min of incubation of the liver epimerase with NADH at 24° for the maximum decrease in apparent activity, with no further loss observed after that length of time. This indicates that the loss of activity was not due to simple inactivation, as that process would have continued. The linear nature of UDP-glucose production catalyzed by NADH-preincubated enzymes (Fig. 4A) shows that in the absence of uridine nucleotides, NADH is rapidly displaced by NAD+ under these assay conditions rather than slowly as reported by Langer and Glaser (17). The inactivation studies with the sulfhydryl-specific reagents were complex but indicate that more than one susceptible group is necessary for full activity in both the mammary and liver enzymes. The semilogarithmic plots of activity loss as a function of time show at least three different rates of inactivation, suggesting the possible involvement of at least three sulfhydryl groups. The amount of activity lost during the initial rapid process was 35 to 45% of the original activity with either N-ethylmaleimide or p-hydroxymercuribenzoate as the inactivating reagent (18). With the two slower processes, the amount of activity lost varied with the inactivating reagent, but with both reagents total inactivation was the final result. The slowest process evidently does not occur upon incubation of the enzyme in the absence of β-mercaptoethanol, nor does it occur when showdomycin is the inactivating reagent. Only with low concentrations (0.5 μM) of p-hydroxymercuribenzoate could any protection by substrates against inactivation be observed.
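A minimal sketch of how multiphasic inactivation time courses like these can be quantified: residual activity is fitted to a sum of three first-order decays, one per putative class of sulfhydryl group. The model form and all data points below are illustrative assumptions, not the authors' analysis, which was based on semilogarithmic plots and linear regression.

```python
import numpy as np
from scipy.optimize import curve_fit

def triphasic(t, a1, k1, a2, k2, a3, k3):
    # Sum of three first-order decays; each amplitude a_i is the
    # activity fraction lost through the phase with rate constant k_i.
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + a3 * np.exp(-k3 * t)

# Hypothetical residual activity (fraction of control) versus time (min)
# for inactivation by a sulfhydryl reagent.
t = np.array([0, 1, 2, 5, 10, 20, 30, 45, 60], dtype=float)
act = np.array([1.00, 0.70, 0.62, 0.55, 0.47, 0.35, 0.27, 0.19, 0.14])

p0 = (0.4, 2.0, 0.3, 0.1, 0.3, 0.01)  # rough guesses for three phases
popt, _ = curve_fit(triphasic, t, act, p0=p0, maxfev=20000)
print("amplitudes:", popt[0::2])
print("rate constants (1/min):", popt[1::2])

# A curved semilogarithmic plot of activity versus time is the
# qualitative signature of more than one rate; the fit resolves them.
```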
The observed protection by substrates against inactivation at low sulfhydryl reagent concentration could be due to different susceptible sulfhydryl groups being shielded as a consequence of substrates binding to the enzyme. The sulfhydryl groups modified by these reagents are very reactive, since low concentrations of reagents are required for total inactivation of the enzyme and since the enzyme is inactivated readily upon incubation in the absence of mercaptoethanol. Only partial inactivation of the enzyme was observed when 2-hydroxy-5-nitrobenzyl bromide was the inactivating agent. Partial protection against this inactivation was observed only in the presence of both NAD+ and UDP. Interestingly, the 2-hydroxy-5-nitrobenzyl bromide-modified enzyme appeared to regain activity upon incubation with substrates. The time required for maximum regain of activity increased with increasing inactivating reagent concentration. This could conceivably be due to the substrates interacting with the enzyme and forcing reassumption of an active conformation from the inactive or less active perturbed form caused by reaction with 2-hydroxy-5-nitrobenzyl bromide. The protection by micromolar concentrations of uridine nucleotides against trypsin inactivation of the epimerases could indicate that the binding domain for these nucleotides contains a particular peptide bond which serves as the initial site of attack for trypsin; cleavage at this site could be necessary before further trypsinolysis could occur. This seems unlikely, as trypsin specificity is directed toward lysyl and arginyl residues (19), and amino acid analyses (8) have indicated 49 potential trypsin cleavage sites for the liver enzyme and 44 for the mammary enzyme. Also, the sodium dodecyl sulfate-polyacrylamide gel patterns of enzyme incubated with trypsin in the absence of uridine nucleotides showed only low molecular weight fragments, while in the presence of these nucleotides no fragmentation was observed. A probable explanation for these results is that, upon binding of uridine nucleotides to the epimerases, the protein assumes a tighter, more trypsin-resistant conformation. Preliminary evidence for such a structural change is indicated by the circular dichroism studies. Both NAD+ and uridine nucleotides bind to the free enzyme, as shown by binding to both of the affinity columns used for purification (8) and by the protection against trypsinolysis afforded by uridine nucleotides and NAD+. The necessity of uridine nucleotides for tight binding of NADH to these enzymes is especially interesting, since the Kd for dissociation of NAD+ from the free enzyme, as determined by trypsinolysis, is severalfold greater than the apparent Km for NAD+ (8) for both enzymes. This could indicate that uridine nucleotides cause a change in these enzymes which enhances their affinity for NAD+. Only small differences were observed when the Km and Ki values of substrates and inhibitors were compared for the liver and mammary UDP-galactose-4-epimerases, which provides additional evidence that the properties of these enzymes are similar. In general, all of the Kd values obtained for UMP, UDP, and UTP as determined by trypsin inactivation were lower than the corresponding Ki values determined by inhibition kinetics. It would be anticipated that these values would be more closely related, since both reflect the dissociation of the ligand from the enzyme. Protection by inhibitors against trypsin inactivation suggests a conformational change in the enzyme.
It may be that the reversal of this conformational change after dissociation of the inhibitor is slow; if the native form of the enzyme is the only form susceptible to trypsin, as indicated by the linearity of the plots (Fig. 5), spuriously low dissociation constants could then be obtained. Another possibility is that the inhibitor binds to more than one site on the enzyme: a tight binding site affords protection against trypsin, whereas the Ki determined by inhibition kinetics reflects a less tight binding site.
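The use of protection against trypsinolysis to estimate dissociation constants can be illustrated with a simple model: if only the ligand-free enzyme is attacked by trypsin, the observed pseudo-first-order inactivation rate is kobs = k0/(1 + [L]/Kd). The sketch below fits this relation to hypothetical inactivation rates; the UDP concentrations and rate values are invented for illustration and are not the data of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def k_obs(L, k0, Kd):
    """Pseudo-first-order trypsin inactivation rate when only the
    ligand-free enzyme is cleaved: k_obs = k0 / (1 + [L]/Kd)."""
    return k0 / (1.0 + L / Kd)

# Hypothetical inactivation rates at several UDP concentrations.
L = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 25.0])          # uM UDP
k = np.array([0.30, 0.20, 0.15, 0.086, 0.050, 0.022])   # 1/min

popt, _ = curve_fit(k_obs, L, k, p0=(0.3, 2.0))
print(f"k0 = {popt[0]:.3f} 1/min, Kd = {popt[1]:.2f} uM")
```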
2018-04-03T05:29:18.375Z
1977-03-25T00:00:00.000
{ "year": 1977, "sha1": "90591c0d1ccbd41e37c2c9d0ea3eba106e26489a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/s0021-9258(18)71869-3", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "6b366f2155c24dccc236cf7006a3079e8e449db9", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
233722430
pes2o/s2orc
v3-fos-license
Has GWAS lost its status as a paragon of open science? Genomic research led the way in open science, a tradition continued by genome-wide association studies (GWAS)—through the sharing of materials, results, and data. Coordinated quality control procedures also contributed to robust findings. However, recent years have seen declines in GWAS transparency. Here, we assess some shifts away from open science practices with the aim of stimulating a discussion of these issues. The Human Genome Project (HGP) led the way in open science, in particular data sharing. In 1996, HGP scientists established the "Bermuda Principles," which specified that DNA sequence data should be released in publicly accessible databases within 24 hours of generation. The following year, data quality standards were developed: the "Bermuda Sequence-Quality Standards." These Bermuda agreements (see https://web.ornl.gov/sci/techresources/Human_Genome/research/bermuda.shtml) were key to the multinational collaborative work behind the HGP's remarkable successes, producing a global knowledge resource that has stimulated major scientific advances. Following completion of the HGP, these open science principles were applied to other genomics projects. Scientists and funders recognized the value of data sharing, coordination, and transparency in advancing knowledge, scientific credibility, and improvements in human health. Many data sharing policies reflect the ethos of these principles, and many areas of human genomics continue to lead the way in open science. Genome-wide association studies (GWAS) continued these open science trends. The GWAS era arose following the widespread recognition of low statistical power and questionable methodological practices in candidate gene association studies, which were plagued by low reproducibility. The need to collaborate at scale to achieve the large sample sizes required to detect small effect sizes, while correcting for multiple testing, necessitated coordinated data analysis plans and, in turn, harmonized datasets and code. These were shared within consortia, making it a small step to sharing materials, results, and data publicly (albeit typically summary results, rather than individual-level data). Another benefit of this collaborative approach (particularly when handling complex datasets) was a focus on coordinated quality control procedures. For these reasons, human genomics in general, and GWAS in particular, are often held up as an exemplar of reproducible science. Does the GWAS field still live up to these standards, or is it slipping back? GWAS is now a mainstream technique, and increasingly only one part of a study, rather than the study itself. Studies that include a GWAS now often include functional work, analysis of causal pathways, polygenic risk score analyses, and so on. But this greater breadth risks coming at a cost; often the details of the GWAS itself are relegated to a supplement, which reviewers may scrutinize less carefully [1], while the need to recruit reviewers to evaluate these other elements comes at the expense of having multiple experts inspect the GWAS itself, if this is no longer the sole focus. There is evidence that this has been accompanied by inconsistency in standards. We have seen imputation quality score thresholds ranging from r2 > .9 down to an imputation accuracy score of < .1 [2].
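As a concrete illustration of the kind of explicit, reportable quality-control decision at issue here, the sketch below applies a single documented imputation-quality (INFO) filter to a toy summary-statistics table. The column names, values, and threshold are assumptions for illustration; there is no single standard format for such files.

```python
import pandas as pd

# Toy stand-in for a GWAS summary-statistics table; the column names
# (SNP, INFO, BETA, P) are illustrative assumptions, not a fixed standard.
sumstats = pd.DataFrame({
    "SNP":  ["rs1", "rs2", "rs3", "rs4"],
    "INFO": [0.98, 0.85, 0.40, 0.95],
    "BETA": [0.02, -0.01, 0.05, 0.03],
    "P":    [1e-9, 3e-4, 2e-8, 5e-11],
})

INFO_THRESHOLD = 0.9  # an analysis choice that should be reported, not hidden
kept = sumstats[sumstats["INFO"] >= INFO_THRESHOLD]
print(f"kept {len(kept)} of {len(sumstats)} variants at INFO >= {INFO_THRESHOLD}")
# Releasing INFO alongside the summary statistics would let readers rerun
# this filter at other thresholds and judge the sensitivity of the results.
```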
GWAS may employ different thresholds across cohorts and analyses within the same study. While what is acceptable will depend on the specific nature of the study, these different thresholds may have a substantial impact on results. However, because imputed SNPs that pass the threshold are not treated any differently from measured SNPs, and imputation quality scores are not included in GWAS results, we have no way of knowing whether this is a problem. Different software packages and bioinformatic pipelines are employed, with assumptions that may not be articulated. Even commonly adopted minimum thresholds for what constitutes "sufficient LD" for the purposes of identifying SNP "independence" (e.g., r2 < .1) vary across studies, as well as across different analyses within studies. Employing different thresholds may be warranted, but methodological decisions should be clearly documented and justified. In our view, simply relying on honesty, and assuming no mistakes, is not the best way forward in modern science, where the incentives to produce noteworthy findings can be substantial. Transparency can serve a quality control function [3]. The extraordinary complexity and density of many current studies including a GWAS mean methodological details can be relegated to extensive supplements. If these are not scrutinized fully, this may impact the robustness and reproducibility of GWAS results, with downstream effects such as overinterpretation of noise (e.g., in post-GWAS analyses such as gene prediction and tissue-specific expression). Further, many bioinformatic pipelines use existing associations and functional annotations to link to new findings. Alongside this increase in complexity, there has also been a shift away from open science practices. Efforts to achieve ever-greater sample sizes, coupled with the finite number of high-quality large cohorts with genetic information available, have encouraged researchers to increasingly partner with private companies that can offer large amounts of data. These companies have a direct interest in using GWAS results for profit and thus have a motivation to contribute data. But the results are commercially sensitive. One consequence is that these private-public research partnerships proceed largely on the terms of the private companies. These terms commonly include no access to individual data (analyzed with "in house" pipelines), no sharing of data, and sharing of only partial results. Many recently published GWAS using 23andMe data include partial results, no code, and no data [e.g., 4,5]. This is despite the fact that most of these studies are meta-analyses, and the data consist of summary statistics, rather than the primary, individual-level data, and therefore do not include sensitive, individually identifiable information. Furthermore, such closed data practices often contravene explicit journal and funding agency data sharing policies. The result is that researchers' ability to replicate and build on these studies is limited. Commercial datasets are also often highly unrepresentative. The problem of lack of representativeness is not unique to commercial datasets: for example, UK Biobank achieved only an approximately 5% recruitment rate, with evidence of "healthy volunteer" selection bias into that study [6]. But selection into commercial datasets can be particularly pronounced. The widely used 23andMe data are composed primarily of individuals from the USA who can afford to investigate their DNA.
Participants therefore tend to be of European ancestry, more highly educated, more affluent, and in better health. Furthermore, these data can make up a considerable proportion of the total sample in a GWAS (in some cases over 50% of the total sample [5]). These highly selected samples may bias results [7]. This concern is especially acute for socially patterned phenotypes such as educational attainment, income, health behaviours, and mental health (which are often minimally phenotyped via brief participant self-report). The quest for ever-larger sample sizes seems to have come at the expense of the transparency and data sharing that characterized the field in the past. Collaborations between academia and industry can be powerful, and consortium efforts have been critical to the success of GWAS efforts, but we should always ask: at what cost? The question of whether this trade-off is a net positive deserves attention. We would encourage an open discussion of the costs and benefits of these trade-offs by the research community. Despite having led the way in open science and reproducibility, GWAS has become more opaque. Perhaps the method is being taken for granted, given its track record of generating reproducible findings; but reproducible science requires enforcing existing standards as well as continued review and refinement [8]. The lesson is that no methodology stands still, and as particularly complex methodologies evolve, whether it be GWAS, fMRI, or others, we should continue to examine how these methodologies are applied, and how robust the findings they generate are. If GWAS wants to remain a paragon of open science, it cannot be open only when convenient. Otherwise, hard-won gains in openness and reproducibility can be gradually eroded, often at a significant cost to scientific credibility.
2021-05-05T06:17:05.649Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "6bd1c3c355b4e958cec029f5cef1dab3c971c1d1", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.3001242&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "295f01370e027dbb1b48fb64764136bb60c74ac4", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
266916895
pes2o/s2orc
v3-fos-license
The Zenú oral tradition as a didactic strategy to strengthen family communication Oral tradition is a significant element within culture, especially for ethnic groups such as the Zenú, who fight for its preservation, because for Zenú families it is an important factor in keeping their worldview alive. From this perspective, teachers need to have didactic options to promote cultural identity, especially in the indigenous cabildos, where family communication is weak and does not tend to promote the oral traditions that are so necessary for developing in a pluralistic and democratic society and for building more democratic societies. From this perspective, the purpose of this research work is the implementation of oral tradition as a didactic strategy to strengthen family communication. The population addressed comprises the students, parents and teachers of the eighth grade of the San Francisco de Asís Educational Institution of Chinú, Córdoba. The sample is made up of 4 parents, 4 students and 4 teachers. This degree work is developed from the qualitative approach and responds to the theoretical conception of participatory action research. Surveys and semi-structured interviews were used as data collection techniques, which allowed understanding the importance of promoting and revitalizing the oral tradition as a cultural and pedagogical element to strengthen family and community ties. Introduction The research problem posed by this project, the strengthening of family relationships through oral tradition as a methodological strategy in 32 eighth grade students of the San Francisco de Asís Educational Institution of Chinú, Córdoba, who are descendants of the Zenú indigenous community, is framed within the need to create educational environments that foster the integral development of students and promote inclusion and respect for cultural diversity. In this sense, the research proposal is relevant at a personal, organizational and community level, since it seeks to strengthen family relationships in an indigenous community through myths, legends, stories, tales of fright, customs and traditions, among others, which will have a positive impact on the quality of life of students and their families, as well as on the cultural and social development of the community in general. From the organizational perspective, this proposal is aligned with Peter Senge's theory of the learning organization, which states that learning organizations are those that foster continuous learning and innovation and that have the ability to adapt to changes in the environment (Senge, P. (2012). The fifth discipline: the art and practice of the learning organization. Granica).
In this sense, the implementation of the oral tradition as a methodological strategy in the educational environment could contribute to creating a culture of continuous learning in the educational community, where different forms of knowledge are valued and respected and where inclusion and intercultural dialogue are promoted. On the other hand, it is important to highlight that the research proposal is established within the legal framework of Colombian education, which establishes the need to promote intercultural education and respect for cultural diversity in the country's educational establishments. The Colombian Ministry of National Education (MEN) has established policies and programs aimed at guaranteeing access to and quality of education for indigenous peoples, Afro-descendants and marginalized communities, in order to promote the integral development of students and the strengthening of their cultural identities. In this sense, the implementation of the oral tradition as a methodological strategy in the Zenú educational community is not only relevant from a pedagogical point of view, but is also in line with the policies and guidelines of the MEN in terms of intercultural education and cultural diversity. In conclusion, the research proposal to strengthen family relationships in the Zenú indigenous community through oral tradition as a methodological strategy is framed within the need to create educational environments that foster the integral development of students and promote inclusion and respect for cultural diversity. In addition, it is aligned with Peter Senge's theory of the learning organization and with the legal framework of Colombian education in terms of intercultural education and cultural diversity.
Within the framework of our pedagogical practice as teachers at the San Francisco de Asís Educational Institution, a recurrent problematic situation has been identified related to the weakening of family relationships in the Zenú indigenous community, specifically among eighth grade students. This problem has a significant impact on the emotional well-being and academic performance of the students, as well as on the strengthening of their cultural identity and sense of belonging. The problematic situation is characterized by several aspects. Personal: Zenú indigenous families may face socioeconomic and cultural challenges and limited access to basic services, which may affect family dynamics and parent-child relationships. In addition, they may face difficulties in reconciling traditional cultural practices with the demands of modern society. Organizational: the San Francisco de Asís educational institution should consider the cultural diversity of students and their families, promoting an intercultural education that respects and values indigenous cultural traditions and practices. It is important to foster effective communication and active collaboration between the institution and families to strengthen family relationships. Sociocultural: the Zenú indigenous community has a rich oral tradition that has been passed down from generation to generation. However, due to various factors, this tradition has been weakening in recent years, which has had a negative impact on family relationships and on the preservation of the community's cultural identity. Given this problem, the need arises to investigate and understand how the implementation of oral tradition as a methodological strategy can strengthen family relationships and promote meaningful learning in eighth grade students. We seek to explore the impact of this strategy on the revitalization of indigenous culture, the strengthening of cultural identity and the promotion of strong bonds between students and their families. This research is justified by the need to strengthen the education of eighth grade students of the San Francisco de Asís Educational Institution, belonging to the Zenú ethnic group and residing in the outskirts of the municipality of Chinú, through the implementation of oral tradition as a methodological strategy and the strengthening of family relationships. Oral tradition is a valuable tool for learning and transmitting knowledge in the Zenú indigenous culture, and its implementation in the classroom can promote cultural identity and appreciation of ethnic diversity. Likewise, the strengthening of family communication can improve the academic performance and emotional well-being of students. In this line, Ramírez Vargas (2009) values oral tradition as "a source of learning, as it contains information on knowledge and customs in different areas such as history, myths and sacred texts, political institutions, rites and music, with great historical value; it is a source of cultural values, from which national and regional identities emerge".
Likewise, family communication "is understood as the transactional symbolic process of generating, within the family system, meanings for events, things and situations of daily life; it is a process of mutual and evolutionary influence that includes verbal and non-verbal messages, perceptions, feelings and cognitions of the members of the family group. Interaction occurs in a cultural, environmental and historical context and results in the creation and sharing of meanings" (Gallego, 2006, p. 94). On the other hand, Tesson and Youniss (1995) and Noack and Krake (1998) perceive family communication as a decisive scenario for renegotiating roles and transforming relationships so that the family environment is not hostile but surrounded by mutuality and reciprocity. Likewise, Herrera (2007) recognizes the transcendental role that communication plays in the functioning and maintenance of the family system when it is developed with clear hierarchies, clear limits, clear roles and open and proactive dialogues that make it possible to adapt to changes. For Tobón (2010), teaching strategies are "a set of actions that are planned and implemented in an orderly manner to achieve a certain purpose"; in the pedagogical field, therefore, a teaching strategy is specifically a "plan of action that the teacher implements to achieve learning" (Tobón, 2010, p. 246). Didactic strategies, as an element of reflection on the teaching activity itself, offer great possibilities and expectations for improving educational practice. In order to communicate knowledge, teachers use strategies aimed at promoting its acquisition, elaboration and understanding. In other words, didactic strategies refer to tasks and activities that the teacher implements systematically to achieve certain learning in students. The internal organization of the Zenú people is characterized by the maintenance of ancestral cultural patterns. This system is based on organization by settlements within the resguardo, and each of these units has a Cabildo Menor. In turn, the union of the different cabildos forms the so-called Cabildo Mayor Zenú. This Cabildo Mayor guides the collective interests of the people, exercises control and brings the communities together. Under this logic, an internal organization similar to that of the past is evident, in that the people maintain their territory divided under a figure of chiefdoms, which today are called cabildos menores (smaller councils). The internal organization of the Zenú people is the basis for advocacy and visibility to the outside world. In this sense, the traditional system of the people constitutes its mechanisms of participation, and the community of the Zenú indigenous people therefore maintains a strong link with its immediate neighbors, mainly the peasants and the coastal society (Indigenous Organization of Antioquia (2009), "Construyendo conocimientos ejercemos autonomía en nuestros territorios". Medellín: Impresiones Gráficas). In this sense, the present research is theoretically justified within the framework of the socio-constructivist and meaningful approach to learning, which considers that knowledge is socially constructed through the interaction between individuals and their environment.
Materials and methods This research is qualitative; following Hernández-Sampieri (2018), qualitative research usually "produces questions before, during or after the collection and analysis of data. The inquiry action moves dynamically between the facts and their interpretation." This approach makes it possible to understand and interpret the impact of the oral tradition and the strengthening of family relationships on the students' educational and personal development, and it therefore guides the collection and analysis of information. For the design of this research, the postulates of Participatory Action Research (PAR) are followed, taking into account that PAR is recognized as a way to explore and study a social situation in order to improve it, in this case in the educational context. It is framed as field research, in which data collection is done directly with the participants in their own environment (Hernández-Sampieri and Mendoza-Torres, 2018). In the development of the PAR, several actions are implemented whose purpose is to generate changes that allow mitigating or overcoming the problems evidenced. The procedure for this design is carried out in phases as follows: first phase, delimitation of the problem and the population to be studied; second phase, research design and construction of the theoretical and methodological framework; third phase, data collection, analysis, design of the proposal and preparation of the final report. The population under study in this research project comprises four eighth-grade (8th) students between 12 and 14 years of age of the San Francisco de Asís Educational Institution, belonging to the Zenú ethnic group, four parents who live in the outskirts of the municipality of Chinú, and four eighth-grade teachers. The educational institution provides educational services at the preschool, elementary school, high school and technical high school levels, creating a flexible intercultural curriculum that establishes the basis for strengthening inclusion, unity in family ties, and the appreciation of their ancestral customs. In this research project, it is necessary to have a variety of data collection instruments to obtain relevant and significant data to address the problem and respond to the research objectives. These instruments are used in a complementary manner, providing different perspectives and approaches that enrich the analysis of the results obtained. To guarantee the quality and validity of the information collected, instruments were selected that fit the qualitative nature of the research, seeking to capture the richness and depth of the experiences and perceptions of the students and their families in relation to the oral tradition and family relationships. The data collection instruments used in this study are presented here, along with a brief description of each of them. It should be noted that these instruments were adapted and contextualized specifically for the purpose of this research, considering the particularities of the Zenú indigenous community and the eighth grade student population. A structured survey was designed and applied to eighth grade students, which allows obtaining data on their perceptions and experiences in relation to oral tradition and family relationships. This survey includes closed questions that are statistically analyzed.
Semi-structured interviews were conducted with teachers, families and members of the Zenú community in order to obtain more detailed qualitative information on the importance of oral tradition, family roles and cultural expectations in the educational process. These interviews were recorded and transcribed for later analysis. Results The results presented below are organized according to the specific objectives, based on the application of each of the established activities. It was evident that spaces for communication about students' difficulties or academic progress are scarce, because this type of dialogue occurs only sporadically within the family; that is, there are failures in the family communicative climate. The findings also revealed little transmission of the oral traditions and the Zenú cosmovision, which indicates a weak sense of belonging to the Zenú identity and an uprooting of the family from its ancestral roots, since it is through the family that these values and cultural traditions are cultivated and the socioculture is extended. Also important within the characterization of family communication is the type of relationship that exists between the family and the school, since the information collected clearly shows relationships characterized by a lack of communication and distancing between parents and the educational institution, with a low level of participation in curricular and extracurricular activities. This situation is of concern if we take into account that the family is fundamental to the educational process; the relationship with the school should therefore be characterized by joint action, that is, mutual collaboration and constant dialogue to facilitate the learning of students, and in this case to improve the participation of students of the Zenú ethnic group. A positive aspect found is the willingness of the students to participate in the production of texts of the Zenú ethnicity, with 100% of them expressing this willingness, something very significant, since it shows the interest of the students in knowing their ancestral traditions together with their families. An important finding was the compilation of a compendium of the Zenú oral tradition, thanks to the narratives of the parents, who consider it important that their children know the worldview of their Zenú indigenous culture, a fact that contrasts with the lack of communication on these issues in the family: 75% say they know the Zenú oral tradition well and 25% say they know it in part. This indicates, on the one hand, that there has been a lack of meeting spaces and family dialogue, and on the other hand, that the parents are a great source of knowledge about their indigenous culture, which has a positive impact on their children and on this project, since their information served as an invaluable resource to design the didactic strategy "Nitana" on the Zenú oral tradition. The didactic strategy "Nitana" takes its name from the indigenous Zenú word for the place where the Zenú ancestors met to narrate and share their daily life, hence the relevance of the name for this strategy, which aims to improve family communication through oral tradition, giving the family the experience of indigenous traditions and values that generate identity and family dialogue.
Each workshop consists of a glossary of Zenú vocabulary, a reading of Zenú myths and cosmogonic stories, a feedback session and the written composition of a text or drawing to develop creative composition. All are aimed at encouraging communication and family dialogue through an affectionate and respectful approach, based on texts from the oral tradition. The methodology used is that of educational workshops that allow the active participation of parents and children, a pedagogical approach that differs from traditional meetings, where parents attended sporadically as passive recipients and had no opportunity to develop attitudes of reflection, dialogue, acceptance and recognition of their ancestry. The Nitana booklet was created with activities that generate cooperative work experiences between parents and children, in which narratives of the Zenú cosmovision are developed, further strengthening the conservation of the oral tradition of the Zenú people who are part of the Urban Indigenous Cabildo of Chinú. The four mothers who participated in this project became tutors for their children, working alongside the researching and evaluating teachers. They began the activities by telling the stories that appear in the booklet, in whose production they themselves had collaborated, which generated conversations with their children, who could not hide their admiration at seeing their parents telling ancestral stories that many of them did not know, but above all, seeing them in the role of storytellers. The application of the workshops of the "Nitana" booklet generated an empathetic dynamic between parents and children, since this significant experience of working in teams, developing narrative activities and reading comprehension and complementing them with the creative composition of artistic works such as couplets, poems, micro-stories, collages, drawings and origami, created meaningful learning for students, in addition to generating a climate of good communication and participation between parents and children. Regarding the fourth objective, evaluating the effect of the implementation of the didactic strategy "Nitana" based on the Zenú oral tradition: teachers endorsed as positive the increase in communication and collaboration between students and their families as a result of the incorporation of the Zenú indigenous oral tradition in school activities. This is satisfactory, since one of the fundamental purposes of the educational institution is to ensure that all members of the educational community are involved in the educational process; the family is one of the fundamental pillars for achieving this, hence strengthening the family communication of students with their parents is an imperative for the school.
They also consider that the didactic strategy Nitana, based on the Zenú oral tradition, has contributed to the improvement of the students' family communication, a very significant fact, since this strategy seeks the active participation of parents and their children in the educational process, in addition to strengthening family ties and the oral tradition of the Zenú indigenous people who are part of the Urban Cabildo of the municipality of Chinú. It can be corroborated that the oral tradition is a significant element within the educational process of the children belonging to the Zenú ethnic group and their families, since the ancestral culture deserves to be recognized, because it constitutes their cultural identity and allows them a self-recognition of the diversity and plurality of which they are a part. The oral tradition has been understood as the remembrances of the past that are transmitted or narrated orally and that are manifested in a natural way in the experiences of a culture, in which all its members recognize themselves, because they are an expression of customs, identity and generational continuity. Hence, this work on the formulation of the didactic strategy "Nitana", based on the Zenú oral tradition, which allows strengthening the family communication of the eighth grade students of the San Francisco de Asís Educational Institution of Chinú, Córdoba, is pertinent and necessary: "[...] an oral history project can not only bring them new social contacts, and sometimes even lasting friendships, but can provide them with an invaluable service: ignored and too often in need, they can be given back a certain dignity, a feeling of usefulness, by reconsidering their lives and transferring valuable information to younger generations" (Thompson, 1998, p. 12). In turn, this work promotes the Zenú oral tradition, from its cosmovision, as a way to cultivate that ancestral knowledge, taking into account that this ethnic group lost its native language, although relevant aspects such as oral narrative are maintained, and it is necessary to keep them alive. The different ways of preserving cultural memory undoubtedly lead to a differentiation in social organization; in fact, in the most oral cultures, knowledge is linked to communication and to the different ways of cultivating cultural memory. In this respect, Ong (1982) points out that in an oral culture, once acquired, knowledge must be constantly repeated, and that formal and fixed patterns of thought are indispensable for wisdom. For oral peoples, language is, in general, a mode of action and not only a password for thought, which is why they confer great power on the word. The strength of the oral word is related to the sacred and the existential, so it is good to remember that an oral culture needs spaces to give continuity to its way of expressing thought (in the case of the Zenú indigenous worldview, the source of this work), through which it transmits its ancestral knowledge from one generation to another; only thus is the continuation of the culture of indigenous ethnic groups such as the Zenú cabildos ensured. It is pertinent to recall Havelock's contribution when he states that: "The natural human being is not a writer, nor a reader, but a speaker and listener [...]
From the perspective of the evolutionary process, writing, at any stage of its development, is an adventitious phenomenon, an artificial exercise, a work of culture and not of nature, imposed on natural man" (Havelock, 1996, p. 37). It is also worth noting that during the collection of information it became evident that parents hold a great deal of knowledge regarding the oral traditions of the community, which resulted in a booklet of workshops that collected all of that Zenú worldview, which needs to be disseminated and preserved. Therefore, the inclusion of didactic strategies that promote oral tradition in the classroom is of great relevance and benefit, not only for the students, but also for the family and the community in general, since it strengthens the ties between children, their culture and collective memory. As evidenced once the proposal was implemented, a strengthening of family communication was observed: because parents were actors in their children's learning as storytellers of the Zenú oral tradition, and because the family workshops drew on their local context, students were able to give meaning to the ancestral narratives of their Zenú people, and children shared with their parents the stories collected in the design of the Nitana strategy. Within the family group the fundamental basis of communication for each individual is built; families provide each member with adequate ways of facing the world and belonging within it, and the family can build, and evidently improve, the relationships and communication models of each of its members so that throughout life they can promote and develop the process through which they acquire knowledge about themselves, strategies to solve problems, and better attitudes, all learned within the home (Tustón, 2016).
Conclusions This project fulfilled its objectives and its contributions to the research lines of the group. The first specific objective was to characterize the family communication of the students of the San Francisco de Asís Educational Institution, in order to obtain a significant sample that would serve as a starting point for the execution of the project. According to the information collected, there is evidence of a failure in the family communicative climate, a fact that has repercussions on the educational process of the students, since the family is undoubtedly a fundamental pillar in it. Likewise, regarding family dialogue about oral traditions and the Zenú cosmovision, the students know little about their own ethnicity, which indicates a weak sense of belonging to the Zenú identity on the part of the parents; this originates an uprooting of the family from its ancestral roots, since it is through the family that these values and cultural traditions are cultivated and the socioculture is extended. Vallecida and Orobio (2019) agree that there are multiple factors that intervene in the loss of knowledge and oral tradition; among these they point to accelerated globalization and the situation of minority communities, who must assume drastic changes in their lifestyles, as well as the lack of interest on the part of the school system in rescuing and strengthening this knowledge from the classroom. According to the information collected, the relationships that exist between the family and the school are characterized by a lack of communication and distancing between parents and the educational institution, with a low level of participation in curricular and extracurricular activities. This situation is of concern if we take into account that the family is fundamental to the educational process; the relationship with the school should therefore be characterized by joint action, that is, mutual collaboration and constant dialogue to facilitate the learning of students, and in this case to improve the participation of students of the Zenú ethnicity. The second specific objective was to design the didactic strategy proposal "Nitana", based on the elements of the Zenú indigenous oral tradition, to contribute to the improvement of family communication among eighth grade students of the San Francisco de Asís Educational Institution in Chinú, Córdoba, for which two steps were developed based on the active participation of parents, as follows. Step 1, "A narrar se dijo": in this activity there was a meeting of parent narrators of the Zenú indigenous tradition; each attending mother shared her ancestral knowledge and the stories told by her grandparents concerning the Zenú cosmovision. The result of this activity was the compilation of 10 cosmogonic stories of the Zenú ancestors that form part of the booklet on the Zenú worldview. Step 2, "A escribir se dijo": in this activity the teachers responsible for this project captured the narratives of the Zenú oral tradition told by the attending mothers and organized the sequence of ten workshops, as a commitment to an open school that opens its doors to ancestral knowledge with family communication as its axis. The methodology used is that of educational workshops that allow the active participation of parents and children, a pedagogical approach that differs from traditional meetings.
The third objective proposed the implementation of the didactic strategy proposal "Nitana", based on the elements of the Zenú indigenous oral tradition, to contribute to the improvement of family communication; the eighth grade students completed the workshops of the booklet entitled "Nitana", a strategy with different activities designed to be worked on between parents and children. The Nitana booklet was created with activities that generate cooperative work experiences between parents and children and develops narratives of the Zenú worldview, further strengthening the conservation of the oral tradition of the Zenú people. The fourth objective evaluated the effect of the implementation of the didactic strategy "Nitana", based on the Zenú oral tradition, on the eighth grade students, contributing to the improvement of the students' family communication, a very significant fact, since this strategy seeks the active participation of parents and their children in the educational process, in addition to strengthening family ties and the oral tradition of the Zenú indigenous people who are part of the Cabildo Urbano of the municipality of Chinú. For this research project, the product is evidenced in the improvement of the educational environment, the active participation of students and parents through cooperative work practices in the family and in the classroom, and the promotion of a meaningful learning environment that recognizes the oral tradition as a strategy to strengthen the family communication of students who are part of the Cabildo Indígena Urbano, in addition to using ancestral knowledge as valuable material for cultural and social learning, which strengthens interculturality in educational institutions.
2024-01-11T16:09:17.783Z
2024-01-08T00:00:00.000
{ "year": 2024, "sha1": "a0b762b482be5cc6939a3dfa0cc72ba44842e922", "oa_license": "CCBYNCSA", "oa_url": "https://www.revistaespirales.com/index.php/es/article/download/858/849", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c24f6f97b124d38725c80ed714797887d1adde98", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [] }
243852170
pes2o/s2orc
v3-fos-license
Risk factors associated with human immunodeficiency virus infection in blood donors in Iran: A case–control study BACKGROUND: Despite setting stringent criteria for the selection of safe donors, some HIV-positive volunteers manage to give blood. Considering the window period of screening tests, this could endanger the safety of the blood supply. MATERIALS AND METHODS: A frequency-matched case–control study was conducted on HIV-positive and HIV-negative blood donors in Iran from 2007 to 2008. Overall, 61 HIV-positive and 224 HIV-negative blood donors were selected as cases and controls, respectively. The two groups were matched for confounding factors. An identical questionnaire was used to assess risk factors. Univariate regression analysis, calculating crude odds ratios (OR) and 95% confidence intervals (CI), was used to detect the eligibility of risk factors to enter the final model. Exposures with P < 0.1 were entered in the logistic regression model. Adjusted ORs with P < 0.05 and 95% CIs were reported for statistically significant variables. RESULTS: Significant effects were detected for the following variables: education, job, tattooing, intravenous (IV) drug abuse, imprisonment, and risky sexual behavior. However, based on multiple analyses, education, IV drug abuse, imprisonment, and risky sexual behavior remained significant. CONCLUSION: The majority of our findings parallel those of studies performed in other countries. To increase blood safety, special attention should be paid to illiterate, first-time blood donors who are in the 25–40 age range. In addition, a history of IV drug abuse, imprisonment and risky sexual behaviors puts blood donors at greater risk of HIV infection. Introduction Providing safe and sufficient blood is the most important mission of all blood transfusion services. To this end, the Iranian Blood Transfusion Organization (IBTO) developed stringent criteria for the selection of safe donors, such as encouraging regular blood donation and retaining safe blood donors; providing informative and educational materials about the main risk factors; improving public health programs with a focus on counseling and screening of those engaged in high-risk activities; predonation screening through interviews, a precise questionnaire, and brief physical examinations; and implementing a uniform self-deferral procedure and confidential unit exclusion. [1] Physicians in the donor selection department are trained before starting their work and continuously through related courses. Donated blood is also screened for transfusion-transmissible infections (TTI). Screening of donated blood for hepatitis B surface antigen (HBsAg) became mandatory with the establishment of IBTO in 1974, while screening of blood units for human immunodeficiency virus (HIV) and hepatitis C virus (HCV) started in 1989 and 1996, respectively. [2] In Iran, the overall prevalence rate of HIV infection among blood donors during the last 5 years was 0.003%. [3] However, according to UNAIDS, the number of people living with HIV in Iran was estimated to be 66,000 (37,000-120,000) in 2016; in other words, the prevalence of HIV is about 0.1% among the Iranian adult population (15-49 years old). [4] As expected, the prevalence of HIV is considerably lower among blood donors than in the general population. However, despite the policies set to exclude people with high-risk behaviors, a few HIV-infected volunteers manage to donate. Considering the window period of screening tests, this could endanger the safety of the blood supply.
Several studies have been carried out to identify the most important risk factors among HBsAg- and HCV-positive blood donors in Iran. [5,6] However, there has not been any comprehensive report on the risk factors of HIV-positive blood donors. This study aims to address this lack of information and evaluates the main risk factors of Iranian HIV-seropositive blood donors. Materials and Methods A frequency-matched case-control study was conducted on HIV-positive and HIV-negative blood donors in Iran from 2007 to 2008. The delay in reporting the data was mainly attributed to the problems faced in collecting the data of the control group in terms of consistency and confounding factors. Overall, 61 HIV-positive and 224 HIV-negative blood donors were selected as cases and controls, respectively. To reduce selection bias and confounding, 1:4 case-control frequency matching by age, gender, and times of donation (first-time, regular or repeat donor) was performed. Ages were matched based on 10-year categories, and the controls were selected and recalled according to their inclusion in each category. The same method was used for matching sex. An identical questionnaire was used for both the cases and controls to assess risk factors for HIV. All participants agreed to complete the questionnaire and signed an informed consent. The questionnaire was developed (to assess HIV risk factors among blood donors) by consulting experts, based on the national standard procedures of IBTO. To exclude errors and defects, the questionnaire was first piloted with 35 HIV-positive blood donors and 100 HIV-negative blood donors. The questionnaire contained items on sociodemographic characteristics and risk factors. Sociodemographic characteristics included gender, age, marital status, level of education, and occupation. Risk factors included phlebotomy (Hijamat), tattooing, blood transfusion, intravenous (IV) drug abuse, imprisonment, and risky sexual behavior (sex with an HIV-positive person, sex with more than one partner, extramarital sex, male-male sex, and history of sexually transmitted disease). The questionnaires were completed by the medical doctors at blood centers in the predonation interview in a private room, through a face-to-face process, and were collected in special boxes. To preserve confidentiality, the name and donation number were deleted from the questionnaire. Case and control definitions Based on the policy of IBTO, volunteers who have a positive ELISA test are permanently deferred from donating blood. A confirmatory test is then done, and positive cases are recalled and requested to return to repeat the confirmatory test with different kits. After repeated positive tests, these donors were requested to fill in the questionnaire as cases. The control group was selected, at four times the number of cases, from blood donors who had negative HIV serologic tests in the same database as the cases and who had accepted our invitation to participate in the study. The control group was frequency-matched with the cases in terms of age, gender, and times of donation. Phone recalls were made to the donors based on the results of the HIV confirmatory tests; accordingly, for each HIV-positive case, a person from the control group was recalled to fill in the questionnaire.
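As an illustration of the matching procedure, the sketch below draws up to four HIV-negative controls per case from the same 10-year age band, gender, and donation-status stratum. The data frame and its column names are hypothetical; this is a schematic of frequency matching, not the procedure actually used to recall donors.

```python
import numpy as np
import pandas as pd

def frequency_match(donors: pd.DataFrame, ratio: int = 4,
                    seed: int = 0) -> pd.DataFrame:
    """Draw up to `ratio` HIV-negative controls per case from the same
    10-year age band / gender / donation-status stratum."""
    donors = donors.assign(age_band=(donors["age"] // 10) * 10)
    cases = donors[donors["hiv_positive"]]
    pool = donors[~donors["hiv_positive"]]
    picks = []
    for (band, sex, status), grp in cases.groupby(
            ["age_band", "gender", "donation_status"]):
        stratum = pool[(pool["age_band"] == band)
                       & (pool["gender"] == sex)
                       & (pool["donation_status"] == status)]
        n = min(len(stratum), ratio * len(grp))
        picks.append(stratum.sample(n=n, random_state=seed))
    return pd.concat(picks) if picks else pool.iloc[0:0]

# Toy demo with synthetic donors (columns are illustrative assumptions).
rng = np.random.default_rng(1)
donors = pd.DataFrame({
    "age": rng.integers(18, 60, 500),
    "gender": rng.choice(["M", "F"], 500, p=[0.9, 0.1]),
    "donation_status": rng.choice(["first-time", "repeat"], 500),
    "hiv_positive": rng.random(500) < 0.05,
})
controls = frequency_match(donors)
print(len(donors[donors["hiv_positive"]]), "cases ->", len(controls), "controls")
```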
Laboratory methods The donations were all screened for HIV antigen/antibody with Vironostika HIV antigen/antibody (bioMérieux) fourth-generation or HIV antigen/antibody (Bio-Rad) fourth-generation kits. Every sample found to be positive in the screening test was retested, and if it was repeatedly positive, confirmatory testing was performed with HIV BLOT 2.2 (MP Diagnostics) and INNO-LIA HIV 1/2 Score (Innogenetics). Statistical analysis First, to evaluate univariate associations between HIV positivity and the candidate exposures, separate models were run. The models included the risk factors from the questionnaire as independent variables and HIV status as the dependent variable. The sociodemographic characteristics of the cases and controls were assessed, and the risk factors were compared by univariate analysis to calculate crude odds ratios (OR) with 95% confidence intervals (CI) and to determine eligibility to enter the final model. Exposures with P < 0.1 were entered in the multiple logistic regression model. Consequently, a backward stepwise selection method was used to build multiple models restricted to the risk factors independently associated with HIV. Adjusted ORs with P < 0.05 and 95% CIs were reported for significant variables. Confounding bias was identified as a change in the OR before and after adjustment for the confounding variable. All analyses were performed with computer software (SPSS 22, IBM Inc.). The study was ethically approved by the Ethics Committee of the Iranian High Institute for Research and Education. The confidentiality of data was preserved throughout the study. Results Of all 89 confirmed HIV-positive blood donors in 2007-2008 who were called, 61 cases filled in the questionnaire. Of the 28 excluded cases, 13 had given a wrong telephone number and address (47%), 8 did not return despite former willingness during the first recall (29%), and 7 did not have a phone number or had a remote home address (24%). For the 61 cases, 244 controls were selected from blood donors who had negative HIV serologic tests. Of the 61 cases, 5 were female (8.2%) and 56 (91.8%) were male. Cases were most likely to be between 30 and 40 years old (39.3%). Among the cases, 67.2% were first-time blood donors and 32.8% were lapsed donors. The successful frequency-matched enrolment approach led to comparable age group, sex, and blood donation type distributions in the final sample. The participants' donation status and sociodemographic and relevant characteristics are presented in Table 1. Table 2 displays a comparison of the crude and adjusted ORs of significant exposures with 95% CIs. Based on univariate analysis, significant associations were detected for the following variables: education, job, tattooing, IV drug abuse, imprisonment, and risky sexual behavior. However, based on multiple analyses, education, IV drug abuse, imprisonment, and risky sexual behavior remained significant. It seems that job and tattooing were confounded by other exposures: in the univariate model, HIV positivity among the employed and students was lower than among jobless participants, and tattooing increased the risk of HIV about four-fold compared to controls, but neither was significant in the multiple model.
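For readers who wish to reproduce the univariate step, a crude odds ratio and its Woolf 95% confidence interval can be computed directly from a 2 × 2 exposure table, as in the minimal sketch below; the counts shown are hypothetical, not the study's data. The adjusted ORs would then come from a multiple logistic regression with a backward stepwise procedure, as described above.

```python
import numpy as np

def crude_or(exposed_cases, unexposed_cases,
             exposed_controls, unexposed_controls):
    """Crude odds ratio and Woolf 95% CI from a 2x2 table."""
    a, b, c, d = (exposed_cases, unexposed_cases,
                  exposed_controls, unexposed_controls)
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
    return or_, (lo, hi)

# Hypothetical counts (e.g., exposure = history of imprisonment):
or_, ci = crude_or(20, 41, 8, 216)
print(f"OR = {or_:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```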
Phlebotomy (Hijamat, a traditional Islamic healing technique in which blood is removed from the body as a remedy, by cupping and scarification of a specific area of the skin) and a history of blood transfusion were not significantly different between the two groups in the univariate analysis. In terms of educational level, illiteracy was more frequent among cases (13.1%) than among controls (2.5%). Higher education had a protective role against HIV positivity, and this protective role became more robust with increasing educational level compared with illiterate participants (academic level ORadj: 0.04, 95% CI: 0.008-0.2).

Having multiple risk factors at the same time was frequent among HIV-positive cases. About 29% (18 blood donors) of HIV-positive cases had all the significant risk factors, whereas in the HIV-negative group there was no participant with multiple risk factors. In 27.9% (17 blood donors) of HIV-positive cases and 96.4% (216 blood donors) of controls, no risk factor was reported. [7]

Discussion

In our study, we found that some demographic characteristics put volunteers at greater risk: middle-aged and first-time blood donors are more likely to be at risk of HIV infection. By comparison, the population of Iranian blood donors was mostly between 20 and 30 years old (36.7%) in 2007, [7] and in the general population of HIV-infected individuals, cases were mostly (46.4%) in the 25-34 age range. [8] In 2012, Mariston et al. found the 29-39 age range to be the most prevalent among HIV-positive blood donors. [9] In another study, carried out in Malawi, there was a highly significant positive association of HIV prevalence with being in the age group of 25-29 years for females and 30-34 years for males. The minor discrepancy between the age groups in our study and the general population of HIV-infected cases may be related to greater awareness among young blood donors of the negative effects of HIV test-seeking behavior, the result of IBTO's growing awareness-raising campaigns among university students and young people. Furthermore, the majority of our cases were male and, as found in the Malawi study, men tend to be older than women among cases.

Our results indicate that cases are mostly first-time blood donors (67%). This confirms the findings of another study, in which Amini et al. found that the frequency of HIV in repeat blood donors was significantly lower than in first-time blood donors between 2006 and 2007. [10] In another study analyzing the prevalence of HIV among Brazilian blood donors, HIV prevalence was shown to be 22% higher among first-time donors than replacement donors. [9]

Illiteracy was more frequent among cases than controls (13.1% vs. 2.5%). Illiterate individuals were also found to present the highest risk of being HIV-positive donor candidates in Brazil. [10] This may be because of a lower rate of risky behavior among educated people, or because more highly educated HIV-positive individuals may avoid donating because of their knowledge.

Among known HIV risk factors, we did not find any significant association between HIV positivity and either phlebotomy (Hijamat) or a history of blood transfusion. Based on the National Blood Policy in Iran, undergoing Hijamat defers a volunteer from blood donation for 6 months. These results confirm the finding of the Ministry of Health report indicating that since 2007 there has been no reported case of HIV transmission through blood transfusion.
[11] However, IV drug abuse, a history of imprisonment, and risky sexual behaviors were found to have significant effects on HIV positivity. Given that a large proportion of prisoners are drug addicts, these findings are compatible with other studies conducted in Iran and other countries suggesting that drug injection inside prison carries a greater risk of HIV infection. [12,13] The prevalence of HIV is 13.4% among injecting drug users, which is dramatically high compared with the general population. [14] In a study among community-based drug users in Tehran, the prevalence of HIV infection among male IVDUs was reported to be 23.2%. In a multiple analysis, a history of shared drug injection inside prison (OR: 2.5) and multiple incarcerations (OR: 3.13) were associated with a significantly higher prevalence of HIV infection. [15] Other studies conducted among IV drug users in Tehran support our results; in these, a history of shared injection inside prison was found to be the most important risk factor associated with HIV infection. [16,17]

After injecting drug use, a significant proportion (17.1%) of registered cases of HIV transmission in Iran is attributed to unsafe sexual contact. [18] Although male-male sex has been identified as the most significant risk factor in other countries (Brazil [19] and the United States [20]), in Iran risky sexual behaviors ranked third in importance. This may be due to religious beliefs and criminal laws that ban such relations in Iran. Nevertheless, in the biobehavioral survey of inmates in 2009, 15.6% of men reported sexual contact with other men, and the prevalence of HIV among this subset of MSM was found to be 3.7%. [5]

We believe that most of our excluded cases may have had some risk factors. These donors gave a wrong address or phone number, and some did not return despite former willingness during the first recall. They may mostly have been test seekers who did not give correct personal details to escape the consequences. In a study conducted in Brazil in 2010, it was reported that test-seeking HIV-positive blood donors believe it is acceptable not to answer questions truthfully in order to donate blood and get tested for HIV through donation. [21]

Due to the small number of cases, we were unable to identify the prevalent risk factors in each province. In provinces with more HIV-positive cases, such as Kermanshah and Golestan, [22] the rate of deferral from blood donation was lower than the average deferral rate for the whole country, which was 25.6% in the same year; [23] in other HIV-prevalent provinces, such as Tehran, [24] Fars, and Hormozgan, [22] it was equal to that rate. Considering the high prevalence of HIV in those provinces, it is necessary to apply more stringent criteria for the selection of blood donors.

Conclusion

In the donor selection step, attention should be paid to vulnerable populations, especially first-time donors in the 25-40 age range. Donation department staff should be trained regularly and receive feedback about donors who test positive for transfusion-transmissible infections (TTI).
Alternative impression technique for multiple abutments in a difficult-to-control case

BACKGROUND Even though excellent impression materials are now available for making accurate replications of hard and soft tissue, many dentists face obstacles when making simultaneous impressions of multiple abutments. CASE DESCRIPTION This article describes a modified method of tray fabrication using auto-polymerizing acrylic resin, and an impression technique for multiple prepared teeth, for cases with limitations and difficulties in taking dental impressions. CLINICAL IMPLICATION This segmental tray technique has several advantages, including higher impression quality, fewer impressions, and being more comfortable for the patient and less stressful for the clinician.

INTRODUCTION

Various technical and practitioner factors influence the precision of an impression. Most of these factors can be controlled by careful manipulation. However, various patient-related factors are essentially beyond the dentist's control, including the gag reflex, microstomia, limited mouth opening, and difficulties in saliva control and bleeding. Especially in the case of a full-arch impression of multiple prepared teeth in full mouth rehabilitation, the inherently limited working time of the impression material, coupled with patient factors, makes the prosthetic procedure challenging. Some modifications have been made to impression-tray fabrication techniques in order to obtain a complete impression in cases of limited mouth opening, full mouth rehabilitation, and so on. [1-11] The purpose of this article is to design and fabricate a segmental tray system and to introduce an effective way of making an accurate impression of multiple teeth for a fixed partial denture in a case with difficult saliva control and limited mouth opening.

TECHNIQUE

Tray fabrication procedure

The impression tray was fabricated using the following procedure:

1. The diagnostic cast was constructed (Fig. 1).
2. The arch for the impression was divided into two or three segments, each of which usually included two or three prepared teeth for ease of management, and each segment was marked with a pencil.
3. The individual segmental trays were constructed (Fig. 2). One sheet of baseplate wax was applied for relief, and then the resin mixture in the dough stage was extended 1-2 mm over the cervical margins of the prepared teeth. This extension of the segmental tray acted as a tray stop. A small wing was attached to the buccal or labial side of each segmental tray to allow the simultaneous removal of both segmental trays and an overlay tray.
4. After each individual segmental tray was seated on the cast, an overlay tray was fabricated with baseplate wax relief (Fig. 3). The overlay tray was precisely positioned with the aid of an indentation around each wing with 1 mm of leeway (Figs. 4 & 5).

Impression procedure

The impression was made using the following procedure:

1. After a conventional gingival displacement procedure, an appropriate adhesive was applied to the internal surface of all trays, and particularly to the external surfaces of the segmental trays.
2. After removal of the retraction cords, impression material (Pentamix, 3M ESPE, Germany) was syringed around the prepared teeth in a segment, and a segmental tray containing the impression material was positioned. Excess material was removed from around the tray to ensure precise vertical positioning of the overlay tray.
3. After the impression material of a segmental tray had set, the above procedure was repeated for the next segment without removing the previously placed segmental tray.
4. To reduce the number of segmental trays, an overlay tray can be used to take an impression of the remaining prepared teeth.
5. After the impression material in the overlay tray hardened, all trays were removed together by holding both wings of the segmental trays and a handle or the margin of the overlay tray, to avoid dimensional change and flexural deformation of the impression material despite the presence of segmentally different paths (Fig. 6).

Fig. 1. View of a model requiring impressions of multiple prepared teeth.

DISCUSSION

Full-arch impressions are the most difficult to manage in prosthetic dental treatment, but they are frequently required. These impressions pose many problems for dentists, including dimensional change of the impression material and dental stone, along with the limited working time of the impression material. A total working and setting time of 4 minutes with a snap set is generally regarded as adequate for most procedures. 12 These difficulties are likely to make it necessary to repeat impression procedures in a patient. Gardener proposed an intraoral coping technique for making impressions of multiple preparations. 6 Vasilakis proposed a cast impression coping technique that removed the need for a retraction technique, providing a better impression environment. 7 However, in these techniques the individual cast copings had no stops, making it questionable whether the material would have a uniform thickness and whether the stock tray would trap all the cast copings undermining the subgingival area upon its removal. In addition, flexural deformation remains a problem in both techniques.

The above-mentioned problems can be solved by ensuring the union of the segmental trays and the overlay tray during their removal from the mouth, and also by attaching a structure such as a handle to each coping or individual tray for the stable removal of the segmental impression without flexural deformation. 13 In this technique, a handle strong enough to sustain the force required for removal was attached to each segmental tray. Its thickness was 1.5 mm, and its length was determined by the impression area. The overlay tray had an indentation around the handle of each segmental tray and covered all segmental trays sufficiently for precise positioning, which increased the dimensional stability of the overall impression.

One of the advantages of this segmental technique is that it allows the clinician to focus on syringing around no more than two or three teeth, which improves the accuracy of the margin and of the narrow zone of unprepared tooth apical to the finishing line, 2,12 even in difficult cases such as compromised gingival health, the use of chemical agents for bleeding control, 12,13 or an uncooperative patient, while allowing for the limited working time of the materials. In conclusion, the segmental impression technique applied in this study has several advantages, including higher impression quality, fewer impressions, fewer remakes, and being more comfortable for the patient and less stressful for the clinician. Although the clinical results have been satisfactory, scanning electron microscopy should be used to characterize the dimensional stability of the master cast and the accuracy of the margin compared with the conventional impression method.
Neutralization of five SARS-CoV-2 variants of concern by convalescent and BBIBP-CorV vaccinee serum

The prevalence of SARS-CoV-2 variants of concern (VOCs) is still escalating throughout the world. However, a comparative analysis of the level of neutralization by inactivated-virus vaccine recipients' sera and convalescent sera against all VOCs, including B.1.1.7 (Alpha), B.1.351 (Beta), P.1 (Gamma), B.1.617.2 (Delta), and B.1.1.529 (Omicron), has been lacking. We therefore constructed pseudoviruses of the five VOCs using a lentiviral-based system and analyzed their infectivity and neutralization resistance to convalescent and BBIBP-CorV vaccinee serum at different times. Our results show that, compared with the wild-type strain (WT), the five VOC pseudoviruses showed higher infectivity, with the B.1.617.2 and B.1.1.529 pseudoviruses exhibiting higher infection rates than the wild-type or the other VOC strains. Sera from 10 vaccinated individuals at 1, 3, and 5 months after the second dose, and from 10 convalescents at 14 and 200 days after discharge, retained neutralizing activity against all strains but exhibited significantly decreased neutralization of the five VOC pseudoviruses over time compared to WT. Notably, 100% (30/30) of the vaccinee serum samples showed more than a 2.5-fold reduction in neutralizing activity against B.1.1.529, and 90% (18/20) of the convalescent serum samples showed more than a 2.5-fold reduction in neutralization against B.1.1.529. These findings demonstrate the reduced protection against the VOCs in vaccinated and convalescent individuals over time, indicating that a booster shot is necessary and that new vaccines capable of eliciting broadly neutralizing antibodies should be developed.

Although recent studies have evaluated the neutralization of some VOCs by convalescent and/or vaccine sera, a comparative analysis of the neutralization resistance of all five VOCs to inactivated-virus vaccine and convalescent sera has to date been lacking. Here, we therefore constructed six pseudotyped viruses displaying spike proteins derived from wild-type SARS-CoV-2 and the five VOCs, including B.1.1.7 (Alpha), B.1.351 (Beta), P.1 (Gamma), B.1.617.2 (Delta), and B.1.1.529 (Omicron), and analyzed their infectivity in HEK293T cells expressing the ACE2 protein. We then compared the resistance of these pseudoviruses to neutralization using sera obtained from ten vaccinees at 1, 3, and 5 months after receipt of the second dose of the inactivated-virus vaccine BBIBP-CorV (Sinopharm, China), and from ten convalescents at 14 and 200 days after discharge.

Bioinformatics

The wild-type SARS-CoV-2 whole-genome sequence (WT, NC_045512.2) and the variant sequences (B.…) were analyzed phylogenetically (Trifinopoulos et al., 2016), employing GTR as the substitution model and 1000 bootstrap replications. Furthermore, the crystal structure of the spike protein complexed with ACE2 was obtained from the Protein Data Bank (PDB ID: 7A98) (Benton et al., 2020), and spike mutations were mapped onto it using the BIOVIA Discovery Studio Visualizer 2020.

Human sera

Ten COVID-19 convalescent patients and ten BBIBP-CorV vaccinees were recruited from the Shanghai Municipal Public Health Center and Fudan University, respectively. The basic information on the volunteers is listed in Supplementary Table S1, including gender, age, disease severity, and sampling time interval. The study was approved by the School of Life Sciences at Fudan University (BE2001). For the BBIBP-CorV vaccinees, blood samples were obtained 1, 3, and 5 months after the second (booster) dose of the vaccine.
For the COVID-19 convalescent patients, blood sampling followed the timing of two follow-up visits after discharge: the first at approximately 14 days and the second at approximately 200 days.

Construction and production of pseudoviruses

The spike segments of SARS-CoV-2 wild-type and the five variants were synthesized by GeneScript Company (Nanjing, China) and inserted into the pcDNA3.1 plasmid. All spike proteins were optimized, with 18 amino acids removed and an HA tag attached. The plasmid pLenti.GFP.NLuc, which expresses green fluorescent protein (GFP) and luciferase simultaneously, was maintained in our laboratory. Plasmid psPAX2 was used for packaging the lentiviruses together with pLenti.GFP.NLuc and the spike-pcDNA3.1 constructs.

Infection assay

To measure the infectivity of the pseudoviruses, 293T-hACE2 cells and Huh-7 cells (10^4 cells per well) were seeded in 96-well plates, and 3 μL [multiplicity of infection (MOI) = 0.3] of the pseudoviruses was added to each well with 100 μL culture medium. The plates were then incubated at 37 °C with 5% CO2. After 12-16 h, the culture medium was refreshed, and the cells were cultured for another 48 h. Luciferase activity was measured with a luciferase reporter gene assay kit (Yeasen, cat. 11401ES80, China) and a SYNERGY2 microplate reader (BioTek, USA) to reflect the infectivity of the pseudoviruses.

Neutralization assay

The neutralization assay of pseudoviruses was performed as described previously (Wang et al., 2021). The 293T-hACE2 cells were seeded in a 96-well plate (1 × 10^4 cells per well) and cultured for 12 h. Serial dilutions of serum (starting at a 1:40 dilution) were incubated with 3 μL (MOI = 0.3) of the different SARS-CoV-2 pseudoviruses at 37 °C for 30 min before being added to the cells. Each dilution had three biological replicates. Subsequently, the culture medium on the cells was removed, and 100 μL of the serum-pseudovirus mixture was added and incubated at 37 °C for 24 h. The supernatant was then discarded, and the cells were overlaid with fresh DMEM with 10% FBS and 1% P/S. After 48 h, chemiluminescence was measured with the luciferase reporter gene assay kit (Yeasen, cat. 11401ES80, China). Infectivity was defined as the ratio of the relative luminescence units (RLU) of cells that received the serum-pseudovirus mixture to the RLU of cells infected with pseudovirus alone. Neutralization was calculated as (1 - infectivity) × 100%. IC50 (50% inhibitory concentration) values were calculated from a three-parameter dose-response inhibition curve after transforming the dilutions to log10, in GraphPad Prism 8.

Quantification and statistical analysis

GraphPad Prism 8 was used for drawing plots and for statistical analysis. All statistical tests were performed as described in the figure legends. Fisher's precision probability test was used to assess the differences in IC50 between neutralizing antibodies against the mutants. Nonlinear regression curve fitting was performed to calculate IC50 values.

Fig. 2. Neutralization capability of sera from vaccinees at three different time points post vaccination. Sera samples were collected from ten vaccinees (V001-V010) at 1, 3, and 5 months post the second dose of vaccination. The neutralization activity of sera against WT and VOC pseudoviruses was determined by neutralization assay. All data were obtained from three independent experiments (mean ± SEM). The X axis shows the log10 serum dilution ratio, and the Y axis shows the neutralization by serum.
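As a rough illustration of the readout just described, the sketch below computes percent neutralization from RLU ratios and fits a three-parameter inhibition curve against log10 dilutions to recover an IC50 titer. The numbers are hypothetical, and the study itself used GraphPad Prism 8 rather than this code.

```python
import numpy as np
from scipy.optimize import curve_fit

def neutralization_pct(rlu_serum, rlu_virus_only):
    """Neutralization (%) = (1 - infectivity) * 100, where infectivity is
    RLU(serum + pseudovirus) / RLU(pseudovirus alone)."""
    return (1.0 - rlu_serum / rlu_virus_only) * 100.0

def inhibition_curve(log_dilution, bottom, top, log_ic50):
    # Three-parameter dose-response inhibition curve (Hill slope fixed at 1):
    # high neutralization at low dilutions, decaying toward `bottom`.
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_dilution - log_ic50))

# Hypothetical readout: 2-fold serial dilutions starting at 1:40
dilutions = 40 * 2 ** np.arange(8)                    # 1:40 ... 1:5120
neut = np.array([95, 92, 85, 70, 50, 30, 15, 8.0])    # % neutralization

params, _ = curve_fit(inhibition_curve, np.log10(dilutions), neut,
                      p0=[0.0, 100.0, np.log10(320)])
ic50_titer = 10 ** params[2]  # reciprocal serum dilution giving 50% neutralization
print(f"IC50 (reciprocal titer) ~ {ic50_titer:.0f}")
# Fold change in neutralization vs WT (as in the paper's heatmaps):
# fold = ic50_wt / ic50_variant
```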
A P-value of less than 0.05 was considered statistically significant (*P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001; ns, no significant difference between groups). Statistical significance was calculated by t-test.

Construction of pseudotyped viruses related to the variants

SARS-CoV-2 has undergone great diversification since the outbreak. The WHO has classified five lineages as VOCs: B.1.1.7 (Alpha), B.1.351 (Beta), P.1 (Gamma), B.1.617.2 (Delta), and B.1.1.529 (Omicron). The genealogical evolution suggests that the B.1.1.529 variant was derived from the B.1.1.7 variant, but there is a large discrepancy between them (Fig. 1A). All five reported VOCs have mutations in the RBD and the NTD. The B.1.1.529 variant has 34 mutations in the spike protein. Some amino acid changes (Δ69/70, T95I, G142D, Δ145, K417N, T478K, N501Y, and P681H) are shared mutations also found in the Alpha, Beta, Gamma, or Delta VOCs. The interaction of the ACE2 receptor with the spike protein of the five VOCs is shown in Fig. 1B. To study the immune-evasion ability of the current VOCs, we constructed a total of six pseudoviruses (five VOCs and WT) expressing the spike proteins on lentivirus vectors (Fig. 1C); the titers of all pseudoviruses were adjusted to parity.

Infectivity of pseudoviruses related to the five VOCs

To investigate the infectivity of the pseudoviruses bearing the VOC spike proteins, relative luminescence activity was taken to reflect infectivity. The different pseudoviruses were inoculated into 293T-hACE2 and Huh-7 cells at an MOI of 0.3. Our findings showed that the five VOC pseudoviruses displayed a 1.83-6.3-fold increase in infectivity at 72 h post-infection in the two cell lines compared to WT (Fig. 1D). Strikingly, the B.1.617.2 and B.1.1.529 variants exhibited higher infection rates than the wild-type or the other VOC variants, whereas no significant difference was found between the B.1.617.2 and B.1.1.529 variants in 293T-hACE2 and Huh-7 cells. These data suggest that the mutations in the B.1.617.2 and B.1.1.529 variants promote viral entry into ACE2-expressing cells.

Reduced neutralization by vaccinee sera

To compare the neutralizing activity of antibodies induced by the inactivated vaccine against WT and the VOCs, and to explore the effect of time on neutralizing activity, neutralization experiments were conducted using the luciferase-expressing lentiviral pseudotyping system, and geometric mean titers (GMTs) were calculated to assess neutralizing efficacy. Changes in neutralization sensitivity were quantitated by the mean IC50. We accrued a cohort of 10 BBIBP-CorV vaccinees (Supplementary Table S1).

Fig. 3. Comparison of the resistance of SARS-CoV-2 variants to inactivated-vaccine sera. A: The fold changes in neutralization activity of vaccine sera are shown as the ratio of IC50 between the variant and WT SARS-CoV-2 (IC50 WT SARS-CoV-2/IC50 variant SARS-CoV-2) in a heatmap, with darker color implying greater change (≥2.5-fold). B: Comparison of the neutralization activity of sera from the ten vaccinees (V001-V010) at one month post the second dose of vaccination against the different variants. The IC50 (50% inhibitory concentration) values were calculated from a three-parameter dose-response inhibition curve. All data were obtained from three independent experiments (mean ± SEM). Mean fold changes in IC50 values are denoted above the P values.
Statistical analysis was performed using a ratio paired t-test. ***P < 0.001; ****P < 0.0001.

The entire cohort had a median age of 36 years (range: 22-56) and was 50% male. The serum samples were collected from the vaccinated individuals at 1, 3, and 5 months post the second dose of vaccination. The neutralization curves for the vaccinated individuals are presented in Fig. 2. The results showed that sera from the BBIBP-CorV vaccinees still had neutralizing activity against the VOCs and WT five months after vaccination (Fig. 2). The neutralization IC50 titers relative to WT were quantified and tabulated as fold increases or decreases (Fig. 3A). We found that the neutralizing activity of the vaccinee sera against the VOCs was decreased at 1, 3, and 5 months after vaccination (Figs. 2 and 3A). Of note, 100% (30/30) of the serum samples had more than a 2.5-fold reduction in neutralizing activity against B.1.1.529 (Fig. 3A). At one month after the second dose of vaccination, the IC50 of neutralizing activity against the VOCs was reduced by 2.79-, 2.83-, 4.41-, 4.30-, and 6.08-fold compared to WT, respectively. The IC50 of neutralizing activity against B.1.1.529 was significantly decreased compared with the other VOCs except P.1 (Fig. 3B).

Next, we compared the neutralizing activity against the same variant at different time points. We found that the IC50 of neutralizing activity against all VOCs and WT in the vaccinees' sera decreased significantly five months post-vaccination (WT: 2.96-fold, B.…) (Fig. 4). A similar reduction in neutralizing activity was observed between the 1-month and 3-month sera and between the 3-month and 5-month sera, respectively.

Reduced neutralization in COVID-19 convalescent sera

To evaluate the neutralizing activity of COVID-19 convalescent sera against the VOCs, neutralization assays were conducted with WT and the five VOCs. Serum samples from 10 convalescent volunteers infected with WT SARS-CoV-2 were obtained from the Shanghai Public Health Clinical Center at two different time points (an average of 14 days and 200 days after discharge) (Supplementary Table S1). The entire cohort had a median age of 53 years (range: 33-68) and was 40% male, with one severe case. The serum samples were likewise diluted for the neutralization experiments. Similar to the inactivated-virus vaccinee sera, we found that serum samples from all convalescent volunteers at 14 and 200 days after discharge maintained neutralizing activity against the VOCs and WT (Fig. 5).

Fig. 4. Comparison of the neutralization activity of vaccinee sera against the same variant at 1, 3, and 5 months post vaccination. All data were obtained from three independent experiments (mean ± SEM). Mean fold changes in IC50 values are denoted above the P values. Statistical analysis was performed using a paired t-test. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001.

Discussion

SARS-CoV-2 VOCs have multiple mutations in the spike protein, which might alter important biological properties of the virus, including the efficiency of entry into target cells and the degree of vaccine protection. Here, we constructed five VOC-related pseudoviruses using a lentivirus-based system and systematically studied the effects of the variants on viral infectivity and on neutralization resistance to convalescent sera and inactivated-virus vaccine sera. Our findings showed that the VOC pseudoviruses displayed a 1.83-6.3-fold increase in infectivity at 72 h post-infection in two cell lines compared to WT.
Strikingly, the B.1.617.2 and B.1.1.529 variants exhibited higher infection rates than the wild-type or other VOC strains, whereas no significant difference was found between the B.1.617.2 and B.1.1.529 variants in 293T-hACE2 and Huh-7 cells. These data suggest that the mutations in the B.1.617.2 and B.1.1.529 variants promote viral entry into ACE2-expressing cells. Garcia-Beltran et al. reported that the Omicron pseudovirus continues to rely upon the human ACE2 receptor for target cell entry and infects target cells 4-fold more efficiently than wild-type pseudovirus and 2-fold more efficiently than Delta pseudovirus (Garcia-Beltran et al., 2022). The slightly inconsistent fold changes could be related to several factors, such as the cell lines infected (Mautner et al., 2022), virus titer, and the time from pseudovirus infection to assay, but the trend in infectivity is consistent.

In this study, our findings showed that sera from the 10 vaccinated individuals at 1, 3, and 5 months post the second dose retained neutralizing activity against all VOC strains, but the neutralization activity significantly decreased owing to the escape mutations and the waning of antibodies over time. Similar trends have also been reported elsewhere. Wang et al. reported an approximately 12.5-fold decrease in neutralization against the Omicron variant in recipients who had received two doses of inactivated vaccine (Wang et al., 2022a, b). In a population-based study of BNT162b2-vaccinated Africans, the vaccine-elicited neutralization against the Omicron variant showed a 22-fold reduction compared with the ancestral SARS-CoV-2 strains (Cele et al., 2022). Available evidence shows that the efficacy of the major vaccines used worldwide against the VOCs is significantly reduced (Collier et al., 2021; Garcia-Beltran et al., 2021, 2022; Greaney et al., 2021; Hoffmann et al., 2021, 2022; Liu J. et al., 2021; Meng et al., 2022; Suzuki et al., 2022). Some studies have shown a considerable decrease in the neutralizing potency of two doses of inactivated vaccines or RNA vaccines against Omicron (Cheng et al., 2022; Garcia-Beltran et al., 2022; Greaney et al., 2021; Hoffmann et al., 2022; Lu et al., 2021; Meng et al., 2022; Suzuki et al., 2022; Wang Y.D. et al., 2022; Zhang et al., 2022). The Omicron lineage of SARS-CoV-2 spread rapidly to become globally dominant and has now split into a number of sublineages, including BA.1, BA.1.1, BA.2, BA.2.12.1, BA.2.75, BA.3, BA.4, and BA.5. Several recent studies showed that sera from individuals who received three doses of vaccines (Pfizer, AstraZeneca, CoronaVac, or BBIBP-CorV) have a reduced ability to neutralize BA.4, BA.5, and BA.2.75 compared with BA.1 (Ai et al., 2022; Cao et al., 2022; Tan et al., 2022; Tuekprakhon et al., 2022). These data suggest that the BA.4, BA.5, and BA.2.75 subvariants can substantially escape neutralizing antibodies induced by vaccination.

Fig. 5. Neutralization capability of sera from convalescents at 14 days and 200 days after discharge. Sera samples were collected from ten convalescents (P001-P010) at 14 and 200 days after discharge. The neutralization activity of sera against WT and VOC pseudoviruses was determined by neutralization assay. All data were obtained from three independent experiments (mean ± SEM). P007* is a severe case. The X axis shows the log10 plasma dilution ratio, and the Y axis shows the neutralization by serum.
An Omicron-specific vaccine should be developed, as it would be expected to elicit neutralizing antibodies that help reduce the immune escape of the Omicron strain.

Individuals exposed to SARS-CoV-2 produce antibodies that display neutralization activity. Zhang et al. reported that the neutralization activity of convalescent sera against Omicron was reduced by 8.4-fold, whereas the neutralization activity of convalescent sera against the other VOCs and VOIs was decreased by 1.2-4.5-fold compared to the D614G strain. Wang et al. reported that 16 convalescent serum samples showed an average 10.5-fold reduction in neutralization against the Omicron variant when compared with WT SARS-CoV-2 (Wang Y.D. et al., 2022). In another study, the neutralization activity of sera from ten convalescents showed a 32-fold drop against the Omicron variant compared to WT SARS-CoV-2. Consistent with these reports, we observed that sera of convalescents at 14 and 200 days after discharge exhibited significantly decreased neutralization activity against the VOC pseudoviruses compared to WT. All of these data reveal that the Omicron variant can easily escape the neutralization activity of convalescent sera.

Fig. 6. Comparison of the resistance of SARS-CoV-2 variants to convalescent sera. A: The fold changes in neutralization activity of convalescent sera are shown as the ratio of IC50 between the variant and WT SARS-CoV-2 (IC50 WT SARS-CoV-2/IC50 variant SARS-CoV-2) in a heatmap, with darker color implying greater change (≥2.5-fold). P007* is a severe case. B: Comparison of the neutralization activity of sera from the convalescents (P001-P010) at 14 days after discharge. All data were obtained from three independent experiments (mean ± SEM). Mean fold changes in IC50 values are denoted above the P values. Statistical analysis was performed using a ratio paired t-test. *P < 0.05; **P < 0.01.

Conclusions

In summary, we demonstrated that sera from vaccinated individuals at five months post the second dose of vaccination, and from convalescents at 200 days after discharge, retained neutralizing activity against all SARS-CoV-2 strains. However, both the vaccinated and the convalescent individuals exhibited decreased neutralization activity against all VOCs, especially the Omicron variant, compared to WT. It is necessary to reveal the precise mechanisms through which the large number of mutations in the VOCs facilitates immune escape from neutralizing antibodies, and to develop new vaccines capable of eliciting broadly neutralizing antibodies.

Data availability

All the data generated during the current study are included in the manuscript.

Ethics statement

This study was approved by the Ethics Committee of the Shanghai Public Health Clinical Center and Fudan University. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. No animal experiments were involved.

Conflict of interest

Author Jun Liu is employed by Fubio (Suzhou) Biomedical Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Dengue fever: a re-emerging disease

Dengue is a rapidly spreading mosquito-borne disease and is now endemic in 100 countries worldwide. India has reported a several-fold increase in the number of dengue cases in the past decade. A considerable number of cases of dengue fever occurred in the population of a small town in western Maharashtra in September 2019. Fortunately, the outbreak was controlled in time by proactive action initiated by the health authorities. In this paper, we aim to share the challenges faced in controlling the outbreak and the lessons learnt during the exercise, and to recommend suitable preventive measures to prevent similar occurrences in the future. Surveys during the period of the outbreak were also carried out.

Cases of dengue fever have been reported from this town in earlier years as well: in 2015, 2016, 2017, and 2018, only 9, 11, 14, and 10 cases were reported, respectively. In 2019, in a short span of 15 days from 8 to 23 September, 17 cases of dengue fever were reported. This is clearly far in excess of the expected number of cases and constitutes an outbreak.

The study team reviewed all the cases admitted to the local hospital. This included eliciting a detailed clinical and epidemiological history. Sudden-onset fever with chills, intense headache, muscle and joint pains, retro-orbital pain, nausea, vomiting, extreme weakness, colicky pain and abdominal tenderness, dragging pain in the inguinal region, sore throat, general depression, anorexia, restlessness, lethargy, constipation, altered taste sensation, and photophobia constituted the clinical case definition of dengue fever. Any case having three or more of the above symptoms was taken as a probable case. 1,2 A case compatible with the clinical description and with a positive NS1 antigen ELISA test was taken as a confirmed case. 1,3 The outbreak was studied and analysed in terms of the time, place, and person distribution of the cases. The workers also conducted a detailed environmental survey. Ethical clearance from the institutional ethics committee was obtained prior to the collection of data. Standard statistical tools were utilised for data analysis.

RESULTS

In all, 17 cases who fulfilled the case definition criteria were admitted in September 2019. Of these, 16 (94.11%) were confirmed by NS1 antigen ELISA, while 1 (5.88%) was a probable case. At the time of admission, 15 (88.23%) cases presented with moderate- to high-grade fever (mean temperature 101.7°F), 11 (64.70%) had intense headache, 7 (41.17%) had …, 15 (88.23%) had muscle and joint pain, and 3 (17.64%) had vomiting. On laboratory investigation, 3 (17.64%) cases had leucopenia. The mean total leucocyte count was 6588/cu mm, while the mean lymphocyte count was 31.8%. The first case was reported on 8 September 2019, followed by a sudden spurt in the number of cases. The weekly epidemic curve is depicted in Figure 1. The median incubation period was approximately 3 days. There were no complications or fatalities. The age distribution of the cases is shown in Table 1, and the sex distribution in Table 2.

Detailed environmental assessment of the area revealed Aedes mosquito breeding in artificial containers of water, such as broken and discarded pots and flower pots with plates beneath them, in the close vicinity of the houses from which the cases were reported. The investigators carried out a quick survey of systematically selected houses in the station in order to assess the number of houses where Aedes larvae were found in water containers.
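For reference, the two entomological indices reported in the next paragraph are simple percentages (house index: percentage of inspected houses positive for Aedes larvae or pupae; container index: percentage of inspected water containers positive for larvae or pupae). A minimal sketch, using the survey counts given below as inputs:

```python
def house_index(positive_houses: int, inspected_houses: int) -> float:
    """Percentage of inspected houses positive for Aedes larvae/pupae."""
    return 100.0 * positive_houses / inspected_houses

def container_index(positive_containers: int, inspected_containers: int) -> float:
    """Percentage of inspected water containers positive for larvae/pupae."""
    return 100.0 * positive_containers / inspected_containers

# 53 positive houses out of 750 screened, as in the survey described below
print(f"House index: {house_index(53, 750):.2f}%")  # ~7% house index
```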
In this survey, out of a total of 750 houses screened, 53 were found to have Aedes mosquito breeding in and around them, giving a house index of 7.06%. The number of containers found to have larvae or pupae gave a container index of 6.06%. Limited meteorological data were available locally; however, the local population reported that it had rained intermittently for nearly a month beforehand.

Figure 1: Epidemic curve.

Immediate preventive measures were instituted by the health officials deputed to visit the station. Proper disposal of all artificial containers of water was carried out, all possible breeding sites were identified and emptied of collected water, and outdoor fogging was also carried out.

DISCUSSION

In another study, Kunwar et al. reported 266 cases of dengue fever in 2013. 1 In their study, fever was present in all cases (100%), body ache in 69.9%, and headache in 49.2%, which differs from our results. Huy, Hoa, Thuy, Kinh, Ngan, Duyet et al. reported 31.3% of cases with warning signs, 11.4% with severe dengue, and 0.8% mortality in their study of 2922 patients, which again differs from our study. 7 Cariappa et al. reported a house index of 5.38% and a container index of 4.16% in their study, which is also different from ours. 8 The present outbreak was due to intermittent rain in the preceding month, which contributed to Aedes mosquito breeding in peridomestic areas. Extensive health education of the local population on the subject was carried out, and immediate preventive measures were instituted. Thus, the outbreak was controlled well in time, before it reached disastrous proportions.

CONCLUSION

Our study depicts how, despite several massive outbreaks having occurred in the past both in India and abroad, complacency sets in regarding strict environmental monitoring to identify breeding sites, proper disposal of solid waste, and education of all concerned regarding mosquito-borne diseases. The study thus highlights the necessity of strict environmental monitoring by all the authorities concerned in the country to prevent morbidity and mortality due to dengue fever and other mosquito-borne diseases. The menace of dengue fever will continue until we learn to manage our solid waste properly, as dengue is, after all, a man-made disease due to improper solid waste disposal.

Recommendations

Prevention and control of dengue should be public-health based, besides routine anti-larval measures. The strategy should include: repeated and timely environmental surveys of the entire area to assess breeding sites; behavioural surveys of the population; continuous health education, reinforced from time to time; adequate coordination at all times between the various agencies involved, such as the community, health agencies, and administrative authorities; and fostering more research into the development of a suitable vaccine for dengue.
Pleuropericardial Cyst: A Review of the Literature

Background: Pleuropericardial cysts (PPCs), accounting for 5-10% of all mediastinal tumours, are rare lesions occurring in approximately 1 in 100,000 persons and are usually congenital and rarely acquired. They are detected post-mortem or incidentally on routine chest X-ray (CXR), and in most cases multidetector computed tomography is used to confirm the diagnosis. As a benign course and clinical latency are characteristic features of such cysts and the occurrence of complications is rare, the majority of them can be left untreated. Methods: The aim of the study is to review the literature regarding PPCs and create a table summarising all the published cases in order to draw conclusions about the epidemiology, as well as the diagnostic and therapeutic approach to, PPCs exclusively. We retrospectively reviewed the clinical manifestations and the diagnostic and therapeutic approach in 101 cases of PPCs since the 19th century. Results: Our statistical analysis led to the following results: mean age of initial detection: 48.7 ± 17.2 years; female:male ratio: about 3:2; presence of symptomatology: 37/101 cases; most common location: right cardiophrenic angle (RCPA); most common method of initial detection: CXR in 49/101 cases; mean maximal diameter: 8.3 ± 3 cm. Conclusion: The management of a pleuropericardial cyst should be based on an algorithm in which the cyst's size, shape, and compressibility, along with the clinical presentation and the patient's fitness and preferences, are taken into consideration. When intervention is required, surgical resection by means of traditional open surgery or minimally invasive methods is considered the gold standard; these, along with percutaneous aspiration, are the methods that have mostly been used.

Introduction

Pleuropericardial cysts, accounting for 5-10% of all mediastinal tumours, are rare lesions occurring in approximately 1 in 100,000 persons and are usually congenital but exceptionally acquired. They are detected post-mortem or incidentally on routine chest X-ray, and in most cases multidetector computed tomography is used to confirm the diagnosis. As a benign course and clinical latency are characteristic features of such cysts and the occurrence of complications is rare, the majority of them can be left untreated. … 53-year-old woman. 5 Symptoms coexisting with such cysts were first described by Freedman and Simon, D'Abreu and Churchill, and Mallory. Reviewing the literature, it was Adrian Lambert who was the first to propose a pathogenesis, noting the similar embryological origin of PPCs and diverticula from disconnected mesenchymal lacunae, which normally fuse to develop the pericardial coelom. He also attempted for the first time to differentiate the thin-walled cysts of the mediastinum, all of which had previously been reported as "probably of lymphatic origin". 6 Greenfield et al. introduced the term "spring water cyst". 7 By 1958, at least 120 cases of mesothelial cysts had been reported. 8,9

PPC walls are made up of a single layer of mesothelial cells and a loose stroma of fibrous tissue with collagen and elastic fibres. They usually contain clear, serous fluid, which is why they are also called 'spring water' cysts. 10 Finally, both the expression of epithelial membrane antigen and calretinin and the absence of an actin-positive subepithelial smooth muscle layer may be helpful in the diagnosis of a PPC. 11
The aim of this study was to review the literature and present a review article about PPCs, including a table with all the data from: a) all the published cases reported as "pleuropericardial cysts" in the title, and b) some of the published cases described as "pericardial cysts" in the title which are also called "pleuropericardial cysts" either in other review articles or even in the same article. To our knowledge, this is the first organized attempt to review the whole literature with a focus on PPCs. Finally, we statistically analysed the data on the age of initial detection, gender, and cyst size and location, in order to draw conclusions regarding the epidemiology of PPCs exclusively.

Methods

We aimed to review the literature regarding PPCs and create a table summarising all the published cases in order to draw conclusions about the epidemiology, as well as the diagnostic and therapeutic approach, of PPCs exclusively. We first searched the PubMed and Medline databases for any publications concerning PPCs. After this, we searched for a systematic review reporting PPCs; no systematic reviews were found. We found only a review of the treatment of four cases with video-thoracoscopy, and a review of benign cysts of the mediastinum. We then independently searched PubMed (until February 2018) using the following free-text terms: "pleuropericardial cyst", and "pleuropericardial cyst" AND "treatment" OR "symptoms" OR "location" OR "intervention" OR "surgery" OR "case". We then searched for pericardial cysts as well. We included case reports, abstracts, editorials, and articles in all languages describing the location, symptoms, treatment, intervention, or surgery in patients with a PPC. The database created from the electronic searches was compiled in a reference manager program (EndNote X8), and all duplicate citations were eliminated. The following data were collected: (1) publication details, such as title, authors, and other citation details; (2) patient data, such as age, sex, and symptoms; (3) details of the PPC (location, size, and approach); (4) data on intervention or surgery; (5) follow-up data.

All in all, we reviewed 139 publications and found 101 cases of PPCs. As described in the "Discussion", plenty of terms have been used to describe a PPC. Review references are therefore mostly based on the terms "pericardial" and "pleuropericardial", and we decided to include the following in Table 1: 1) all the published cases referred to as "pleuropericardial cysts" in the title; 2) all the published cases referred to as "pericardial cysts" in the title which are called "pleuropericardial cysts" either in other review articles or even later in the text of the same article. After collecting the data from all the cysts referred to as PPCs (101 cases from 47 publications), we carried out a univariate statistical analysis of the following parameters: age of initial detection, gender, cyst size and location, method of initial detection, and presence of symptomatology. Cases with missing data were excluded from the analysis.

Results

As far as the mean age of initial detection is concerned, our statistical analysis showed that this was approximately 48.7 ± 17.2 years, ranging from 3 to 76 years, in 50 cases out of 101. Regarding gender, the female:male ratio was calculated to be about 3:2 (29:21) among 50 cases, while the most frequent location was the right cardiophrenic angle (RCPA), accounting for 39.6%, followed by the left (LCPA) at 18.9%, among 53 patients.
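A minimal sketch of how such a univariate summary can be computed from the case table; the DataFrame and its column names are hypothetical stand-ins for the data abstracted from the published reports.

```python
import pandas as pd

# Hypothetical rows distilled from published case reports
cases = pd.DataFrame({
    "age": [53, 44, 61, 37, 70],
    "sex": ["F", "M", "F", "F", "M"],
    "location": ["RCPA", "RCPA", "LCPA", "RCPA", "other"],
    "max_diameter_cm": [8.0, 12.5, 5.0, 9.0, 6.5],
})

# Mean +/- SD over cases where the value was actually reported
for col in ("age", "max_diameter_cm"):
    vals = cases[col].dropna()
    print(f"{col}: {vals.mean():.1f} +/- {vals.std(ddof=1):.1f} (n={len(vals)})")

# Female:male counts and location frequencies as percentages
print(cases["sex"].value_counts())
print((100 * cases["location"].value_counts(normalize=True)).round(1))
```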
CXR was the method used for initial detection in 49/101 cases; the percentage was probably higher, as the method of initial detection was not mentioned in 41 cases. Symptomatology was present in 37 cases out of 101. Finally, the mean maximal diameter was 8.3 ± 3 cm, varying between 1.5 and 17 cm, in 42 out of 101 patients.

Discussion

Plenty of terms have been employed in the literature to characterise PPCs. Most have been used to define the localisation, the contents, the histology, or the pathogenesis of such cysts. These terms are: pericardial coelomic cysts, 6 pericardial cysts, hydrocele of the mediastinum, simple cyst of the mediastinum, serosal cyst, spring- or clear-water cyst, 7 para-pericardial cyst, pleuropericardial cyst, pleural cyst, and mesothelial mediastinal cyst.

PPCs are usually congenital in origin, but other causes such as inflammation (rheumatic pericarditis, bacterial infection, particularly tuberculosis, and echinococcosis), trauma, previous cardiac surgery, and chronic haemodialysis have been reported. [12-17] Congenital PPCs usually originate from failure of fusion of one of the mesenchymal lacunae that form the pericardial sac during embryogenesis, after the third week of gestation. 6,18 Another theory of the pathophysiology of such cysts explains the origin of PPCs by means of differential perseverance and graded constriction of the ventral recess of the pericardial coelom. The ventral parietal recess is a diverticular structure where most PPCs are located. Perseverance of this structure forms the diverticulum; constriction of its proximal part results in either a diverticulum with a narrow neck or a PPC in communication with the pericardial cavity, while complete closure of the proximal recess gives rise to a PPC. 1 As far as inflammatory cysts are concerned, they develop as a result of loculated pericardial effusion. 12

PPCs account for 5-10% of mediastinal tumours and 11% 2 or 30% 4,5 of mediastinal cysts. Prevalence is approximately 1 in 100,000 persons, 8 and they constitute the second most common type of primary mediastinal cyst after bronchial ones. 19,20 All ages may be affected, but PPCs are most frequently identified between the third and fifth decades of life, while they are rarely detected in childhood. [21-24] More specifically, fewer than 20 cases in children have been reported in the literature. 25 As far as gender is concerned, the female:male ratio varies among many studies and has been described as 1:1. 22

PPCs can occur in any compartment of the mediastinum, 29 but are usually detected in the visceral mediastinum 30,31 attached to the parietal pericardium. The most frequent site is the right cardiophrenic angle (51-75%), followed by the left (28-38%). [19-21],32,33 Those PPCs occurring elsewhere than the cardiophrenic angles (8-16%) are usually superior to the heart and right-sided. 34 A frequent site is the right latero-tracheal region. 35 In this case, the cyst, originating from the upper recess of the pericardium, extends posteriorly from the pericardial cavity around the ascending aorta. 36,37
Other unusual sites that have also been reported include the other two mediastinal compartments, the vascular hila, the subcarinal area, and the left heart border. 33,[38-42] Moreover, PPCs detected in locations remote from the pericardium are believed to be pedunculated, with a stalk that connects them to the pericardium. 43,44 Almost 5% of PPCs are in communication with the pericardium through identifiable tube-like structures. 45 However, other studies indicate that they are always attached to the pericardium directly or by a pedicle, 10,19,20 although a visible connection between the cyst and the pericardium is rarely detected. 46,47 Finally, a PPC may occasionally present as a mobile chest lesion, described by the term "wandering PPC", 48 in which case it should be differentiated from solitary fibrous tumours of the pleura, which are the most common mobile chest masses. 49

PPCs usually have a diameter of 2-15 cm 44,50-52 and weigh 100-200 g. 25 However, there have been reported cases of up to 28 cm, 53 or even larger cysts containing 1300 mL of fluid 34 or measuring 25 × 37 × 5 cm, 54 while others were as large as a grapefruit. 55

The majority of PPCs are asymptomatic (50-75%) in adults and are found post-mortem or incidentally on routine CXR. 38,56 However, two thirds of children diagnosed with a PPC develop symptomatology. 8 When they are symptomatic, in general due to increasing size and consequent compression of or invasion into nearby organs, the symptoms are generally dominated by respiratory signs, such as dyspnoea, stridor, wheezing, chest discomfort (including vague chest pain, heaviness, retrosternal pressure, or substernal pain), persistent cough, sputum, haemoptysis, dysphagia, and epigastric pain. Circulatory signs such as tachycardia, palpitations, fatigue, cyanosis, and weakness may also be found. 8,19,31,44,47,50,[57-61],62 Signs of nerve compression may also appear, presenting as hoarseness due to unilateral vocal cord paralysis resulting from compression of the left recurrent laryngeal nerve. 63

Complications

The mediastinum is a narrow, non-extendable space. Consequently, every mediastinal mass, including a PPC, can compress adjacent organs, and this can occasionally lead to complications as well as life-threatening emergencies. 9,14,37,58,64,70,71

Diagnostic approach - Differential diagnosis

The diagnostic approach to PPCs is based on the clinical presentation and the results of imaging studies. However, the fact that PPCs may be clinically and radiologically similar to other mediastinal lesions makes the diagnosis challenging. The location and nature of mediastinal lesions are very important for the differential diagnosis. The differential diagnosis of PPCs is quite wide and includes not only lesions found in the middle mediastinum, where PPCs are most commonly identified, but also lesions occurring in the other two mediastinal compartments. 9,11,29,[72-76]

A CXR can localise as well as identify PPCs by means of posterior-anterior and lateral views. However, a disadvantage of this method is that it cannot provide much information about the morphology and extent of the lesion. Further imaging studies, such as MDCT, MRI, ultrasonography, angiography, and positron emission tomography (PET) scan, are used to complement and corroborate the initial diagnosis or suspicion. 40,58,77
With CXR, PPCs are demonstrated as teardrop formations on lateral views, as the cysts tend to adjust to the medial aspect of the pulmonary fissure. Furthermore, this projection can depict the alteration in shape and the mobile nature of fluid-filled PPCs during respiration or postural changes. 41,51,64,78,79,80 In the postero-anterior projection, PPCs usually appear as rounded or oval opaque shadows with uniform density, well-defined borders, and no calcification. 77 PPCs can also take on different and unusual radiologic appearances, such as a dumbbell shape. 20,28,81

MDCT, with or without contrast, remains the gold standard for further investigation of a mediastinal mass. 14 It estimates the size and nature of the mass, defines its position within the mediastinum and its relationship to the adjacent structures, and provides valuable information about its morphology as well as its extent. 82,83 On a CT scan, a PPC is a thin-walled, well-marginated, oval, homogeneous mass, usually unilocular, although multilocular cysts have also been reported. 8 Their attenuation is low (0-20 HU), although sometimes it may be a little higher than water density (30-40 HU). This is probably because of a high protein and cell content due to bleeding or infection. 84 As they are commonly avascular, 25 they are not enhanced by contrast agents. 46,85 Other atypical CT findings include the presence of calcification, a sharp upper border, 25,44,86 and the presence of an associated pericardial effusion. Moreover, MDCT can show stalks connecting PPCs to the pericardium, so a certain diagnosis can be established even for PPCs in unusual locations. 48,[87-93] PPC torsion can also be depicted on a CT scan as a soft-tissue mass with an internal intertwining of fat and soft-tissue attenuation, called the "whirl sign", which was first described in intestinal volvulus. 28,94,95

Two-dimensional echocardiography was first used to detect a pericardial cyst. 96 Transthoracic, and in some cases transoesophageal, echocardiography is a superior noninvasive method which can accurately depict the PPC's position and distinguish it from other possible diagnoses (solid tumours, fat pads, and coronary, ventricular, or aortic aneurysms). 14,20,96 Ultrasonographically, PPCs typically appear as homogeneous, anechoic, thin-walled masses. 29,67 Transoesophageal echocardiography can be helpful especially for PPCs in unusual locations and in cases of haemodynamic compromise, in order to confirm a suspected compression of the large vessels or the cardiac cavities. 97,98 Finally, ultrasonography can establish the diagnosis of PPCs prenatally, beyond the 14th week of gestation. 99

Differential diagnosis of PPCs (partial table):
1. Congenital cysts of primitive foregut origin (bronchogenic cyst, enterogenic cyst, and oesophageal duplication cysts)
2. Bronchial cysts
3. Localised pericardial or pleural effusion 9
4. Ventricular aneurysms or aneurysms of the ascending aorta
5. Fluid-filled superior aortic recess 134
17. Lymphomas
18. Mesenchymal tumours (sarcomas, 44 haemangiomas, and lymphangiomas 133)
19. Right middle lobe pathology
20. Morgagni hernia

MRI is similar to CT as far as efficacy in detecting a tumour is concerned. 100 MRI is a useful tool for both the initial diagnosis of a mediastinal mass and post-therapeutic follow-up.
It gives a better anatomical depiction of PPCs, including those in atypical locations, and of their relationships to adjacent structures, including blood vessels, without the use of contrast material 20. Thus, it is helpful in differentiating PPCs from vascular anomalies such as aortic aneurysms 37,101,102. MRI findings are diagnostic, showing a smooth-walled and well-defined structure with high signal intensity on T2-weighted images, low-to-intermediate signal intensity on T1-weighted images and no enhancement after intravenous contrast administration 85,93,103,104. High signal intensity is rarely seen on T1-weighted images, in the case of high protein content 20. Furthermore, MRI should be the method of choice in children and infants 104.

Arteriography helps define whether the lesion in question is part of a vascular structure 105. In cases where the diagnosis remains challenging, cyst puncture and subsequent injection of a contrast material has been used for diagnostic and therapeutic reasons 106. Finally, two incidental detections of PPCs by means of I-131 total body scan, due to the uptake of I-131 through the pericardial serosa, have been reported 107,108.

Therapeutic approach

The management of all mediastinal cysts can vary from conservative follow-up and percutaneous aspiration, with or without ethanol, minocycline or doxycycline injection, to surgical treatment by means of interventional thoracoscopy or thoracotomy 43,109,110,146. PPCs are commonly asymptomatic and most of them can be left without treatment. So, in the case of an asymptomatic patient and an undoubted radiological diagnosis of a PPC, conservative management with cautious follow-up by means of non-contrast low-dose CT, ultrasound or MRI is advised 38,70,98,111. Although there are no specific guidelines concerning either the duration or the frequency of the follow-up, and the information about safety is poor, it is widely suggested to take into consideration the stability of the patient (new symptoms, complications) as well as of the cyst (size) in order to decide how to continue the management. The longest described follow-up lasted 25 years, and eventually a 2.5 L cyst was resected 52.

Treatment is indicated in the case of symptomatic cysts, large asymptomatic cysts, uncertain diagnosis or possible malignant potential, atypical location (such as close to large vessels), high density on CT, the presence of complications, or patient concern; such treatment is required in order to prevent life-threatening emergencies such as airway and/or haemodynamic impairment 23,32,40,41,[112][113][114]. Thus, any anterior mediastinal lesion should be considered potentially malignant and should be surgically excised as soon as possible 115. Surgical excision of the cyst has been considered the gold standard of management, especially in complicated cases, with excellent outcomes 98,116. It is worth noting that although cardiopulmonary bypass is not usually required for PPC removal, it should be on standby, mainly in case of possible cardiac compression, erosion of the right ventricular free wall or if extensive cardiac manipulation is required 41. Partial cyst resection is also recommended in the case of tight adhesions to the nearby structures 116. Apart from traditional open surgery, resection of mediastinal masses including PPCs has been carried out successfully by VATS or VATS with mini-thoracotomy since 1992 26,32,111,[117][118][119][120][121][122].
These minimally invasive procedures reduce surgical trauma and postoperative pain compared to open surgery, leading to a shorter period of recovery and hospitalisation 123,124. Furthermore, the Harmonic Scalpel, an ultrasonically activated scissor, is recommended for performing VATS more quickly 125. However, VATS also has limitations, especially for removing anterior and upper mediastinal lesions, as it gives only a limited view of the area of interest 4,122,[126][127][128]. In addition, thoracoscopy should be an option for treatment only in the case of well-encapsulated masses <6 cm in size, although successful resections via VATS of larger ones have been reported 52,111,126. Robotic surgery using the da Vinci™ Robotic System is another minimally invasive therapeutic modality which has proved to be safe and useful, but its cost remains a strong limitation 129. Taking the above into consideration, small-to-moderate-sized and typically located PPCs could be safely and successfully removed by these modern surgical procedures.

Percutaneous aspiration of the PPC contents by a thin needle puncture under ultrasound or CT guidance has been used for both diagnosis and therapy 12,84,106,110. However, complications such as vascular injury, pneumothorax, anaphylaxis, and infection have been reported, and recurrence has been recorded in about one third of patients 57,110,[130][131][132]. Thus, percutaneous aspiration of such cysts should be performed only in case of comorbidities that contraindicate surgery, when there is a need for temporary decompression before the removal of a large symptomatic cyst 9,12,133, when there is a suspicion of a tubercular PPC in order to confirm the diagnosis preoperatively 13, or when a patient refuses surgery.

To sum up, the management of PPCs is based on an algorithm. The cyst's size, shape and compressibility, along with the clinical presentation and the patient's fitness and preference, should be taken into consideration so that the appropriate management can be chosen 9.

Prognosis

The absence of symptomatology is an indicative sign of good prognosis 14, while post-resection prognosis is excellent, with low rates of morbidity and mortality 87,98. Only one case of recurrence after excision has been documented 78.

Limitations

After reviewing the literature and attempting to statistically analyse the data from table 1, we identified the following limitations:
1. Regarding the nomenclature and the classification of mesothelial cysts, plenty of terms have been used to describe a PPC. Thus, review references are mostly based on the terms "pericardial" and "pleuropericardial", and the inclusion criteria are mentioned in "Methods". That may have affected our results, as some cysts which are referred to with a different term have been excluded.
2. We had to exclude many cases from the pool of PPCs, as there were no specific data with regard to the examined parameters.
3. We chose to calculate the mean maximal diameter as an objective measurable feature of size. However, our result concerning this parameter may be biased and overestimated, given that our pool of cases of cysts with known size consisted mainly of symptomatic cysts (30 out of 42 patients), which are generally supposed to be larger.

Conclusion – Recommendations

PPCs are rare and usually clinically silent, but can occasionally cause life-threatening complications.
The majority of them are congenital, due to developmental deficits, and are most commonly found incidentally via routine radiography between the third and fifth decades of life. In this study, we found that the mean age of initial detection is roughly 48.7 years, the mean maximal diameter is 8.3 cm and the female:male ratio is approximately 3:2, which is in line with the literature. The RCPA constitutes the most common location, according to our statistical analysis. MDCT is recommended as the method of choice in all cases, while cardiac MRI can be useful when diagnosis is more challenging. The management algorithm of PPCs can be divided into two main categories, based on whether there is symptomatology or not. The presence of symptoms depends on the cyst's size and on any compression exerted by the mass.
1. In the case of a small asymptomatic PPC that does not cause compression, follow-up with serial transthoracic echocardiography is recommended.
2. Apart from symptomatic and/or complicated and/or large PPCs, surgery is also recommended in the case of an initially asymptomatic PPC which grows in size, in order to prevent complications and life-threatening emergencies. The patient's concern constitutes a relative indication for surgical management 9.
Surgical resection by means of traditional open surgery or minimally invasive methods is considered to be the gold standard, and this, along with percutaneous aspiration, are the methods that have mostly been used. Percutaneous aspiration and ethanol sclerosis are recommended for large symptomatic PPCs while the patient is waiting for surgery.
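To make the decision flow above concrete, the following is a minimal sketch of the two-branch management algorithm expressed as a decision function. The function name, the inputs and the reuse of the 6 cm thoracoscopy threshold quoted earlier are illustrative assumptions for exposition, not a validated clinical rule.

```python
def ppc_management(symptomatic: bool,
                   complicated: bool,
                   growing: bool,
                   diameter_cm: float,
                   certain_diagnosis: bool,
                   fit_for_surgery: bool,
                   patient_prefers_treatment: bool = False) -> str:
    """Illustrative triage of a pleuropericardial cyst (PPC),
    following the two-branch algorithm described in this review."""
    needs_treatment = (symptomatic or complicated or growing
                       or not certain_diagnosis or patient_prefers_treatment)
    if not needs_treatment:
        # Small, stable, asymptomatic cyst without compression
        return "serial transthoracic echocardiography follow-up"
    if not fit_for_surgery:
        # Comorbidities contraindicating surgery, or patient refusal
        return "percutaneous aspiration (with or without sclerosis)"
    if symptomatic and diameter_cm > 6:
        # Large symptomatic cysts: aspiration may bridge to resection
        return "aspiration and ethanol sclerosis while awaiting surgery"
    # Well-encapsulated cysts <6 cm suit minimally invasive resection
    return "surgical resection (VATS if <6 cm and well encapsulated)"
```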
Opposing regulation of endolysosomal pathways by long-acting nanoformulated antiretroviral therapy and HIV-1 in human macrophages

Long-acting nanoformulated antiretroviral therapy (nanoART) is designed to improve patient regimen adherence, reduce systemic drug toxicities, and facilitate clearance of human immunodeficiency virus type one (HIV-1) infection. While nanoART establishes drug depots within recycling and late monocyte-macrophage endosomes, whether or not this provides a strategic advantage towards viral elimination has not been elucidated. We applied quantitative SWATH-MS proteomics and cell profiling to nanoparticle atazanavir (nanoATV)-treated and HIV-1 infected human monocyte-derived macrophages (MDM). Native ATV and uninfected cells served as controls. Both HIV-1 and nanoATV engaged endolysosomal trafficking for assembly and depot formation, respectively. Notably, the pathways were deregulated in opposing manners by the virus and the nanoATV, likely reflecting viral clearance. Paired-sample z-scores of the proteomic data sets showed up- and down-regulation of Rab-linked endolysosomal proteins. NanoART and native ATV treated uninfected cells showed limited effects. The data were confirmed by Western blot. DAVID and KEGG bioinformatics analyses of the proteomic data showed relationships between secretory, mobility and phagocytic cell functions and virus and particle trafficking. We posit that modulation of endolysosomal pathways by antiretroviral nanoparticles provides a strategic path to combat HIV infection.

Background

Long-acting nanoformulated antiretroviral therapy (nanoART) is emerging as an important part of the treatment armamentarium for human immunodeficiency virus type one (HIV-1) infection [1][2][3][4]. While our prior studies defined both a platform for drug delivery and the trafficking mechanisms operative for nanoART in monocyte-macrophages, how these cells can be harnessed as drug vehicles for improved antiretroviral responses has not been realized [5][6][7][8][9]. Indeed, human monocyte-derived macrophages (MDM) serve as nanoART carriers, extending ART half-life and drug stability [10][11][12]. Such cell-based drug delivery strategies may also decrease systemic drug toxicities [13,14]. We posit that endolysosomal pathways can serve as Trojan horses for viral persistence or as vehicles for its elimination. If correct, facilitated viral replication and the means to eliminate it may occur at identical subcellular locales. The operative nanoART response would facilitate drug delivery by bringing the medicine to the site of viral replication and assembly within mononuclear phagocytes (MP; monocytes, macrophages and dendritic cells). To investigate this apparent mechanistic paradox, functional proteomic tests were employed to uncover how drug particles affect the HIV-1 replication cycle beyond nanoART activity. The intracellular trafficking pathways held by the virus and nanoART were investigated by Sequential Windowed data independent Acquisition of the Total High-resolution Mass Spectra (SWATH-MS) profiling. This technique was applied to obtain a broader picture of complex nanoART-HIV interactions. The method was previously employed in our laboratory and others to identify and quantify cellular peptides on a larger scale [15][16][17][18][19]. While past transcriptomic and proteomic analyses were applied to study virus-cell interactions [16][17][18][19], they have failed to uncover key proteins affected by targeted antiretroviral treatments.
Herein, we identified deregulated cellular proteins affected by nanoatazanavir (nanoATV) in HIV-1-infected MDM. Comparison was made between nanoformulated and native ATV treatments as related to the proteomic effects induced by HIV-1. Common cellular proteins with coordinated molecular, biochemical and biological functions were altered in virus-infected and nanoATV-treated cells. These were linked to phagosome signalling pathways clearly associated with the endosomal and lysosomal compartments. Specifically, opposing expressions of Rab5 and −7 and LAMP1 were seen in HIV-1 infected and nanoATV-treated cells. Notably, the downregulation of late and recycling endosomes and LAMP1 indicated that pathways that could be employed, in measure, for viral assembly and nanoparticle lysosomal degradation were affected. Through cross validation of proteomics, cell biology and protein chemistry, our data provide novel insights into how nanoART facilitates viral clearance while establishing long-lived cell-based depots, in contrast to native drug. These works represent a previously unknown mechanism for how long-acting nanoART provides a strategic advantage to combat viral infection.

Proteomics analyses of HIV-1 infected MDM

HIV-1 infection engages a spectrum of cellular proteins seen in specialized cell populations that support its replication [20][21][22][23]. The effect of nanoART on cell protein expression has not yet been defined in its target macrophage. To such ends, we applied quantitative SWATH-MS proteomics followed by bioinformatics to uncover proteins deregulated by native ATV or nanoATV, with or without HIV-1 infection. For these experiments, MDM were first infected with HIV-1 ADA, and four hours later medium was removed and cells were treated with 100 μM P407-ATV. Following 16 hours of drug treatment, the media was replaced with drug-free fluids and cells were harvested for proteomic tests after an additional seven days. This experimental paradigm was followed to assess the role that the antiretroviral delivery system had on the macrophage proteome during spreading viral infection. To separate the effects of the antiretroviral drug, the nanoparticle and the viral infection, separate and combined analyses of each of these were required. MDM were infected with HIV-1 at a MOI of 0.1, then treated with 100 μM native- or nanoATV. After seven days, cells were harvested and SWATH-MS was performed on whole cell lysates [15]. Because of the expansive proteomic data sets and analyses based on the biological response variables amongst the nanoATV, ATV and HIV-1 treatments, the data are presented in 7 additional files and two figures. Figure 1 illustrates differences in protein expression between HIV-1 infected cells with or without nanoART as compared to controls (uninfected, untreated MDM), and Figure 2 illustrates differences in protein expression between uninfected cells treated with native- and nanoATV compared to controls. Quantitative profiling identified 527 significantly changed MDM proteins following HIV-1 infection. These were up- or down-regulated (p < 0.05), as assessed by paired-samples z-scores (Additional file 1). The number of proteins exhibiting changed expression in HIV-1-infected cells was greater than in replicate infected cells treated with nanoATV, 527 versus 376 respectively (Additional file 2A). Up- and down-regulated proteins in HIV-1 infected cells were 41 and 59% of the total (n = 216 and 311, respectively).
In contrast, for nanoATV-treated HIV-1-infected MDM, up- and down-regulated proteins were 59 and 41% of the total (n = 222 and 154, respectively). Uninfected cells treated with nanoATV had fewer deregulated proteins (n = 195) compared to the other groups (Additional file 1). The proteins uncovered were entered into the PANTHER database, which sorted the deregulated proteins by class. This illustrated the relative numbers of proteins in each class for the HIV-1-infected and the infected and nanoATV-treated cells (Additional file 2B), and showed that the number of deregulated proteins in HIV-1-infected versus infected and nanoATV-treated cells was greater for each of the classes (nucleic acid binding, hydrolase, transferase, protease, signalling molecule, transporter, transcription factors and ligases). These results demonstrate that the deregulation of cellular proteins by HIV-1 infection can be altered by nanoATV treatment.

To uncover the function of the classes of proteins deregulated by HIV infection, we examined the functional categories by PANTHER classification. These data are based on Gene Ontology annotations (GO molecular function, GO biological processes and GO cellular component). Additional file 3A shows the proteins sorted according to molecular function, with the percentages of total deregulated proteins classified for subgroups. Common proteins were separated based on binding (28 to 31%), enzyme regulator (5%) and transporter (3–4%) activities between the nanoATV, HIV-1, and HIV-1 and nanoATV treated MDM. Proteins included Rab5 and −7 (GDP/GTP and protein binding) and LAMP1 (enzyme and protein binding). The classification revealed enrichment for metabolic, cellular, localization, regulation and cellular organization processes. These were the principal categories of protein sets (Additional file 3B). The relative ratios for the proteins were similar amongst groups. GO for cellular component showed that deregulated proteins sorted to cell organelle and macromolecular complex categories (Additional file 3C). Note that such groupings were common to HIV-1 and HIV-1 and nanoATV treated cells. The data demonstrated that HIV-1 and HIV-1 and nanoATV, as compared to control, affect similar cellular processes. However, the numbers of proteins in each were reduced following infection and nanoATV treatment.

KEGG pathway analyses for HIV-infected MDM

The proteins were further mapped to functional pathways using the KEGG database, which indicated phagosomes as one of the main pathways related to HIV-1 infection and nanoATV treatment. First, we identified the effect of HIV-1 infection, as compared to HIV-1 infected nanoART-treated MDM, on the phagosome network (Figure 1A and B, respectively). Few proteins were deregulated with nanoATV treatment. However, more proteins were deregulated during HIV-1 infection and nanoATV treatment. Notably, there was an opposite regulation of proteins within the phagosome and endosomal compartment between HIV-1-infected and HIV-1-infected and nanoATV-treated MDM. Up-regulation of Rab5 and −7 proteins was observed in HIV-1 infected cells; in contrast, these same proteins were down-regulated in nanoATV-treated HIV-1 infected cells (pink = increased expression, blue = decreased expression). A similar pattern for LAMP1 was also observed. Moreover, DAVID functional enrichment clustering gave similar enrichment results for lysosomes in HIV-1-infected and HIV-1-infected and nanoATV-treated cells by filtering the data sets at a P value <0.01 (Additional files 4, 5, 6).
There was a downregulation of endosomal and lysosomal proteins in the uninfected cells treated with native ATV (Figure 2A) or nanoATV; the latter showed few down-regulated proteins in parallel endosomal compartments (Figure 2B). HIV-1 and native ATV treatment was similar to HIV-1 alone (i.e. a high number of altered proteins), with fewer oppositely regulated proteins compared to HIV and nanoATV (data not shown). A composite of these protein network changes is summarized in Figure 3.

Protein-protein interaction networks

To elucidate the operative and dynamic biological processes, methods depicting changes in protein interaction networks are needed. Here we took our data from SWATH-MS and applied them to the STRING bioinformatics tool to determine the dynamics of protein interactions after HIV-1 infection with or without nanoATV treatment. Differentially expressed proteins in infected or infected and treated MDM were identified using a P-value <0.05, and protein-protein interaction networks were constructed. The results of the networking reproduced a consistently high number of cellular proteins altered by HIV-1 (Additional file 7A). The complex changes and interactions during HIV-1 infection are clearly visualized and, more importantly, the reduced complexity is clear when the infected cells are treated with nanoATV (Additional file 7B). These dynamic changes following HIV-1 infection and nanoATV treatment provide evidence of the importance of antiretroviral therapy in controlling protein-protein interaction networks.

Antiretroviral activities of native and nanoformulated ATV

To confirm the antiretroviral activity of nanoATV treatment, HIV-1 reverse transcriptase (RT) activity was determined in HIV-1-infected human monocyte-derived macrophages (MDM) treated with either native- or nanoATV. Cells were treated with 10, 100 or 250 μM of native- or nanoATV for 16 hours. At this time the medium was removed, cells were washed 3 times with phosphate buffered saline, and fresh medium without drug was added prior to HIV-1 ADA challenge at a multiplicity of infection (MOI) of 0.1 at days 0, 5 and 10 after treatment. Infected cells were cultured for an additional 7 days and RT activity in the culture medium was determined. Significant differences were found between cells treated with native- or nanoATV. For native ATV treated cells, RT activity was suppressed only in the day 0 infection group, at all treatment concentrations. At 10 and 100 μM, little antiviral suppression was observed in the day 5 and day 10 infection groups. In contrast, for cells treated with nanoATV, RT activity was suppressed to less than 20% of the HIV-1 positive control at all treatment concentrations and at all infection days (Figure 4).

Figure 1 Schematic representation of the MDM phagosome network identified in HIV-1-infected (A) and HIV-1-infected and nanoATV-treated cells (B). Proteins identified were compared against control uninfected MDM cultures (p < 0.05). The acquired profiles were analyzed using a comprehensive set of functional annotation tools to uncover the biological data sets behind the uncovered list of genes. The Database for Annotation, Visualization and Integrated Discovery (DAVID) facilitated the linking of sets of enriched, functionally related protein groups. This tool was employed to identify enriched biological processes among the expressed proteins.
Gene Ontology terms were used to identify related pathways with the assistance of the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The KEGG database facilitated the elucidation of the functions of the MDM as derived from the proteomic datasets. Statistical significance was determined using a p-value < 0.05. Proteins in red and blue display up- and down-regulation, respectively. Proteins in green belong to the phagosome network and were not deregulated by ATV treatment. The differences in protein up- and down-regulation between HIV infection alone and HIV infection with nanoATV treatment are circled in red.

Endolysosomal proteins deregulated by HIV-1 and nanoATV

We selected the proteins from the KEGG pathway related to the phagosome and endosomal compartment which were oppositely regulated by HIV-1 and HIV-1/nanoATV (Rab5, −7, −11 and LAMP1) for validation of the proteomics analysis. Protein expression of Rab5, −7, −11 and LAMP1 was determined by Western blot. As shown in Figure 5, there was a down-regulation in Rab5, −7, −11 and LAMP1 protein expression in the nanoATV-treated infected cell group. This effect was greater than that seen with native ATV treatment, validating the KEGG analyses, and was time-dependent, highlighting the dynamic nature of endosomal trafficking based on macrophage differentiation, viral infection and nanoATV treatments. Notably, assay of Rab protein levels at multiple days following HIV-1 infection and antiretroviral treatment revealed that, while nanoATV induced a significant down-regulation of endosomal and lysosomal proteins, the effects paralleled what was observed in uninfected cells. Moreover, the downregulation was significant, as it was sustained over 10 days. To better assess the potential relationships between the virus, Rab proteins and nanoART, we used immunofluorescence to visualize whether co-localization of Rab7 or LAMP1, HIV-1 p24 and nanoATV could occur. Immunofluorescence co-localization (Figure 6) demonstrated that HIV-1 p24 (yellow), Rab7 or LAMP1 cellular proteins (red) and nanoATV (green) were in the identical cellular locale. These results highlight the fact that the endosomal trafficking routes taken by the virus and the nanoATV are identical. Most importantly, the results support the idea that HIV-1 and nanoATV, while present in the identical subcellular locale, influence endosomal trafficking in opposite ways.

Cytokine profile for HIV-1 and nanoATV

To assess the activation state of the MDM, cytokine production was determined in HIV-1 infected cells with or without nanoATV treatment. Cell culture media from nanoATV treated and untreated HIV-1 infected and uninfected MDM were incubated with capture beads for IL-12, TNF, IL-10, IL-6, IL-1β and IL-8 and a detection fluorochrome. Acquisition was performed by FACSArray cytometry. IL-12 and TNF were increased in HIV-1 infected MDM. However, when the infected cells were treated with nanoATV, cytokine levels were reduced (Figure 7), implying a positive correlation with endosomal and lysosomal protein expression in our analysis mentioned above. NanoATV treated infected macrophages also expressed higher levels of IL-6 and IL-8, compared to infected and uninfected cells. There were no significant changes for IL-10 and IL-1β (data not shown). These results showed a negative correlation in the expression of IL-12 and TNF between treated and untreated HIV-1 infected MDM, suggesting a role for nanoATV as a regulator of pro-inflammatory cytokines.

Figure 2 Changes in the MDM phagosome network for uninfected cells treated with native ATV (A) or nanoATV (B).
Proteins were compared to uninfected and untreated MDM controls (p < 0.05), and bioinformatics analysis was performed following the parallel procedures described in Figure 1. Proteins in red and blue display up- and down-regulation, respectively. Proteins in green belong to the phagosome network but were not significantly altered by viral infection or treatment.

Discussion

While it is well known that HIV-1 alters cellular nucleic acid binding and regulatory protein functions, affecting its transcription and translation [15,[24][25][26], how such virus-host cell interactions are altered when viral replication is attenuated by nanoART is not understood [11]. The effect of nanoART alone was also investigated here. To this end, we used functional proteomics, cell biology and protein chemistry to investigate potential interactions between the virus, the host cell and nanoART, and to elucidate how metabolic and signaling pathways can be engaged to both support viral replication and, at the same time, affect its elimination. Endolysosomal pathways were uncovered and found to be regulated antagonistically in HIV-1 infected cells with nanoATV treatment. Notably, the results highlight how the cell can be manipulated to either facilitate or inhibit viral growth. The data also serve to highlight unique cellular processes engaged in both the viral replication cycle and the means to attenuate it.

Within an infected cell, virions are formed in association with the cellular membrane. Initial investigations of HIV-1 assembly in macrophages were done through electron microscopy studies and suggested that new virions were formed from the limiting membrane of a late endosomal compartment linked to vesicle formation, as is known to occur in multi-vesicular bodies (MVB), but not at the plasma membrane known to be operative in T cells [27]. The linkage between the Endosomal Sorting Complex Required for Transport (ESCRT), MVB biogenesis and viral budding is well known. In past years it was thought that HIV-1 budding is linked to ESCRT through the presence of late endosomal markers associated with macrophage-derived virions. However, the model has now been re-examined, with several recent reports showing that the viral compartment has a neutral pH and can be connected to the plasma membrane by micro-channels [28][29][30][31][32][33]. We posit that virion trafficking and viral budding can be independent but not mutually exclusive pathways, and showed that both are likely operative. First, we now show through cross validation of proteomics, cell biology and protein chemistry that the endolysosomal machinery is significantly deregulated by HIV-1 infection, and in an opposite manner by nanoART treatment. Second, there is a close association between endosomal-linked pathways that include Rab5, −7 and −11 and viral infection. Third, the fact that this pathway is conversely down-regulated by HIV-1 infection likely reflects that the ability of the virus to hijack ESCRT is augmented by the drug nanoparticles. Such a theory was previously put forward in our own past works [11]. In regard to assembly and intracellular accumulation of progeny HIV-1, we need not discount the elegant work, performed by immunofluorescence microscopy and immunoelectron microscopy, showing that the organelles of an internally sequestered plasma membrane domain are divergent from endosomes.
Both pathways can be operative in this scenario and are not mutually exclusive of one another. Nonetheless, there is little question that virions are endosomal-associated. Progeny virions associated with these proteins are pulled down, and reverse transcriptase activities are reduced significantly in parallel structures. While we did not study the tetraspanins CD81, CD9, and CD53, their regulation in phagocytosis or intracellular trafficking is appreciated [34]. It is noted that CD81 is linked to activation of mononuclear phagocytes, notably microglia [35]. Moreover, they are also involved in the formation of multinucleated giant cells [36], an additional major feature of viral infection in macrophages. Rab proteins function to evade degradation, direct transport to intracellular locations, and utilize host vesicles to effect a stable intracellular niche for microbial stability and longevity [37]. Thus, it is not surprising that HIV-1 would induce the Rab pathways. While in uninfected macrophages the proteins are at the cell surface and in intracellular vacuole-like structures with a complex content of vesicles and interconnected membranes, these compartments are in a dynamic state within the cell and strongly regulated by HIV-1. While we acknowledge that endosome markers could be recruited to the viral structures and incorporated into virions, the dynamic process of the virus and the macrophage transcends progeny virion assembly and includes viral trafficking and transport mechanisms. Such observations, combined with a broader theory of the complexity of virus-cell interactions in infected macrophages, herald the notion that multiple events are operative for virion assembly and persistence [38].

NanoART enters the macrophage primarily through clathrin-mediated pathways and is then stored in endocytic compartments. This provides a protected environment for release of the drug to sites of viral growth. Compared with nanoART, the non-formulated native drug did not show similar trafficking behavior, since fewer endolysosomal proteins were found to be related to native drug treatment. Amorphous native drug can hardly be taken up or stored by the macrophage, nor can it be internalized or carried by subcellular compartments for intracellular transport. The subcellular distribution of nanoformulated ART is in late and recycling endosomal compartments. These same compartments serve as drug depots. Since late endosomes are sites of viral assembly, nanoART stored within such compartments can retain significant antiretroviral activity. This was clearly demonstrated by the significant reductions in HIV-1 RT activity previously observed in isolated endosomal compartments in nanoATV treated HIV-1 infected MDM. Altogether, the current studies suggest a mechanism whereby the endolysosomal pathway is harnessed for HIV-1 viral replication, and this same pathway may provide a means for its elimination. Indeed, while it is known that HIV-1 traffics through Rab5, −7, and −11 endosomal compartments, how such early, late, and recycling endosomal pools regulate stages of the viral life cycle is not understood [39]. Such an intersection, though, is believed critical to the viral life cycle, as the functions of the compartments serve to maintain cell homeostasis and protein transport [40].
It has been reported that Rab5 regulates clathrin-mediated endocytosis from the plasma membrane to early endosome pools and serves as an intersection point for proteins sorted to undergo degradation through Rab7-dependent late endosome and lysosomal routes or to be sorted back to the plasma membrane through Rab11-dependent recycling pathways [39]. The ability to overcome such degradation events at the subcellular level underlies viral persistence in the macrophage reservoir [39]. Moreover, it is well known and accepted that Rab proteins function to evade degradation, direct transport to intracellular locations, and utilize host vesicles to effect a stable intracellular niche for microbial stability and longevity. Similar mechanisms certainly parallel the persistence of nanoART and sustained drug depots for extended time periods. Notably, proteomic tests revealed a large number of proteins deregulated by HIV-1 infection, and these were the same protein sets also affected by nanoART. The protein sets included those affecting nucleic acid binding, hydrolase and enzyme activities, oxidoreductase responses, and the cellular cytoskeletal backbone. These were all characterized by GO molecular function, which placed Rab5 and −7 proteins as those engaged in GTP catabolic processes, endocytosis and small GTPase-mediated signal transduction pathways. The molecular functions for both endosomal-linked proteins are GDP/GTPase activity and protein binding [41]. LAMP1 is associated with autophagy, the establishment of protein localization to cell organelles, Golgi-to-lysosome transport, protein transport along microtubules and regulation of natural killer cell degranulation cytotoxic activities. For the cellular component GO classification, LAMP1 is located at the late endosome, lysosome, multivesicular body and vesicular exosome. It is included in the group of enzyme and protein binding molecular function protein sets. Indeed, as described by GO information, Rab family members are small, RAS-related GTP-binding proteins that regulate vesicular transport, with each Rab targeting distinct membrane compartments.

Figure 5 Western blot of Rab5, −7, −11, LAMP1 and β-actin was performed in cell lysates from MDM treated with native ATV or nanoATV and infected with HIV-1 at day 0, 5 or 10 post-drug treatment, then incubated for 7 days. Uninfected cells and infected cells without drug treatment served as negative and positive controls for differential expression of cellular proteins during HIV-1 infection. Blots shown are from one donor and experiment, and are equivalent to two independent experiments performed.

Figure 6 Subcellular localization of nanoATV, HIV-1 and endolysosomal proteins. Cellular localization of Rab7 or LAMP1 endosomal compartments (red), HIV-1 p24 (yellow) and nanoATV (green) is shown by confocal microscopy. Cell nuclei were stained with DAPI (blue). Merged images showed the co-localization of all proteins. Fluorescence images were acquired with an LSM 510 confocal microscope at 400× magnification.

An important finding in the current study was that Rab5, −7 and −11 and LAMP1 were significantly upregulated in expression following HIV-1 infection. HIV-1 could induce those proteins linked to endosomes.
While endosome markers could be recruited to the viral structures and incorporated into virions, the dynamic process of the virus and the macrophage transcends progeny virion assembly; viral trafficking and transport mechanisms are also strongly affected during the dynamic course of viral infection. Interestingly, Rab5, −7, −11 and LAMP1 deregulation in infected MDM was reversed, in part, by nanoATV. The extent of protein deregulation in infected MDM was reduced by nanoATV. As shown in some studies, Rab5 has a role in endocytosis and post-endocytic trafficking [42]. Its activation promotes focal adhesion disassembly, migration and invasiveness in tumor cells [43], and its knockdown decreases cell motility and invasion through an integrin-mediated signaling pathway [44]. Moreover, it has been indicated that Rab5, −7 and −11 affect RGS4 trafficking through plasma membrane recycling or endosomes [41] and are used by the drug particles and the virus in a coordinated manner. This was seen for other viral infections such as hepatitis B virus, which can affect Rab5 and −7 expression and use these pathways for viral transport from early to mature endosomes, a required step in the viral life cycle [45]. Similarly, in our study, following endocytosis HIV-1 travels through the complex endocytic pathway networks to reach the nucleus and initiate its replication, supporting the notion that endosomal proteins play a critical role in the viral life cycle. Interestingly, Rab5, −7 and −11 and LAMP1 were down-regulated in HIV-1 and nanoATV-treated cells. This opposite regulation of endosomal proteins by HIV-1 and nanoATV is likely important, in that cellular trafficking pathways may also be involved in the release of infectious progeny virus. As such, late endosome-associated Rab7A is known to be required for HIV-1 propagation, regulation of Env processing and the incorporation of mature Env glycoproteins into viral particles [46]. In addition, Rab7A promotes Vpu interaction with BST2/tetherin to facilitate HIV-1 release [47]. This may also be operative for nanoparticle-viral interactions and suggests that, in the present study, nanoATV may disrupt mechanisms of critical cellular protein-protein interactions harnessed during the viral life cycle to perpetuate its growth. In other studies, silencing the expression of Rab9 inhibited HIV replication [48], and silencing endogenous Rab11a GTPase expression could destabilize HIV-1 Gag and reduce virion production both in vitro and in NOD/SCID/γc−/− mice [49]. It has been well documented that Rab11 is located on pericentriolar recycling endosomes and plays a key role in regulating vesicle trafficking through recycling endosomes to the plasma membrane as well as in exocytosis [11,50,51]. Therefore, down-regulation of Rab11, as shown in our study, could destabilize HIV-1 proteins, which would fail to traffic through the endosomal compartments and could be redirected for degradation at the lysosomal site. The deregulation of endosomal proteins suggests a new mechanism for viral suppression by nanoART. This includes altered expression of endosomal proteins resulting in parallel reductions in viral assembly sites. In addition, reduction of LAMP1 following nanoART treatment could reduce degradation of the nanoparticles and therefore extend the half-life of ART. Our data are corroborated by studies demonstrating co-localization of HIV-1 and endosomal and lysosomal proteins (Rab7 and LAMP1) [11,52,53].
Moreover, differences in cytokine profile expression in untreated and nanoATV-treated HIV-1-infected macrophages suggest that nanoATV down-regulated the expression of pro-inflammatory cytokines. HIV-1 has been linked to the up-regulation of cytokines; in fact, HIV-1 Tat upregulates IL-12 and TNF-α and -β expression in monocyte-derived dendritic cells [54,55], suggesting an advantage of nanoATV in the regulation of pro-inflammatory cytokines during HIV-1 infection. In contrast with nanoATV, the effect of native ATV was relatively smaller. Moreover, it has been reported that IL-12 upregulates Rab7 and induces lysosomal transport. Others have reported that Rab proteins are regulated by cytokines and affect TNF secretion by activated macrophages [56][57][58]. These findings provide further support to link Rab, IL-12 and TNF expression. As HIV-1 virions assemble at the plasma membrane and recruit endosomes to enable particle release, nanoATV depletes endosomal/lysosomal proteins and deregulates pro-inflammatory cytokines, thus controlling viral growth. Although a mechanism is now forged to bridge nanoATV activities and endosomal signaling pathways, this study serves as only an entry to future investigations. To this end, we are currently examining the possible signaling pathways deregulated by nanoATV. Altogether, SWATH-MS proteomics, bioinformatics analyses and cell biology showed that nanoATV treatment of HIV-infected MDM can down-regulate the endocytic proteins in HIV-1 infected cells and thus decrease the subcellular space available for viral assembly. Through this mechanism, nanoATV has unique but real potential towards improving virus clearance. Our work articulates commonly used pathways, engaged in common macrophage functions such as phagocytosis and vesicular trafficking, that are used both by the virus and the anti-virus.

Conclusion

HIV-1 and nanoATV deregulate cellular proteins in opposing manners. The common pathways are linked to viral assembly and are endolysosomal-linked. Rab5, −7, −11 and LAMP1 serve to coordinate molecular and biological functions of the virus and the antivirus in subcellular compartments. Alterations made by HIV-1 and nanoATV indicate that specific organelles are action sites for both. These findings provide novel insights into the role played by long-acting, subcellularly targeted nanotherapies for combating HIV-1 infection.

Monocyte isolation, cultivation and HIV-1 infection

Human peripheral blood monocytes were obtained by leukapheresis from HIV-1,2 and hepatitis B seronegative donors and plated at a density of 1 × 10⁶ cells/mL in Dulbecco's modified Eagle's medium supplemented with 10% heat-inactivated human serum, 1% glutamine, 50 μg/ml gentamicin, 10 μg/ml ciprofloxacin and 1,000 U/ml MCSF [61]. After seven days of cell differentiation, MDM were infected with HIV-1 ADA at a MOI of 0.1 infectious viral particles per cell. After 4 hours the medium was removed and cells were treated with 100 μM native- or nanoATV. Following 16 hours of drug treatment, the media was replaced with drug-free fluids and cells were incubated for an additional seven days [9].

SWATH-MS

MDM samples for mass spectrometry were collected seven days after infection and drug treatment. Cells were washed with ice-cold PBS, scraped, pelleted, and stored at −80°C until processed. Cell samples from four donors were processed simultaneously. Cell pellets were resuspended in cell lysis buffer containing 4% (w/v) SDS, 0.1 M dithiothreitol (DTT) and 0.1 M Tris-HCl.
Lysates were vortexed at room temperature for 10 min and then boiled at 95°C for 5 min to denature proteins. Protein quantification was performed using the Pierce 660 nm protein assay (Thermo Scientific; Wilmington, DE, USA) following the manufacturer's protocol. On the basis of protein quantifications, 100–200 μg of each sample was processed using filter aided sample preparation (FASP) [62][63][64]. Samples were denatured with urea exchange buffer (8 M urea, 0.1 M Tris-HCl, pH 8.5), placed into filter cartridges (10 kDa), centrifuged and then treated with 50 mM iodoacetamide (Sigma-Aldrich). Trypsin (Promega; Madison, WI, USA) was added (2 μg/100 μg protein) and incubated at 37°C overnight on the cartridge. Eluted peptides were dried via vacuum centrifugation. Peptides were cleaned using an Oasis mixed cation exchange cartridge following the manufacturer's protocols (Waters Inc.; Milford, MA, USA) and then dried under vacuum. After processing through mixed cation exchange, peptides were subjected to further clean-up using C18 Zip-Tips (EMD Millipore; Billerica, MA, USA) and dried under vacuum. Peptides were resuspended in 0.1% formic acid (Honeywell Burdick & Jackson; Muskegon, MI, USA) and quantified using a NanoDrop 2000 (Thermo Scientific). One μg of peptide was then prepared for SWATH-MS quantitative proteomics analysis, as previously described [15,65]. Samples used to generate the SWATH-MS spectral library were subjected to traditional, data-dependent acquisition (DDA).

Bioinformatics

Each SWATH-MS condition (per donor) was transformed independently of other conditions, and comparisons between the control condition and experimental conditions were calculated. Extracted raw data transformation was performed as described by Haverland et al. [15]. The raw intensity for each protein was transformed by taking the natural log (ln) of the intensity, followed by assignment of a z-score. The p-value for the computed z-score was assigned using the standard normal distribution. Functional analysis and signalling pathway representation were performed using an array of complementary, open-access bioinformatic tools. Functional annotation of the differentially expressed proteins was performed using the Database for Annotation, Visualization and Integrated Discovery (DAVID) Bioinformatics Resources (6.7) and the Protein Analysis Through Evolutionary Relationships (PANTHER) Classification System (9.0), by entering the UniProt sequence feature. The gene ontology (GO) annotations sorted proteins according to Biological Processes, Molecular Functions and Cellular Components. Protein class functional analysis was obtained by PANTHER. Protein-protein interactions among all identified transcription regulators were investigated using the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) (9.1), considering a confidence of greater than 0.4 (medium confidence). Unconnected proteins (orphan proteins) and unconnected satellite networks (networks which were detached from the largest network) were removed. The complementary pathway analysis resource, the Kyoto Encyclopedia of Genes and Genomes (KEGG), was used to determine significant pathways between experimental conditions. The KEGG pathway (71.0) for the phagosome was coloured using the KEGG Mapper colour pathway tool. Green represents all proteins confidently identified, and red and blue colors are assigned to up- or down-regulated proteins, respectively.
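To illustrate the ln-transform and z-score assignment described above, here is a minimal sketch in Python. The paired differencing between treated and control conditions and the two-sided test are our reading of the method, and the intensity values are invented, so this should be taken as an assumption-laden illustration rather than the authors' exact pipeline.

```python
import numpy as np
from scipy import stats

def paired_zscores(control, treated):
    """Paired-sample z-scores for protein intensities: ln-transform
    raw intensities, take per-protein treated-minus-control
    differences, standardize the differences across all proteins,
    and assign two-sided p-values from the standard normal
    distribution."""
    d = np.log(np.asarray(treated, float)) - np.log(np.asarray(control, float))
    z = (d - d.mean()) / d.std(ddof=1)
    p = 2 * stats.norm.sf(np.abs(z))  # survival function of N(0, 1)
    return z, p

# Flag deregulated proteins at p < 0.05; the sign of z gives direction
z, p = paired_zscores([2.1e6, 8.4e5, 3.1e7, 5.0e5],
                      [1.2e6, 9.0e5, 6.5e7, 4.8e5])
up = (p < 0.05) & (z > 0)
down = (p < 0.05) & (z < 0)
```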
Antiretroviral activities

HIV-1 reverse transcriptase (RT) activity was measured to assess antiretroviral efficacy in HIV-1 infected MDM. MDM were treated with 10, 100 or 250 μM native- or nanoATV for 16 hours, then infected with HIV-1 ADA for 4 hours at a MOI of 0.1 immediately, and five and 10 days, after drug treatment. Following viral infection, cells were cultured for an additional seven days, at which time cell media were collected for measurement of RT activity [7,9]. Briefly, in a 96-well plate, 10 μL of sample supernatants were mixed with 10 μL of solution containing 100 mM Tris-HCl (pH 7.9), 300 mM KCl, 100 mM dithiothreitol, 0.1% NP-40 and water. The reaction mixture was incubated at 37°C for 15 min, and 25 μL of a solution containing 50 mM Tris-HCl (pH 7.9), 150 mM KCl, 5 mM DTT, 15 mM MgCl₂, 0.05% NP-40, 10 μL/mL poly(A), 0.25 U/mL oligo d(T) and 10 μCi/mL ³H-thymidine triphosphate was added to each well; plates were incubated at 37°C for 18 hours. Following incubation, 50 μL of cold 10% TCA was added to each well, the wells were harvested onto glass microfiber filters and the filters were assessed for ³H-thymidine triphosphate incorporation by β-scintillation spectroscopy using a TopCount NXT (Perkin Elmer Inc.; Waltham, MA, USA).

Western blots

Protein expression of Rab5, −7 and −11, LAMP-1 and actin was detected by Western blot assays. MDM were treated with native drug or nanoATV and infected with HIV-1 ADA as described. Seven days after infection, cells were collected and lysed using CellLytic M Cell Lysis Reagent (Sigma-Aldrich). Protein content was quantitated using the Pierce 660-nm protein assay. Ten μg of protein was separated by electrophoresis using a NuPAGE Novex 4-12% Bis-Tris gel (Life Technologies-Novex; Grand Island, NY, USA). After electrophoresis, the proteins were transferred to a PVDF membrane (BioRad Laboratories, Hercules, CA, USA) and then blocked with 5% non-fat dry milk in PBS and 0.1% Tween-20 (PBST). Membranes were probed with primary antibodies for Rab5, Rab7, Rab11 or LAMP-1 and β-actin (Santa Cruz Biotechnology), followed by horseradish peroxidase-conjugated secondary antibody (Life Technologies-Novex). Proteins were detected using the SuperSignal West Pico Chemiluminescent substrate kit (Thermo Scientific) [7,11,66].

Immunofluorescence and confocal microscopy

For immunofluorescence staining, cells were washed three times with PBS and fixed with 4% paraformaldehyde (PFA) at room temperature for 30 min. Fixed cells were permeabilized with 0.1% Triton in PBS and then blocked with 5% bovine serum albumin (BSA) in PBS for 30 min. Cells were washed with 5% BSA in PBS and sequentially incubated with primary antibodies against HIV-1 p24 (Dako; Carpinteria, CA, USA) and either Rab5, −7, −11 or LAMP-1 (Santa Cruz Biotechnology) for 1 hour, then washed 3 times with PBS. Secondary antibodies conjugated with Alexa 594 or Alexa 647 dyes (Life Technologies-Molecular Probes) were applied against the primary antibody isotype, incubated at room temperature for 1 hour, then washed 3 times with PBS. Slides were covered in ProLong Gold AntiFade reagent with DAPI (Life Technologies-Molecular Probes) and imaged using a 40X oil lens on an LSM 510 confocal microscope (Carl Zeiss Microimaging, Inc.; Dublin, CA, USA) [7,67].

Cytokine bead array

MDM were infected with HIV-1 ADA for 4 hours at a MOI of 0.1, then immediately treated with 100 μM native- or nanoATV for 16 hours.
Twenty-four hours after drug treatment, 50 μL of cell culture media from treated and infected MDM was tested to determine the concentrations of inflammatory cytokines, measured with a cytokine bead array (CBA) detection kit (Becton Dickinson Biosciences; Mississauga, ON, USA) performed according to the instructions of the manufacturer. Monoclonal antibodies specific to interleukin-12 (IL-12), tumor necrosis factor (TNF), IL-10, IL-6, IL-1β and IL-8 were added to the samples in a 96-well plate. A serial dilution of known cytokines generated the standard curve. Following three hours of incubation, all samples were acquired and analysed on a FACSArray. The standard curve was determined using a parameter logistic model and analysed with FCAP Array software. Cytokine levels are expressed as pg/mL [68].
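Bead-array standard curves of this kind are commonly fitted with a four-parameter logistic (4PL) model. Since the text only names "a parameter logistic model", the sketch below assumes 4PL, and all calibration values are made up for illustration; it is not the exact parameterization used by the FCAP Array software.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = lower asymptote, b = slope,
    c = inflection concentration (EC50), d = upper asymptote."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard-curve data: concentration (pg/mL) vs. fluorescence
conc = np.array([10, 40, 156, 625, 2500, 5000], dtype=float)
mfi = np.array([12, 45, 160, 610, 1900, 2600], dtype=float)

params, _ = curve_fit(four_pl, conc, mfi, p0=[5, 1, 500, 3000], maxfev=10000)

def mfi_to_pg_per_ml(y, a, b, c, d):
    """Invert the fitted 4PL curve to interpolate a sample's
    cytokine concentration (pg/mL) from its measured fluorescence."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

sample_conc = mfi_to_pg_per_ml(800.0, *params)
```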
Regulator of calcineurin 1 deletion attenuates mitochondrial dysfunction and apoptosis in acute kidney injury through the JNK/Mff signaling pathway

Ischemia-reperfusion (I/R) induced acute kidney injury (AKI), characterized by excessive mitochondrial damage and cell apoptosis, remains a clinical challenge. Recent studies suggest that regulator of calcineurin 1 (RCAN1) regulates mitochondrial function in different cell types, but the underlying mechanisms require further investigation. Herein, we aim to explore whether RCAN1 is involved in mitochondrial dysfunction in AKI and the exact mechanism. In the present study, AKI was induced by I/R and cisplatin in RCAN1flox/flox mice and in mice with RCAN1 deletion specific to renal tubular epithelial cells (TECs). The role of RCAN1 in hypoxia-reoxygenation (HR)- and cisplatin-induced injury in the human renal proximal tubule epithelial cell line HK-2 was also examined by overexpression and knockdown of RCAN1. Mitochondrial function was assessed by transmission electron microscopy, JC-1 staining, MitoSOX staining, ATP production, mitochondrial fission and mitophagy. Apoptosis was detected by TUNEL assay, Annexin V-FITC staining and Western blotting analysis of apoptosis-related proteins. It was found that protein expression of RCAN1 was markedly upregulated in I/R- or cisplatin-induced AKI mouse models, as well as in HR models in HK-2 cells. RCAN1 deficiency significantly reduced kidney damage, mitochondrial dysfunction, and cell apoptosis, whereas RCAN1 overexpression led to the opposite phenotypes. Our in-depth mechanistic exploration demonstrated that RCAN1 increases the phosphorylation of mitochondrial fission factor (Mff) by binding to downstream c-Jun N-terminal kinase (JNK), then promotes dynamin-related protein 1 (Drp1) migration to mitochondria, and ultimately leads to excessive mitochondrial fission of renal TECs. In conclusion, our study suggests that RCAN1 induces mitochondrial dysfunction and apoptosis by activating the downstream JNK/Mff signaling pathway. RCAN1 may be a potential therapeutic target for conferring protection against I/R- or cisplatin-induced AKI.

INTRODUCTION

Acute kidney injury (AKI) is a common disease worldwide characterized by a dramatic decline in renal function, occurring in up to 20% of hospitalized patients and up to 17.5% of cancer patients. Common causes of AKI include ischemia-reperfusion (I/R) injury, sepsis, and exposure to nephrotoxic substances such as cisplatin and contrast agents [1,2]. AKI disrupts the cellular redox balance and induces excessive production of ROS in the kidney, leading to a series of events including loss of tubular function, mitochondrial damage, energy depletion, and cell death of renal tubular epithelial cells (TECs) [3]. Despite significant progress in understanding the pathogenesis of AKI over the past few decades, there remains a lack of effective interventions.

Regulator of calcineurin 1 (RCAN1) belongs to a family of endogenous regulators of calcineurin. The Rcan1 gene consists of seven exons and six introns and can generate different isoforms through alternative splicing, expressing different alternatively spliced proteins [4,5]. Recent studies have shown that RCAN1 plays a direct role in the regulation of mitochondrial function. RCAN1 produces a more confluent mitochondrial network, enhances mitochondrial function, and reduces apoptosis in cardiomyocytes, but RCAN1 overexpression causes mitochondrial dysfunction and induces apoptosis in neuronal cells and β cells [6][7][8][9].
These opposing results suggest that RCAN1 functions related to mitochondrial regulation appear to be dependent on different cellular contexts. Mitochondrial dysfunction is often observed in TECs when AKI occurs [10]. However, the role of RCAN1 in regulating mitochondrial function in tubular cells remains poorly defined. Additionally, whether RCAN1 plays a crucial role in mitochondrial regulation in AKI remains unknown. In the present study, we showed that RCAN1 accelerated the progression of AKI by inducing mitochondrial fragmentation and apoptosis. Mechanistically, RCAN1 activated the downstream c-Jun N-terminal kinase (JNK)/mitochondrial fission factor (Mff) signaling pathway, and RCAN1 directly interacted with JNK and Mff. In conclusion, our findings reveal a damaging effect of RCAN1 in AKI and suggest that RCAN1 might be a novel target for the treatment and prevention of AKI.

RESULTS

Deletion of tubule RCAN1 reduces renal dysfunction, mitochondrial damage, and apoptosis in AKI mice

Renal I/R injury in wild-type mice was induced in vivo by applying 30 min of ischemia followed by 24 h of reperfusion. Compared with the sham group, RCAN1.1S expression increased in the reperfused kidneys, but RCAN1.1L expression was not significantly changed (Fig. 1A). We next induced mouse AKI by cisplatin injection, and similar results were observed (Fig. S2A). Immunohistochemistry staining also showed RCAN1 upregulation compared with the sham group (Fig. 1B). To investigate the role of RCAN1 in renal I/R injury, RCAN1-conditional knockout (RCAN1 CKO) mice were constructed (Fig. S1, Fig. 1C). Compared with the sham group, serum creatinine and urea nitrogen significantly increased in the mice subjected to I/R and cisplatin injury and were downregulated in RCAN1 CKO mice (Fig. 1D, E, Fig. S2B, C). Next, alterations in renal histology were observed. As shown in Fig. 1F and Fig. S2D, compared with the sham group, AKI induced proximal tubular damage, as evidenced via HE staining. Interestingly, RCAN1 deletion attenuated tubular damage compared with the kidneys of the model group. Additionally, we examined the expression of Kim-1 and neutrophil gelatinase-associated lipocalin (NGAL), which are indicators of kidney injury [11]. The increased protein levels of Kim-1 and NGAL after I/R injury were prevented in RCAN1 CKO mice (Fig. 1G). Furthermore, an immunofluorescence (IF) assay also showed that RCAN1 deletion inhibited the upregulation of NGAL in the kidneys of I/R injury mice (Fig. 1H). Additionally, as shown in Fig. 2A and Fig. S2E, AKI upregulated the expression of proteins related to cell apoptosis, such as cleaved-caspase-3, cleaved-caspase-9, and Bax, and downregulated the level of the anti-apoptotic protein Bcl-2. RCAN1 deletion reversed the balance between anti- and pro-apoptotic factors. This finding was further supported by the results of the TUNEL assay. Compared with the sham group, TUNEL-positive cells increased to approximately 27%, whereas RCAN1 deletion repressed the apoptotic index to approximately 10% (Fig. 2B, Fig. S2F). We next observed mitochondrial fission, which has been shown to be modulated by Drp1 phosphorylation [12,13]. As shown in Fig. 2C and Fig. S2G, the protein level of phosphorylated Drp1 at Ser616 was upregulated in the renal tissue of I/R- and cisplatin-induced AKI mice. However, AKI did not influence the total level of Drp1. Moreover, mitochondrial fission factors such as phospho-Mff (p-Mff), Mff, and Fis1 were upregulated in response to AKI, which was reversed by RCAN1 deletion (Fig. 2C, Fig. S2G).
Additionally, we examined the mitochondrial fusion factors Mfn1, Mfn2, and Opa1, which can be considered antagonists of mitochondrial fission. Unexpectedly, compared with the sham group, AKI had no influence on the total levels of these proteins (Fig. 2C, Fig. S2G). Mitochondrial ultrastructural changes were also observed by electron microscopy. As shown in Fig. 2D, no noticeable ultrastructural changes were observed in the mitochondria of RCAN1 CKO mice, whereas the percentage of tubular cells with fragmented mitochondria increased in I/R-induced AKI mice, and TECs-specific knockout of RCAN1 attenuated this mitochondrial fragmentation. It has been reported that the activation of mitochondrial fission is related to the induction of mitophagy [14]. Therefore, we determined the expression of the autophagy marker LC3 and found that the levels of LC3-I and LC3-II in the I/R-injured kidneys were significantly higher than those in sham-operated kidneys (Fig. 2E). LC3 may accumulate due to increased upstream formation of autophagosomes or impaired downstream autophagosome-lysosome fusion. To distinguish between these two possibilities, we examined P62, a selective substrate of autophagy. The expression of P62 also remarkably increased in injured kidneys compared with the sham group (Fig. 2E). The increase in both LC3 and P62 expression in injured kidneys demonstrated that LC3 accumulation is likely attributable to the inhibition of autophagosome clearance, suggesting impaired autophagy flux in injured kidneys. In addition, the expression of P62 was downregulated in RCAN1 CKO mice, suggesting that the blocked autophagy flux was improved in I/R-induced AKI. All of these findings indicate that deletion of tubule RCAN1 reduces renal dysfunction, mitochondrial damage, and apoptosis in I/R- and cisplatin-induced AKI mice. RCAN1 silencing alleviates HR and cisplatin injury by improving mitochondrial dysfunction and inhibiting apoptosis Subsequently, HK-2 cells were used in vitro with hypoxia/reoxygenation (HR) and cisplatin stimuli to mimic AKI in animals. To provide more solid evidence for the role of RCAN1 in AKI, siRNA against RCAN1 was transfected into the HK-2 cell line (Fig. 3A, B, Fig. S5A). Compared with the control group, HR and cisplatin significantly increased the expression of RCAN1.1S but had no significant effect on RCAN1.1L, which was consistent with the results in AKI kidney tissues. In addition, IF also showed that the expression of RCAN1 was upregulated in the HR group and that RCAN1 was mainly distributed in the cytoplasm (Fig. 3B). Additionally, HR injury significantly reduced the viability of HK-2 cells, and HR-mediated cell death was largely repressed by RCAN1 silencing (Fig. 3C). As shown in Fig. 3D and Fig. S5B, HR and cisplatin injury upregulated the expression of cle-caspase-3, cle-caspase-9, and Bax, while the protein level of Bcl-2 was downregulated in injured cells. More importantly, RCAN1 silencing could reverse the balance between anti- and pro-apoptotic factors. Next, we fractionated cytoplasmic and mitochondrial proteins and detected the expression levels of Cytochrome-c (Cyt-c) and Bax. Compared with the control group, HR promoted Bax migration to mitochondria and therefore reduced the level of cytoplasmic Bax. At the same time, the pro-apoptotic factor Cyt-c was released from the mitochondria into the cytoplasm after HR injury. Silencing of RCAN1 limited Bax translocation and Cyt-c leakage (Fig. 3D).
Moreover, Annexin V-FITC/PI staining showed that RCAN1 knockdown reduced HR-induced apoptosis in HK-2 cells (Fig. 3E). To further explain the effects of RCAN1 on HR injury, we focused on mitochondrial fission. Drp1-related mitochondrial fission is noted as a critical step in aggravating HR injury [15]. As shown in Fig. 4A and Fig. S5C, the mitochondrial morphology changed from an elongated network into small spheres or short rods after HR and cisplatin injury. Interestingly, silencing of RCAN1 reduced mitochondrial fragmentation. Next, we measured the protein levels of mitochondrial fission factors via Western blotting. Compared with the control group, the level of phosphorylated Drp1 at Ser616, but not total Drp1, was upregulated in the HR and cisplatin groups. Similarly, the mitochondrial fission factors p-Mff, Mff, and Fis1 were upregulated, and these effects were reversed by RCAN1 silencing (Fig. 4B, Fig. S5D). Given that Drp1 translocation onto the surface of mitochondria is a prerequisite for mitochondrial fission, we fractionated cytoplasmic and mitochondrial proteins and detected the expression levels of Drp1. Compared with the control group, HR promoted Drp1 migration to mitochondria and reduced cytoplasmic Drp1. In the meantime, IF was used to observe the co-localization of p-Drp1 S616 and mitochondria. As shown in Fig. 4C, loss of RCAN1 maintained the mitochondrial network and blocked Drp1 translocation to mitochondria. Although the total levels of the mitochondrial fusion factors Mfn1, Mfn2, and Opa1 did not change, HR injury reduced the content of these proteins in mitochondria, and knockdown of RCAN1 blocked their reduction (Fig. 4D). Accordingly, mitochondrial homeostasis was measured with or without RCAN1 silencing in the setting of HR and cisplatin injury. First, to clarify the role of fission in cellular damage, we focused on the mitochondrial membrane potential and mito-ROS. Mitochondrial membrane potential, assessed by JC-1 staining, was decreased under HR and cisplatin injury and was restored to near-normal levels by RCAN1 silencing (Fig. 4E, Fig. S5E).
Fig. 1 legend: … WT mice were subjected to I/R injury. The kidneys of sham-operated or I/R-AKI mice were isolated, and RCAN1 was monitored via Western blotting and IHC. Scale bar, 25 μm. n = 6. C RCAN1 expression in RCAN1 f/f and RCAN1 CKO mice was detected by Western blotting (n = 6). D, E Scr and BUN were measured using an assay kit (n = 8). F HE staining was conducted to observe I/R-mediated renal damage. Scale bar, 50 μm (n = 6). G The protein levels of Kim-1 and NGAL in kidney tubular injury were evaluated by Western blotting (n = 6). H An immunofluorescence (IF) assay was performed to analyze the expression of NGAL in response to renal I/R injury. Scale bar, 50 μm. n = 3. *p < 0.05, **p < 0.01.
Notably, as shown in Fig. 4F and Fig. S5F, HR and cisplatin injury drove HK-2 cells to produce excessive ROS, as demonstrated via MitoSOX staining, but RCAN1 knockdown reduced ROS levels. Mitochondrial function was then monitored via ATP production. Compared with the control group, HR injury suppressed the concentration of cellular ATP, and this effect was nullified by RCAN1 silencing (Fig. 4G). In addition to mitochondrial fission, mitophagy is another mechanism that preserves mitochondrial homeostasis. Fission is reportedly accompanied by mitophagy, which aggravates mitochondrial injury and facilitates cell death via excessive self-consumption [16]. As shown in Fig. S4A, B and Fig.
S5G, we noticed that HR and cisplatin injury robustly increased mitophagy markers, including LC3, P62, PINK1, Parkin, and BNIP3. However, these increases were partly prevented by knockdown of RCAN1. Furthermore, we used IF staining of COX IV, P62, and BNIP3 to observe mitophagy. Our results clearly showed that HR injury promoted the co-localization of P62 and BNIP3 with mitochondria, suggesting impaired mitophagy flux in injured HK-2 cells. RCAN1 silencing reduced P62 and BNIP3 co-localization (Fig. S4C, D), suggesting that the impaired mitophagy flux was improved. These data collectively indicate that silencing RCAN1 under HR injury improved mitochondrial dysfunction and reduced mitochondria-dependent cellular apoptosis.
Fig. 2 legend: … reduced I/R-initiated mitochondrial damage and apoptosis. A Western blotting was performed to analyze the expression of Pro-caspase-3, Cle-caspase-3, Pro-caspase-9, Cle-caspase-9, Bax, and Bcl-2 from sham-operated and I/R-AKI kidneys of RCAN1 f/f and RCAN1 CKO mice (n = 6). B A TUNEL assay was conducted to observe cell death. The number of TUNEL-positive cells was counted in the right panel. Scale bar, 50 μm. n = 3. C Western blotting was performed to analyze the expression of p-Drp1 S616, Drp1, p-Mff, Mff, Fis1, Mfn1, Mfn2, and Opa1 from sham-operated and I/R-AKI kidneys of RCAN1 f/f and RCAN1 CKO mice (n = 6). D Representative TEM micrographs of mitochondrial morphology from sham-operated and I/R-AKI kidneys of RCAN1 f/f and RCAN1 CKO mice. Scale bar, 1 μm. E Western blotting was performed to analyze the protein levels of LC3 and P62 (n = 6). *p < 0.05, **p < 0.01.
Taken together, these data illustrate that RCAN1 plays a crucial role in HR- and cisplatin-induced mitochondrial dysfunction and apoptosis. JNK/Mff signaling is involved in RCAN1-mediated HR injury Given that Mff and its phosphorylation showed the most significant changes among the mitochondrial fission-related proteins after I/R and RCAN1 deletion, we next focused on Mff. The JNK pathway has been found to be the upstream activator of Mff in the cardiac I/R model [17], suggesting that the regulatory effect of RCAN1 on Mff may rely on JNK activity. As shown in Fig. 5A, compared with the sham group, the JNK phosphorylation (p-JNK) level significantly increased in mice subjected to I/R injury and was downregulated in RCAN1 CKO mice. Meanwhile, the protein level of p-JNK was significantly increased under HR injury, as well as under cisplatin treatment, and loss of RCAN1 prevented the upregulation of p-JNK (Fig. 5B, C). These results indicate that JNK is involved in HR- and cisplatin-induced injury and that RCAN1 may regulate JNK activity.
Fig. 3 legend: RCAN1 silencing alleviated HR injury by inhibiting apoptosis. A The renal TEC line HK-2 was used with a hypoxia and reoxygenation (HR) stimulus to mimic animal I/R injury. siRNA against RCAN1 and si-NC were transfected into HK-2 cells. The alteration of RCAN1 was detected by Western blotting (n = 5). B An IF assay for RCAN1 was performed, and DAPI was used to tag the nucleus. Scale bar, 10 μm. n = 3. C A CCK-8 assay was performed to analyze cellular viability (n = 5). D The proteins of cytoplasm and mitochondria were fractionated, and Western blotting was performed to analyze the expression of Pro-caspase-3, Cle-caspase-3, Pro-caspase-9, Cle-caspase-9, Bax, Bcl-2, Cyt-c, cyto-Cyt-c, mito-Cyt-c, cyto-Bax, and mito-Bax, with β-actin as the loading control for cytoplasm and COX IV for mitochondria. n = 5. E HK-2 cells were stained with Annexin V-FITC and PI to determine cell apoptosis using a flow cytometry assay (n = 4). *p < 0.05, **p < 0.01.
Fig. 4 legend: RCAN1 silencing attenuated HR injury by regulating mitochondrial dysfunction. A Mitochondrial morphology of HK-2 was assessed by MitoTracker™ Deep Red staining, and the average length of mitochondria was measured.
Scale bar, 10 μm. n = 3. B The proteins of cytoplasm and mitochondria were fractionated, and Western blotting was performed to analyze the expression of p-Drp1 S616, Drp1, cyto-Drp1, mito-Drp1, p-Mff, Mff, and Fis1, with β-actin used as the loading control for cytoplasm and COX IV for mitochondria. n = 5. C The co-localization of p-Drp1 S616 and mitochondria was detected by IF, and the mitochondria were labeled with the COX IV antibody. Scale bar, 10 μm. n = 3. D The proteins of cytoplasm and mitochondria were fractionated, and Western blotting was performed to analyze the expression of Mfn1, cyto-Mfn1, mito-Mfn1, Mfn2, cyto-Mfn2, mito-Mfn2, Opa1, cyto-Opa1, and mito-Opa1, with β-actin as the loading control for cytoplasm and COX IV for mitochondria. n = 5. E The mitochondrial potential was observed via JC-1 staining. The red-to-green fluorescence ratio was recorded to quantify the mitochondrial potential. Scale bar, 25 μm. n = 3. F Mitochondrial ROS levels were detected by MitoSOX and analyzed by confocal microscopy. Scale bar, 25 μm. n = 3. G ATP production was measured to reflect mitochondrial function. n = 4. *p < 0.05, **p < 0.01.
Both the silencing of RCAN1 and the inhibition of JNK via SP600125 (SP) markedly reduced the HR-induced upregulation of p-Mff and Mff. In contrast, reactivation of JNK in RCAN1-silenced cells via anisomycin (Ani) re-elevated the expression of p-Mff and Mff (Fig. 5D). Collectively, these data suggest that RCAN1 under HR injury led to an increase in Mff-related mitochondrial fission that occurred at least partially through JNK-mediated Mff phosphorylation. To further investigate the effects of JNK on mitochondria, we observed mitochondrial morphology, mitochondrial membrane potential, and mitochondrial ROS by confocal microscopy. As shown in Fig. 5E-J, SP partially restored mitochondrial morphology, decreased mitochondrial ROS release, and increased mitochondrial membrane potential under HR injury, consistent with the effects of RCAN1 silencing. The effect of Ani was opposite to that of SP. To further explore the role of JNK, we silenced JNK with siRNA. As shown in Fig. 6A, B, Western blotting and IF staining showed that JNK silencing was effective. In addition, p-JNK was mainly distributed in the cytoplasm and nucleus of HK-2 cells (Fig. 6B). It was evident that JNK silencing attenuated HR injury-induced apoptosis and excessive mitochondrial division (Fig. 6C, E). Meanwhile, mitochondrial morphology staining showed that silencing of JNK reduced HR-induced mitochondrial fragmentation (Fig. 6D). Additionally, mitochondrial function was monitored via ATP production. Compared with the control group, HR injury suppressed the concentration of cellular ATP, and this effect was nullified by JNK silencing (Fig. 6F).
Fig. 5 legend: … (n = 6). B, C HK-2 cells were treated with HR or cisplatin, and the expression levels of p-JNK and JNK were detected by Western blotting (n = 5). D HK-2 cells were treated with the JNK inhibitor SP600125 (SP) and the JNK activator anisomycin (Ani) and then exposed to HR. Western blotting was used to evaluate changes in p-JNK, JNK, p-Mff, and Mff expression (n = 5). E, F HK-2 cells were treated with SP and Ani, mitochondrial morphology of HK-2 was assessed by MitoTracker™ Deep Red staining, and the average length of mitochondria was measured. Scale bar, 10 μm. n = 3.
G, H HK-2 cells were treated with SP and Ani, and the mitochondrial potential was observed via JC-1 staining. The ratio of red to green fluorescence was recorded to quantify the mitochondrial potential. n = 3. I, J Mitochondrial ROS levels were detected by MitoSOX and analyzed by confocal microscopy (n = 3). Scale bar, 25 μm. *p < 0.05, **p < 0.01.
Taken together, these data clearly show that the RCAN1-JNK/Mff pathway was the upstream regulator of HR-activated mitochondrial dysfunction and apoptosis. Given that RCAN1 affected mitochondrial fission and apoptosis via the JNK/Mff signaling pathway, we next explored the relationship between RCAN1, JNK, and Mff. First, we performed co-immunoprecipitation (Co-IP) assays for RCAN1, JNK, and Mff and found a strong interaction between endogenous RCAN1, JNK, and Mff in HK-2 cells under HR injury (Fig. 6G-I). The results of IF and Western blotting showed that RCAN1 resided in the cytoplasm but not in the mitochondria, and RCAN1 and p-JNK were co-localized mainly in the cytoplasm (Fig. 6J, Fig. S3). Confocal microscopy also showed co-localization of JNK with Mff and of RCAN1 with Mff (Fig. 6K, L). We found that RCAN1, JNK, and Mff all bound together in HK-2 cells. HK-2-specific RCAN1 overexpression aggravates HR injury To investigate the effects of RCAN1 on the JNK signaling pathway, mitochondrial fission and apoptosis, RCAN1 was overexpressed by transfecting GFP-RCAN1 into HK-2 cells (Fig. 7A). As shown in Fig. 7B and D, RCAN1 overexpression (RCAN1 OVE) upregulated the protein levels of p-JNK, p-Mff, and apoptosis-related proteins such as cle-caspase-3, cle-caspase-9, and Bax, and downregulated Bcl-2 with or without HR injury. Co-IP assays in HK-2 cells after RCAN1 OVE also showed a constitutive interaction of GFP-RCAN1 with JNK and Mff, similar to the above findings under HR injury (Fig. 7C). As seen in Fig. 7E, confocal microscopy showed that mitochondria changed into small spheres or short rods after RCAN1 OVE. The expression of p-Drp1 S616 and Fis1 was also upregulated (Fig. 7F). RCAN1 OVE promoted Drp1 migration to mitochondria, and the mitochondrial contents of Mfn1, Mfn2, and Opa1 were decreased under RCAN1 OVE (Fig. 7G). Moreover, as shown in Fig. 7H, I, RCAN1 OVE further increased mitochondrial ROS and decreased ATP production compared with the HR group. Taken together, these results demonstrate that RCAN1 OVE aggravated mitochondrial dysfunction and apoptosis during HR injury. JNK silencing alleviates RCAN1 overexpression-induced mitochondrial dysfunction and apoptosis Finally, we evaluated whether JNK activity was required for RCAN1 OVE-induced mitochondrial fission and apoptosis. We thus overexpressed RCAN1 by transfecting GFP-RCAN1 and silenced JNK with siRNA to inhibit JNK function. Consistent with the previous results, Western blotting further validated that overexpressed RCAN1 upregulated the expression of cle-caspase-3 and Bax and downregulated Bcl-2, and JNK silencing reversed the balance between anti- and pro-apoptotic factors (Fig. 8A). Moreover, silencing of JNK reduced mitochondrial fragmentation and the upregulation of p-Drp1 S616 and Fis1 in RCAN1 OVE cells (Fig. 8B, C). Furthermore, SP and si-JNK reduced the phosphorylation level of Mff and total Mff induced by RCAN1 OVE (Fig. 8D).
Finally, compared with the control group, RCAN1 OVE suppressed the concentration of cellular ATP, and this effect was nullified by JNK silencing (Fig. 8E). Taken together, these results suggest that JNK activity is essential for the exacerbating impact of RCAN1 OVE on HR injury. DISCUSSION Numerous studies have shown that RCAN1 is involved in pathophysiological processes such as Alzheimer's disease, myocardial ischemia-reperfusion injury, and diabetes [18][19][20]. However, the role of RCAN1 in AKI is unclear. In this study, we found the following: (1) the expression of RCAN1 was significantly upregulated in I/R- or cisplatin-induced AKI; (2) RCAN1 knockout alleviated renal injury, mitochondrial dysfunction, and caspase-9-dependent apoptosis in AKI; (3) I/R, HR or cisplatin all led to an increase in JNK phosphorylation and the upregulation of Mff; (4) silencing of JNK alleviated HR-induced mitochondrial dysfunction and apoptosis, whereas activation of JNK increased the expression of p-Mff and Mff, triggering excessive mitochondrial fission; (5) RCAN1, JNK, and Mff all bound together in HK-2 cells; (6) RCAN1 overexpression increased mitochondrial dysfunction and apoptosis, which were alleviated by JNK silencing. To the best of our knowledge, this is the first study to describe the role of the RCAN1/JNK pathway as a mechanism responsible for AKI via the mediation of Mff-required mitochondrial fission. RCAN1 is a multifunctional protein whose primary function is related to the regulation of calcineurin activity and mitochondrial function [21]. It has been reported that RCAN1 is significantly upregulated in neuronal cells of patients with Down syndrome (DS) and Alzheimer's disease, and in β cells in type 2 diabetes, accompanied by disruption of mitochondrial homeostasis [6,8,19,[22][23][24]. However, cardiac-specific RCAN1 overexpression in mice protects the heart from various pathological stresses, including I/R [18]. These conflicting results indicate that the mitochondrial functions of RCAN1 depend on the cellular context. Previous studies have suggested that mitochondrial dysfunction plays a vital role in the pathogenesis of AKI [25][26][27]. However, whether RCAN1 is involved in mitochondrial regulation in AKI remained unknown. In this study, we found that RCAN1 was significantly upregulated in I/R- or cisplatin-induced AKI. To further clarify the role of RCAN1 in AKI, we crossed RCAN1-floxed mice with TECs-specific Cdh16-Cre mice to generate TECs-specific RCAN1 knockout mice, and found that RCAN1 knockout reduced renal injury, mitochondrial dysfunction, and apoptosis in AKI. To further explore the molecular mechanism of mitochondrial regulation by RCAN1, we cultured HK-2 cells and constructed models using HR and cisplatin. We found that HR- and cisplatin-induced mitochondrial damage, such as mitochondrial fragmentation, decreased mitochondrial membrane potential, mitochondrial ROS overproduction, insufficient ATP supply, Cyt-c leakage, and activation of caspase-9-dependent apoptosis, was improved to some extent by RCAN1 silencing. A growing number of studies have shown that excessive mitochondrial fission is common in HR-injured cells [10,28]. During mitochondrial fission, Drp1 translocation is critical, and its recruitment to mitochondria requires corresponding receptors located on the mitochondrial outer membrane, such as Fis1, Mff, MiD49, and MiD51 [29,30].
Our study found the most dramatic changes in Mff and phosphorylated Mff after I/R injury and RCAN1 deletion; therefore, we focused on Mff and its phosphorylation to understand how RCAN1 controls mitochondrial fission. Many studies have found that Mff and the upstream JNK signaling pathway play an important role in myocardial ischemia-reperfusion injury [17], diabetes [31,32], lung cancer [33], and ox-LDL-induced endothelial cell injury [34]. However, the role of the JNK/Mff signaling pathway in AKI has not been reported. In this study, we found that I/R or cisplatin injury upregulated the expression of RCAN1 and led to the phosphorylation of JNK and Mff, whereas RCAN1 knockout reduced their phosphorylation. Furthermore, activation of JNK increased the expression of Mff and p-Mff under HR injury, triggering excessive mitochondrial fission. More importantly, our data identified that RCAN1 overexpression-induced apoptosis and Mff-mediated mitochondrial dysfunction were alleviated after JNK silencing. Therefore, we speculated that RCAN1 might be located upstream of the JNK/Mff signaling pathway. Surprisingly, the Co-IP results suggested that RCAN1 directly interacted with JNK and Mff in HK-2 cells. Therefore, this is the first study to describe the relationship between RCAN1 and AKI in detail. Notably, RCAN1 was not present in the mitochondria. We conclude that RCAN1 promoted the phosphorylation of Mff by binding to downstream JNK and promoted Drp1 migration to mitochondria, ultimately leading to excessive mitochondrial division and apoptosis.
Fig. 6 legend: JNK silencing alleviated HR injury by regulating mitochondrial dysfunction and inhibiting apoptosis, and RCAN1, JNK, and Mff all bound together. A siRNA against JNK and si-NC were transfected into cells. The expression level of JNK was monitored via Western blotting (n = 5). B An IF assay for p-JNK was performed, and DAPI was used to tag the nucleus. Scale bar, 10 μm. n = 3. C The renal tubular epithelial cell line HK-2 was used with HR, and siRNA against JNK and si-NC were transfected into cells. The expression levels of Pro-caspase-3, Cle-caspase-3, Pro-caspase-9, Cle-caspase-9, Bax, and Bcl-2 were detected by Western blotting (n = 5). D Mitochondrial morphology of HK-2 was assessed by MitoTracker™ Deep Red staining, and the average length of mitochondria was measured. Scale bar, 10 μm. n = 3. E Western blotting was performed to analyze the expression of p-Drp1 S616, Drp1, p-Mff, Mff, Fis1, Mfn1, Mfn2, and Opa1 in HK-2 cells (n = 5). F Mitochondrial function was measured with an ATP assay kit (n = 4). G-I Representative Co-IP analysis of RCAN1, JNK, and Mff in HK-2 cells under HR injury (n = 3). J-L The co-localization of RCAN1 and p-JNK, Mff and JNK, as well as Mff and RCAN1, was detected by IF, and DAPI was used to tag the nucleus. Scale bar, 10 μm. n = 3. *p < 0.05, **p < 0.01.
Recent evidence shows that the activation of mitochondrial fission is related to mitophagy induction [14]. The PINK1/Parkin signaling pathway plays a central role in regulating mitophagy [35]. In addition, BNIP3 can also interact with LC3 family proteins via its LIR motifs facing the cytosol, thereby mediating mitophagy [36]. Damaged mitochondria are then labeled with ubiquitin and bind with LC3 to form mitochondrial autophagosomes [37].
Fig. 7 legend: HK-2-specific RCAN1 overexpression aggravated HR injury. A HK-2 cells were transfected with empty vector or pCMV-EGFP-RCAN1 (RCAN1 OVE) for 24 h or 48 h, and the protein level of GFP-RCAN1 was assayed by Western blotting (n = 3).
B HK-2 cells were used with HR and then treated with or without RCAN1 OVE, and the protein levels of GFP-RCAN1, p-JNK, t-JNK, p-Mff, and Mff were assayed by Western blotting (n = 5). C Representative Co-IP analysis of GFP-RCAN1, JNK, and Mff in HK-2 cells under RCAN1 OVE (n = 3). D The expression levels of GFP-RCAN1, Pro-caspase-3, Cle-caspase-3, Pro-caspase-9, Cle-caspase-9, Bax, and Bcl-2 were detected by Western blotting (n = 5). E Mitochondrial morphology of HK-2 was assessed by MitoTracker™ Deep Red staining, and the average length of mitochondria was measured. Scale bar, 10 μm. n = 3. F Western blotting was performed to analyze the expression of GFP-RCAN1, p-Drp1 S616, Drp1, Fis1, Mfn1, Mfn2, and Opa1 in HK-2 cells (n = 5). G The proteins of mitochondria were fractionated, and Western blotting was performed to analyze the expression of mito-Drp1, mito-Mfn1, mito-Mfn2, and mito-Opa1, with COX IV used as the loading control for mitochondria (n = 5). H Mitochondrial ROS levels were detected by MitoSOX and then analyzed by confocal microscopy. Scale bar, 25 μm. n = 3. I ATP production was measured to reflect mitochondrial function. n = 3. *p < 0.05, **p < 0.01.
We examined the role of RCAN1 in mitophagy and demonstrated that RCAN1 might mediate LC3 accumulation and impair autophagosome clearance, resulting in dysfunctional tubular autophagy by regulating PINK1/Parkin- and BNIP3-mediated mitophagy in AKI. In conclusion, renal I/R and cisplatin injury induced the upregulation of RCAN1, leading to increased phosphorylation of JNK. JNK activation upregulated downstream Mff and promoted Mff-mediated mitochondrial division, ultimately resulting in apoptosis of TECs. Our findings shed light on the role of RCAN1 in AKI and suggest that RCAN1 might be a new target for the treatment and prevention of AKI. MATERIALS AND METHODS Animals Animal maintenance and all experiments were performed in accordance with the Chinese Ethics Community Guidelines and approved by the Center for Animal Experiment, Wuhan University. Mice were maintained in an air-conditioned room (22 ± 2°C) under a 12 h/12 h light/dark cycle with free access to water and standard chow. TECs-specific cadherin-16 (Cdh16)-Cre transgenic mice were purchased from Shanghai Model Organisms Center, Inc. (Shanghai, China). RCAN1 flox/flox (RCAN1 f/f) mice were generated by CRISPR/Cas9-stimulated homologous recombination. To generate mice with RCAN1 deletion specifically in TECs, RCAN1 f/f mice were crossed with Cdh16-Cre mice (Fig. S1A). All mice were crossed on a C57BL/6 background for at least three generations. The genotypes of RCAN1 f/f, Cdh16-Cre+, and RCAN1-conditional knockout (RCAN1 CKO) mice were confirmed by PCR using specific primers (Fig. S1B). Primer sequences are described in Table S1. Renal I/R-AKI and cisplatin-AKI models in vivo Renal ischemic AKI was induced using the I/R injury model (8-10-week-old male mice, n = 6-8/group). The mice were assigned to 4 groups (RCAN1 f/f Sham, RCAN1 CKO Sham, RCAN1 f/f I/R-AKI, and RCAN1 CKO I/R-AKI).
In brief, renal ischemia/reperfusion was induced by the following procedure: mice were placed in a prone position on a heated surface covered with an absorbent pad; the dorsal skin along the midline of the mouse (~1.5 cm) was cut using scissors and forceps; a small incision was made through the bilateral flank muscle and fascia above the kidney, and the kidney was exteriorized; ischemia was applied for 30 min by clamping the right renal pedicle with nontraumatic clamps, which were then removed to allow reperfusion. The sham group underwent the same operation but without ischemia. Mice were sacrificed after 24 h, and blood samples and kidneys were collected. To induce cisplatin-AKI (Cis-AKI), mice received a single intraperitoneal (i.p.) injection of cisplatin (20 mg/kg body weight). The mice were assigned to 4 groups (RCAN1 f/f Sham, RCAN1 CKO Sham, RCAN1 f/f Cis-AKI, and RCAN1 CKO Cis-AKI), and control mice were injected with 0.9% saline as described previously [38]. After 72 h of cisplatin treatment, mice from all groups were sacrificed. Blood was collected for serum creatinine (Scr) and blood urea nitrogen (BUN) measurements, and the isolated serum was stored at −80°C for further analysis. Kidney tissues for histological analysis were fixed in 4% paraformaldehyde (PFA). The remaining kidney tissues were stored at −80°C for protein analysis. Serum biochemical analysis Blood was collected, and serum samples were obtained by centrifugation at 1500 rpm for 10 min. The levels of Scr and BUN were evaluated using a Creatinine Assay Kit (C011-1-1) and a Urea Assay Kit (C013-1-1, Jiancheng Bioengineering Institute, Nanjing, China), respectively.
Fig. 8 legend: … (n = 5). B Mitochondrial morphology of HK-2 was assessed by MitoTracker™ Deep Red staining, and the average length of mitochondria was measured. Scale bar, 10 μm. n = 3. C Western blotting was performed to analyze the expression of GFP-RCAN1, p-Drp1 S616, Drp1, Fis1, Mfn1, Mfn2, and Opa1 in HK-2 cells (n = 5). D HK-2 cells were treated with SP or siRNA against JNK under RCAN1 OVE, and the protein levels of GFP-RCAN1, p-Mff, and Mff were assayed by Western blotting (n = 5). E ATP production was measured to reflect mitochondrial function. n = 3. *p < 0.05, **p < 0.01.
Histopathology, immunohistochemistry (IHC), immunofluorescence (IF), and transmission electron microscopy Kidneys from mice in all groups were fixed in 4% PFA for 24 h at room temperature and embedded in paraffin. Sections were prepared for hematoxylin and eosin (HE) staining, and the tubular injury index was determined as previously described [13]. In addition, for IHC staining, renal sections were incubated with the primary antibody anti-RCAN1 (A5326, ABclonal, Wuhan, China) overnight at 4°C and then with HRP-conjugated secondary antibody (Beijing Fir Jinqiao, Beijing, China) and the DAB substrate. Micrographs of the stained sections were captured by light microscopy (Zeiss Imager A2, Germany) and quantified using ImageJ (NIH, Bethesda, MD, USA). Cells and frozen renal tissue sections were fixed with 4% PFA for 30 min at room temperature, washed with PBS, and permeabilized with 0.3% Triton X-100 for 10 min. After blocking in 5% BSA for 30 min, samples were immunolabeled with primary antibodies overnight at 4°C. The primary antibodies used in the present study were as follows: NGAL (…). Briefly, 1 mm³ of fresh renal cortex was placed in an electron microscopy fixative (G1102, Servicebio) at 4°C overnight.
The kidney tissues were then dehydrated in an ascending series of ethanol and embedded in epoxy resin. Ultrathin sections (70 nm) were cut with an ultramicrotome (Leica Ultracut), stained with uranyl acetate and lead citrate, and then examined in a transmission electron microscope (Hitachi, HT7700, Japan) at 60-80 kV. Terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) assay and Annexin V-FITC apoptosis detection Apoptotic cell death in kidney sections was determined using TUNEL staining (Promega, Madison, WI, USA). Briefly, kidney sections were deparaffinized, pretreated with 0.1 M sodium citrate, pH 6.0, at 65°C for 30 min, and then incubated with a TUNEL reaction mixture for 1 h at 37°C in a dark chamber; the nuclei were labeled with DAPI. Positive staining with nuclear DNA fragmentation was detected by fluorescence microscopy (Zeiss, Germany). Ten representative fields were randomly selected per section, and the TUNEL-positive cells per mm² were counted. Flow cytometric analysis was performed using the Annexin V-FITC/PI apoptosis detection kit (abs50001, Absin, Shanghai, China) to evaluate the percentage of apoptotic cells. Cells were harvested, washed twice with cold PBS, and resuspended in 100 μL of binding buffer. Cells were stained with 5 μL of Annexin V-FITC for 15 min and 5 μL of PI for 5 min at room temperature in the dark, then measured by a laser eight-color flow cytometer (FACSCalibur, BD Biosciences, San Jose, CA, USA) and quantified using FlowJo 7.6 software. Fractionating proteins of cytoplasm and mitochondria Cytoplasmic and mitochondrial proteins were fractionated according to the manufacturer's instructions (C3601, Beyotime Biotechnology, Shanghai, China). β-actin was used as the loading control for cytoplasm and COX IV for mitochondria. Western blotting analysis and co-immunoprecipitation Kidney cortex and HK-2 cells were harvested and lysed with a lysis buffer (50 mM Tris (pH 7.4), 150 mM NaCl, 5 mM EDTA, 1% Triton X-100, 1% glycerol, 1 mM NaF, 1 mM β-glycerol phosphate, 0.1 mM Na3VO4, and 60 mM octyl β-D-glucopyranoside) containing a protease inhibitor cocktail and PMSF. Protein lysates were prepared and centrifuged at 13,000 rpm and 4°C for 15 min to remove insoluble materials. Protein concentration was determined using a BCA protein assay kit (P0010S, Beyotime). Equivalent quantities of protein, including total, cytoplasmic, or mitochondrial protein, were separated on 8-15% SDS-PAGE gels and transferred to nitrocellulose membranes (HATF00010, Millipore, Germany). The membranes were blocked with non-fat milk (5%) in TBST buffer for 2 h, probed with primary antibodies overnight at 4°C, and then incubated with HRP-conjugated secondary antibodies at room temperature for 2 h. Next, the membranes were washed with TBST and visualized on X-ray film using an enhanced chemiluminescence reagent (P0018M, Beyotime). The optical density of each target protein band was assessed using Quantity One (Bio-Rad, USA) and normalized to the density of the corresponding β-actin or COX IV band in the same sample. Detailed information about the primary antibodies is listed in Table S2. Total protein lysates (1 mg) from each sample were used for immunoprecipitation. The samples were incubated with rabbit or mouse polyclonal IgG control antibodies, anti-RCAN1 (sc-377507, Santa Cruz), anti-JNK (sc-7345, Santa Cruz), anti-Mff (84580, CST), or anti-GFP (sc-9996, Santa Cruz). The lysates were then rotated overnight at 4°C.
Subsequently, 40 μL of resuspended protein A/G magnetic beads was added to the lysates, and the mixture was rotated for another 2 h. After washing and denaturing in immunoprecipitation buffer, the eluted proteins were immunoblotted with anti-RCAN1, anti-GFP, anti-JNK, and anti-Mff as described above. Mitochondrial morphology, mitochondrial membrane potential, and mitochondrial ROS Mitochondrial morphology in live cells was observed by staining with MitoTracker Deep Red (100 nM, M22426, Thermo Fisher) followed by confocal microscopy (Leica TCS SP8). The mitochondrial length was analyzed using Image-Pro Plus 6.0 software (Media Cybernetics). Mitochondrial membrane potential was determined using the JC-1 assay (T3168, Thermo Fisher) according to the manufacturer's protocol. Cells were washed with PBS and then stained with the JC-1 probe for 30 min at 37°C/5% CO2 in the dark. Subsequently, PBS was used to remove free probe, and images were captured using confocal microscopy (Leica TCS SP8). The red-to-green fluorescence ratio was employed to evaluate changes in mitochondrial membrane potential. For quantification, the red/green immunosignals were converted into an average grayscale intensity and analyzed using Image-Pro Plus 6.0 software (Media Cybernetics). MitoSOX Red mitochondrial superoxide indicator (5 μM, M36008, Thermo Fisher) was used to stain mitochondrial ROS. In brief, cells were stained with MitoSOX Red for 30 min at 37°C/5% CO2 in the dark. Samples were subsequently washed with PBS to remove free probe. ROS quantification was determined by the fluorescence intensity of mito-ROS, based on a previous study [43]. ATP measurement ATP levels were measured using an ATP assay kit according to the manufacturer's instructions. Briefly, the collected cells and tissues were lysed with lysis buffer and then centrifuged at 12,000 g for 10 min at 4°C. An aliquot of the supernatant plus ATP detection solution was then added to a 96-well plate. Luminescence was detected using a SpectraMax M5 multi-mode microplate reader. Statistical analysis All data are presented as the mean ± SD from triplicate or more experiments performed in parallel unless otherwise indicated. Statistically significant differences were determined by one-way ANOVA followed by Bonferroni's multiple comparison test using GraphPad Prism 6 software. A value of P < 0.05 was considered significant.
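The group comparisons above were run in GraphPad Prism. As a minimal sketch of the same procedure in Python, assuming four hypothetical measurement arrays rather than the study's data, one-way ANOVA followed by Bonferroni-corrected pairwise tests can be reproduced with SciPy:

```python
# Sketch of one-way ANOVA with Bonferroni-corrected pairwise comparisons.
# The four arrays and group names below are hypothetical placeholders.
from itertools import combinations

import numpy as np
from scipy import stats

groups = {
    "sham_ff":  np.array([0.21, 0.25, 0.19, 0.23, 0.22, 0.20]),
    "sham_cko": np.array([0.20, 0.24, 0.21, 0.22, 0.19, 0.23]),
    "ir_ff":    np.array([1.10, 1.25, 0.98, 1.31, 1.15, 1.22]),
    "ir_cko":   np.array([0.55, 0.61, 0.48, 0.66, 0.59, 0.52]),
}

# Global test across all four groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Bonferroni correction: multiply each pairwise p-value by the number of
# comparisons (capped at 1.0), mirroring a Bonferroni multiple comparison test.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: adjusted p = {min(p * len(pairs), 1.0):.4g}")
```

Pairwise results would only be interpreted when the global ANOVA is significant at P < 0.05, matching the threshold stated above.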
Moving a neodymium magnet promotes the migration of a magnetic tracer and increases the monitoring counts on the skin surface of sentinel lymph nodes in breast cancer Background We suspected that moving a small neodymium magnet would promote migration of the magnetic tracer to the sentinel lymph node (SLN). Higher monitoring counts on the skin surface before making an incision help us detect SLNs easily and successfully. The present study evaluated the enhancement of the monitoring count on the skin surface in SLN detection based on magnet movement in a sentinel lymph node biopsy (SNB) using superparamagnetic iron oxide (SPIO) nanoparticles. Methods After induction of general anesthesia, superparamagnetic iron oxide nanoparticles were injected sub-dermally into the subareolar area or peritumorally. The neodymium magnet was moved over the skin from the injection site to the axilla to promote migration of the magnetic tracer without massage. A total of 62 patients were enrolled from February 2018 to November 2018: 13 cases were subjected to magnet movement 20 times (Group A), 8 were subjected to 1-min magnet movement (Group B), 26 were given a short (about 5 min) interval from injection to 1-min magnet movement (Group C), and 15 were given a long (about 25 min) interval before 1-min magnet movement using the magnetometer's head (Group D). In all cases, an SNB was conducted using both the radioisotope (RI) and SPIO methods. The monitoring counts on the skin surface were measured by a handheld magnetometer and compared among the four groups. Changes in the monitoring count by the interval and magnet movement were evaluated. Results The identification rates of the SPIO and RI methods were 100 and 95.2%, respectively. The mean monitoring counts of Groups A, B, C, and D were 2.39 μT, 2.73 μT, 3.15 μT, and 3.92 μT, respectively (p < 0.0001; Kruskal-Wallis test). The monitoring counts were higher with longer magnet movement and with the insertion of an interval. Although there were no relationships between the monitoring count on the skin surface and clinicopathologic factors, magnet movement strongly influenced the monitoring count on the skin surface. Conclusion Moving a small neodymium magnet is effective for promoting migration of a magnetic tracer and increasing monitoring counts on the skin surface. Trial registration UMIN, UMIN000029475. Registered 9 October 2017. Background A sentinel lymph node biopsy (SNB) has been established as the standard method for staging clinically node-negative breast cancer [1,2], and an SNB technique using superparamagnetic iron oxide (SPIO) nanoparticles and a handheld magnetometer has been reported [3]. While the radioisotope (RI) and dye-combined method has been considered the standard, an SNB using SPIO has been adopted because the RI method has the disadvantages of radiation exposure [4], regulations regarding radioisotope management [5], and painful tracer injection [6,7]. However, the SPIO method also has its own drawbacks, including the time needed to identify sentinel lymph nodes (SLNs) and the long-duration persistence of SPIO pigmentation. We suspected that moving a small neodymium magnet would promote migration of the magnetic tracer to the SLN. Therefore, in a previous study, we waved a small neodymium magnet from the injection site to the axilla over the skin without massage after injection under general anesthesia during an SNB by SPIO.
We found that this approach was useful for detecting SLNs, and the identification rate with SPIO was extremely high [8,9]. One advantage of the RI or SPIO method is that uptake is assessed as a quantitative monitoring count on the skin surface. Even in SNBs using indocyanine green (ICG) in gastric cancer, the fluorescence intensity in fluorescent nodes is reportedly evaluated using an ICG intensity imaging software program (the hyper eye medical system) [10]. Higher monitoring counts on the skin surface before making an incision can help us detect SLNs easily and successfully. In the RI method, the measured value can be trusted even if the count is 1, but the reliability of monitoring counts on the skin surface is reduced when the response of the magnetometer is weak, especially when the value is < 1 μT. Changing the length of magnet movement or inserting an interval between the injection and magnet movement might be useful for increasing the count. We therefore evaluated the effect of moving the magnet on the monitoring count at the skin surface for SLN detection. Methods The study was approved by the local ethics committees and was registered in the University hospital Medical Information Network (UMIN) Clinical Registry (UMIN000029475). The subjects of the study were primary breast cancer patients diagnosed by a needle biopsy or fine-needle aspiration cytology who were ≥ 20 years old with no suspected axillary lymph node metastasis on imaging. We excluded cases with a history of breast and/or axillary surgery (such as after breast implant insertion), male breast cancer, and ipsilateral breast tumor recurrence after breast-conserving surgery. Patients who met the inclusion criteria were enrolled consecutively. Written, informed consent was obtained from 69 patients from February to November 2018. An SNB was conducted using both the RI and SPIO methods. Tc-99m phytate was injected the day before surgery at a dose of 74 MBq, or at a dose of 37 MBq if patients were injected on the day of surgery. After induction of general anesthesia, 0.5 ml of ferucarbotran (Resovist® Inj.; FUJIFILM Toyama Chemical Co., Ltd., Tokyo, Japan) was injected sub-dermally into the subareolar area (total mastectomy cases) or peritumorally (partial mastectomy cases). A neodymium magnet (Neomag, KOKUYO Co., Ltd., Osaka, Japan) was moved over the skin 20 times from the injection site to the axilla to promote migration of the magnetic tracer without massage (Fig. 1). Drug injection and magnet movement were performed by two surgeons of similar height (MM, EM). For the magnet movement procedure, the practitioner stood beside the affected breast of the patient and performed a wiper-like motion with the elbow as the axis, over a distance of about 40 cm and at a speed of about 1 s per round trip. The dye method was not performed in addition to the RI method because massage after injection was omitted in this study. Before the skin incision, the monitoring count on the skin surface was measured by a novel handheld magnetometer and confirmed twice. The magnetometer, developed by Tokyo University, contains a small neodymium magnet in its tip (Fig. 1) [11,12]. After the skin incision, if a removed node had a measurable RI count or a value exceeding 1 μT on the magnetometer, it was considered an SLN.
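The dual-tracer criterion above reduces to a simple decision rule. The sketch below encodes it, assuming that "measurable RI count" means any count above zero and that the > 1 μT cut-off applies to the ex vivo node reading as described; the function name and argument layout are illustrative, not part of the study protocol:

```python
# Sketch of the dual-tracer SLN decision rule described above.
def is_sentinel_node(ri_count: float, magnet_reading_ut: float,
                     magnet_threshold_ut: float = 1.0) -> bool:
    """Return True if a removed node qualifies as an SLN by either method."""
    has_ri_signal = ri_count > 0                         # any measurable RI count
    has_magnetic_signal = magnet_reading_ut > magnet_threshold_ut
    return has_ri_signal or has_magnetic_signal

# Example: an RI-positive node whose magnetometer reading is below the 1 uT
# cut-off (cf. the 0.8 uT node mentioned in the Discussion) still qualifies.
print(is_sentinel_node(ri_count=12.0, magnet_reading_ut=0.8))  # True
print(is_sentinel_node(ri_count=0.0, magnet_reading_ut=0.8))   # False
```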
To determine how best to achieve a count of ≥ 1.5 μT at the skin surface, the length of magnet movement was changed or an interval was inserted between the injection and magnet movement in this protocol. Several procedures were attempted in 69 cases consecutively: moving the magnet 20 times at the first step, changing the length of magnet movement at the second step, inserting an interval at the third step, and inserting another interval and performing magnet movement using the magnetometer's head at the final step (Fig. 2). We proceeded to the next step after achieving higher counts than with the previous step. Based on these four steps, we set four groups of cases subjected to the same procedure (≥ 7 cases per group). Seven cases were excluded: three cases subjected to just three minutes of magnet movement, three cases with missing time records at each check point, and one case in which a neodymium magnet instead of the magnetometer's head was used. We therefore ultimately enrolled 62 of the 69 cases. To compare the monitoring count by the length of the magnet movement or the interval from injection, 4 groups were analyzed: 13 cases were subjected to magnet movement 20 times (Group A), 8 were subjected to 1-min magnet movement (Group B), 26 were given a short (about 5 min) interval from injection to 1-min magnet movement (Group C), and 15 were given a long (about 25 min) interval before 1-min magnet movement using the magnetometer's head (Group D). The preparation time before the sterile set-up, during which the breast cancer was confirmed using ultrasound and the resection area was marked, served as the short interval. The period from the operator's hand washing to draping, just before the start of the skin incision, served as the long interval. In Groups A and B, only the monitoring count after the magnet movement was evaluated. In the 41 cases in Groups C and D, the monitoring count and time from injection were evaluated at certain check points: after injection, after the interval, before magnet movement, and after magnet movement (Fig. 3). The monitoring counts at the skin surface after magnet movement were compared among the four groups. Changes in the monitoring count by interval and by magnet movement were evaluated. To compare the four groups, the χ2 test was used for variables presented as numbers of cases, and the Kruskal-Wallis test was used for those presented as average values. When comparing average counts between two groups, the Mann-Whitney test was used. Wilcoxon's signed-rank test or Friedman's test was used when comparing average counts within groups. The software program "Stat View for Windows version 4.54" (Abacus Concepts, Inc., Berkeley, CA, USA) was used for all analyses. Statistical associations were deemed significant at P-values < 0.05. Results Table 1 shows the characteristics of the cases, and Table 2 shows the results of the SNBs. The identification rates with the SPIO and RI methods were 100 and 95.2%, respectively. It took an average of 79.9 min from the injection of ferucarbotran to the removal of the SLN. Because we performed the SNB after making the skin flap, the average time was slightly long.
Fig. 1 legend: Magnet movement by a neodymium magnet. a A neodymium magnet was moved over the skin from the injection site to the axilla repeatedly to promote migration of the magnetic tracer without massage after the magnetic tracer was injected. b A neodymium magnet (Neomag, KOKUYO Co., Ltd.).
c The handheld magnetometer developed by Tokyo University, which contains a small neodymium magnet in its tip. d The monitoring count on the skin surface was measured by the handheld magnetometer.
Fig. 3 legend: Measurement of the monitoring count and timing of the magnet movement in the 4 cohorts. In Groups A and B, only the monitoring count after magnet movement was evaluated. In the 41 cases in Groups C and D, the monitoring count and time from injection were evaluated at certain check points, such as after injection, after an interval, before magnet movement, and after magnet movement. Changes in the monitoring count by interval and by magnet movement were evaluated.
Fig. 2 legend: The consecutive steps and recruited cases in this study. Seven cases were excluded: three cases subjected to just three min of magnet movement, three cases with missing time records at each check point, and 1 case in which a neodymium magnet instead of the magnetometer's head was used. We ultimately enrolled 62 out of the 69 cases.
Table 3 shows the results of the comparison among the four groups. The mean monitoring counts of Groups A, B, C, and D were 2.39 μT, 2.73 μT, 3.15 μT, and 3.92 μT, respectively (p < 0.0001; Kruskal-Wallis test). The monitoring counts were higher with longer magnet movement and with the insertion of an interval. The relationship between the time from injection and the monitoring count in Groups C and D is shown as a scattergram in Figs. 4 and 5, respectively. Sequential lines indicate each evaluated case. Symbols show the mean values at check points, such as after injection, after an interval, before magnet movement, and after magnet movement. Although there were some cases in which the monitoring counts were ≥ 1.5 μT after injection, the monitoring counts increased after magnet movement in all cases. Sequences of the mean values at each check point in Groups C and D are shown in Fig. 6. The monitoring count gains per minute at each time point are also shown. At the same time points, there were no marked differences between Groups C and D. However, within each group, the monitoring count increases were significantly greater during magnet movement than after injection or during an interval. The monitoring counts increased gradually with time, but they showed a greater increase during short periods of magnet movement than without such movement. Although magnet movement strongly influenced the monitoring count at the skin surface, there were no remarkable relationships between the monitoring count at the skin surface and clinicopathologic factors (Table 4).
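As a minimal sketch of the four-group comparison reported above (the study itself used Stat View), the Kruskal-Wallis test followed by pairwise Mann-Whitney tests can be reproduced in Python. The per-case values below are simulated placeholders centred on the reported group means, not patient data:

```python
# Sketch of the nonparametric four-group comparison of skin-surface
# monitoring counts. Values are simulated around the reported means.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
counts = {
    "A": rng.normal(2.39, 0.6, 13),  # magnet moved 20 times
    "B": rng.normal(2.73, 0.6, 8),   # 1-min magnet movement
    "C": rng.normal(3.15, 0.6, 26),  # short interval + 1-min movement
    "D": rng.normal(3.92, 0.6, 15),  # long interval + magnetometer head
}

# Global test across all four groups.
h_stat, p_kw = stats.kruskal(*counts.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4g}")

# Pairwise follow-up comparisons between groups.
for a, b in combinations(counts, 2):
    _, p = stats.mannwhitneyu(counts[a], counts[b], alternative="two-sided")
    print(f"Group {a} vs Group {b}: Mann-Whitney p = {p:.4g}")
```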
Discussion An SNB has been established as the standard method for staging clinically node-negative breast cancer [1,2]. The benefits of an SNB performed by SPIO include the lack of radiation exposure and the fact that it can be performed at any hospital, regardless of the presence of a radioisotope department. Indeed, with the SPIO method, the location of SLNs or non-palpable breast tumors can also be identified by a detector before a skin incision is made, similar to the RI method [3,[13][14][15][16][17]. Thill reported that an SNB using SentiMag® and Sienna+® (Endomagnetics, Inc., Austin, TX, USA) was useful in a multicenter study using magnetic techniques to detect SLNs for breast cancer [15]. In the present study, an SNB was performed using ferucarbotran (Resovist® Inj.; FUJIFILM Toyama Chemical Co., Ltd.) and a novel handheld magnetometer developed by Tokyo University. This method has drawbacks, including the time needed to identify SLNs and the long-duration persistence of SPIO pigmentation. We therefore performed magnet movement using a small neodymium magnet to promote the migration of the magnetic tracer in a previous study [9]. That study involved 69 patients evaluated from March 2017 to January 2018. After the induction of general anesthesia, 0.3 ml of ferucarbotran was injected into the subareolar area or peritumorally. The identification rate was 98.6% (68/69) with RI and 100% (69/69) with SPIO. The identification rate using the SPIO method with magnet movement was estimated to be better than 95% (90% confidence interval: 95.75-100%). In contrast, the identification rate of the RI method was slightly low (95.2%) in the present study. However, in that previous study, the identification rate was 98.6% (68/69, 90% confidence interval: 93.3-99.9%) with RI in our hospital, and the value of 95.2% falls within that confidence interval. When using the RI method, it is easy to detect SLNs because the RI probe can detect the radiation beam from SLNs. However, it is slightly more difficult to detect SLNs by SPIO, as the magnetometer must seek out a small tracer collection point. The purpose of the present protocol was to determine how best to obtain a higher count at the skin surface. To this end, the usefulness of magnet movement was evaluated from the perspective of the monitoring count at the skin surface. After increasing the dose of ferucarbotran from 0.3 ml (previous study) to 0.5 ml (Group A in the present study) and moving the magnet 20 times, the mean monitoring count increased significantly from 1.37 μT to 2.39 μT (p < 0.0001, Mann-Whitney test), and the identification rate of SPIO was 100% [9]. None of the patients showed any pigmentation despite the dose escalation. In subsequent steps, the length of magnet movement was changed, or an interval was inserted between injection and magnet movement, to obtain a higher count at the skin surface. The monitoring counts increased with longer magnet movement as well as with the insertion of an interval. The monitoring counts of the resected SLNs were comparable to those at the skin surface. These increased monitoring counts at the skin surface helped us detect SLNs easily and successfully. Ultimately, 1-min magnet movement with the magnetometer's head approximately 30 min after tracer injection was found to be the best procedure for obtaining a higher monitoring count. A small neodymium magnet is contained in the tip of the magnetometer developed by Tokyo University, and the magnetic force of this magnet is about five times as strong as that of the Neomag neodymium magnet. Several factors, including obesity and age [18,19], have been reported to affect the outcome of an SNB, but no relationships were noted between the SPIO method and these clinicopathologic factors in the present study. Furthermore, of the 19 SLNs that were histopathologically positive, 13 (68.4%) were identified by RI, and 19 (100%) were identified by SPIO. There was one case in which an SLN that could not be identified by the RI method but was identified by the SPIO method was positive for metastasis. Movement of a small neodymium magnet to promote migration of the magnetic tracer is thus considered a promising method to employ during an SNB using SPIO, based on the identification rate, the enhanced monitoring count, and the precise and optimal detection of SLNs.
The principles of an SNB are that injected small molecules pass through the lymphatic vessels from the injection site and leach into the nodes with the lymphatic flow. Thus, the outcome of an SNB is affected by several factors, including tracer infiltration into the lymphatic vessels, the flow of lymph, and lodging in the nodes. To improve tracer infiltration into the lymphatic vessels, a longer period from injection to detection [19,20] and massage after injection [21] have been applied previously. While these approaches did result in a small amount of tracer leaching into the nodes, the majority of the tracer failed to do so, instead spreading into the surrounding breast tissue. In such cases, skin pigmentation can occur clinically if the tracer is colored [20]. The lymph flow is affected not only by patient factors, including age and obesity, but also by tracer factors, such as particle size [22]. While ferucarbotran is small enough to flow smoothly, it has also been found to be taken up by neutrophils through phagocytosis and lodged in the lymph nodes. Inducing movement using a neodymium magnet is useful and is expected to localize the tracer to SLNs smoothly and reliably when performing SNBs by SPIO. Because an SNB is an intraoperative examination performed under general anesthesia, the patients could not be imaged twice (once with magnet movement and once without it) in order to ensure that the same lymph nodes were marked. In addition, the detection of SLNs is based on the priority of the lymph nodes that receive the lymph flow, so the number of SLNs may differ depending on the timing of observation, and the same result may not be obtained even when performing imaging evaluations, such as computed tomography. From the perspective of priority, the present results suggest that the RI and SPIO methods have similar priorities. This is because all but one lymph node detected by the RI method were also detected by the SPIO method, and the lymph node detected only by the RI method still had a count of 0.8 μT by the SPIO method. In addition, there is some concern as to whether magnet movement produces a non-physiological lymphatic flow. However, no infiltration of the magnetic tracer into the skin was detected, and the infiltration of the magnetic tracer was not concentrated in the direction of the magnet movement. Furthermore, it was observed with the naked eye that the magnetic tracer reached the margins of the resected lymph node, similar to the dye method, and this could also be confirmed histologically in the subcapsular sinus of the lymph node. We therefore believe that magnet movement did not create new anatomical lymph vessels but instead simply changed the speed of the physiological flow in existing lymph vessels. Several limitations associated with the present study warrant mention. The number of patients in each group was not set in advance because the method changed while devising new ways to increase the count at the skin surface. Furthermore, because of the consecutive nature of the enrollment, background factors, such as age and obesity, could not be balanced. An SNB is an intraoperative examination performed under general anesthesia that ends with the removal of lymph nodes and thus cannot be repeated in the same patient. Finally, this was not a randomized controlled trial. The present findings suggest that, when performing an SNB by the SPIO method, the addition of magnet movement facilitated the identification of SLNs before surgery.
This approach could also be performed in a relatively short time after the induction of general anesthesia in a hospital without a radiation-controlled area. Patients can also avoid the pain of the injection and the radiation exposure that must be endured with the RI method. The RI method is the standard for an SNB, and the amount of radiotracer may be minimal. However, it has been reported that the operator reaches the maximum allowable exposure level for 1 year (i.e. 1 mSv) after 333 operations [4]. Magnet movement accelerated the flow of the magnetic tracer in the lymph vessels and increased its accumulation in the lymph nodes. This approach may also be usable as a new drug delivery system for increasing the concentration of specific drugs in specific organs. Conclusion In an SNB by SPIO, magnet movement using a small neodymium magnet over the skin from the injection site to the axilla, without massage after injection, was performed under general anesthesia in order to promote migration of the magnetic tracer. The movement was evaluated based on the monitoring count at the skin surface, and this approach was found to be useful for promoting the migration of the magnetic tracer and thereby obtaining higher monitoring counts at the skin surface. Magnet movement during an SNB by SPIO can be performed easily and reliably during surgery without causing pigmentation.
C-reactive protein as a biomarker of response to inhaled corticosteroids among patients with COPD Background: Inhaled corticosteroids (ICS) have been reported to attenuate CRP levels among patients with COPD. We evaluated the risk of moderate-to-severe exacerbations, severe exacerbations and all-cause mortality among patients with COPD exposed to ICS, stratified by CRP levels, compared to never users. ICS exposure was determined time-dependently, as current, recent, past or never use. Results: 17,722 subjects diagnosed with COPD met the inclusion criteria. Among current or never ICS users with elevated CRP levels, we found no significantly reduced risk of moderate-to-severe or severe exacerbations. For patients currently exposed to ICS with CRP levels ≥8 mg/L, there was no reduced risk of moderate-to-severe exacerbations (adjusted hazard ratio [adj. HR] 0.99; 95% confidence interval [CI] 0.76–1.31) or severe exacerbations (adj. HR 1.52; 95% CI 0.71–3.27). However, we found an increased risk of all-cause mortality among COPD patients with CRP levels ≥8 mg/L irrespective of ICS exposure. Conclusion: We did not find a reduced risk of moderate and/or severe COPD exacerbations among COPD patients with varying CRP levels currently exposed to ICS. However, low-grade systemic inflammation was associated with all-cause mortality among COPD patients. Introduction Chronic obstructive pulmonary disease (COPD) is a major cause of morbidity and mortality worldwide [1] and is projected to be the third leading cause of death by the year 2030 [2]. Due to the heterogeneity of COPD, there is a growing interest in biomarkers and their potential to guide therapy among sub-groups of patients with COPD [3]. Inhaled corticosteroids (ICS) are additionally used to suppress inflammation and reduce exacerbation risk in patients unresponsive to bronchodilators. Hurst and colleagues assessed 36 biomarkers with the potential for diagnosis and management of COPD, one of which was C-reactive protein (CRP) [4]. CRP is an acute-phase protein synthesised predominantly by hepatocytes; it acts by binding to receptors of phagocytes and affects apoptosis and necrosis [5]. The ECLIPSE study illustrated that after 3 years of follow-up, all-cause mortality and exacerbation frequency were higher in persistently inflamed COPD patients (as measured by an increase in various biomarkers including CRP) compared to non-inflamed patients [6]. COPD is not an isolated condition with pathological mechanisms localised specifically in the lungs, but a heterogeneous disease with chronic inflammatory processes in various tissues accompanied by changes in various biomarkers [3]. Reports from clinical studies have suggested that ICS attenuate CRP levels among patients with moderate to severe COPD [7][8][9], and elevated CRP levels have been linked with increased risk of severe exacerbations and mortality [10,11].
The Copenhagen City Heart Study followed COPD patients, a small proportion of whom were exposed to ICS, over a period of 8 years, and reported that patients with elevated CRP levels in combination with other biomarkers were at increased risk of severe exacerbations [12]. A randomised controlled trial (RCT) that enrolled more than 6,000 COPD patients over a 3-year period found that ICS exposure reduced COPD exacerbations but not all-cause mortality [13]. Furthermore, a meta-analysis of 15 studies showed that high baseline CRP was associated with a higher risk of mortality among COPD patients [11]. The use of ICS has been reported to significantly reduce CRP levels [7][8][9]; hence, we speculate that this attenuation of CRP will result in a decreased risk of exacerbations and mortality. Biomarker-guided therapy has the potential to help reduce the risk of fractures and pneumonia associated with ICS exposure by identifying patient groups more likely to benefit from ICS treatment [3]. No longitudinal study has yet evaluated the role of CRP in guiding ICS therapy among patients with COPD in a large general practice setting. Therefore, the aim of this study was to evaluate the risk of moderate-to-severe exacerbations, severe exacerbations and all-cause mortality among patients with COPD currently exposed to ICS and never ICS users, stratified by CRP categories, compared to never ICS users in the lowest CRP category. Data source Data for this study were obtained from the Clinical Practice Research Datalink (CPRD). CPRD holds computerised medical records of 674 primary care practices in the United Kingdom. The database provides detailed information on drug prescriptions, clinical events, demographics, specialist referrals, and hospital admissions [14]. In addition, laboratory test results are available, including biomarker information. Data collection began in January 1987, and over 11 million persons are currently included [15]. This database has been used in various studies among COPD patients [16][17][18]. Approval for this study was obtained from the Independent Scientific Advisory Committee (ISAC) of the Medicines and Healthcare products Regulatory Agency (protocol no: 18_323R). Study population For this study we selected all newly diagnosed COPD subjects aged 40 years and older, as recorded by a first Read code during our study period of January 1, 2005 (after the introduction of the Quality and Outcomes Framework (QOF)) to January 31, 2014. Subjects were followed from the date of their COPD diagnosis (index date) until the end of data collection, date of death, end of the study, or occurrence of the outcome of interest, whichever came first. Subjects needed to have at least one CRP measurement before the index date to be included in the study. Subjects with a history of asthma at baseline were excluded. Subjects with acute exacerbations of COPD or oral glucocorticoid use within 30 days prior to the index date were also excluded. Exposure Each patient's follow-up time was divided into fixed periods of 90 days. Exposure to ICS was determined time-dependently during follow-up. Prior to the start of each interval, ICS exposure was determined based on the time since the most recent prescription, and classified as current (1-30 days), recent (31-60 days), past (> 60 days) or never use. Never users of ICS comprised patients without any ICS exposure during follow-up. Subjects could move between exposure groups over time.
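As a minimal sketch of the time-dependent exposure classification just described, the following Python function classifies exposure at the start of each 90-day interval from a toy prescription record; the function name and data layout are invented for illustration, and the study's "never" category additionally requires no ICS prescription at any point during follow-up.

```python
# Sketch of per-interval ICS exposure classification; the prescription
# record and function are invented for illustration.
from bisect import bisect_right

def classify_exposure(interval_start: int, ics_rx_days: list) -> str:
    """Classify ICS exposure at the start of a 90-day interval.

    current: last prescription within 30 days of the interval start;
    recent: 31-60 days; past: >60 days; never: no prescription so far.
    """
    i = bisect_right(ics_rx_days, interval_start)  # prescriptions on/before start
    if i == 0:
        return "never"
    days_since = interval_start - ics_rx_days[i - 1]
    if days_since <= 30:
        return "current"
    if days_since <= 60:
        return "recent"
    return "past"

ics_rx_days = sorted([10, 95, 400])          # days since the index date
for start in range(0, 540, 90):              # interval starts: 0, 90, ..., 450
    print(start, classify_exposure(start, ics_rx_days))
```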
Our exposure groups of interest included current users (subjects exposed to ICS within the 30 days prior to the start of an interval) and never users (subjects with no ICS use within any interval). Current and never users of ICS were further stratified by the most recent CRP measurement, with the CRP levels classified into categories (category 1 (0-3 mg/L), category 2 (4-7 mg/L) and category 3 (≥8 mg/L)). We derived the CRP categories by splitting the CRP distribution at the 33.3rd and 66.7th centiles using the PROC UNIVARIATE procedure (SAS 9.4). CRP values were assessed time-dependently. We introduced a 12-month look-back period to determine CRP levels during follow-up; this choice was based on the mechanistic plausibility of ICS attenuating CRP over this time period [9]. Patients with missing CRP measurements were classified into a separate "missing" category. When two or more CRP measurements were recorded on the same date, the mean CRP value was calculated. We use the term "elevated CRP" to refer to CRP levels > 3 mg/L in this work. Outcome The primary outcome of interest was moderate-to-severe exacerbation, which was defined using validated definitions (H312200, H3y1.00) for acute exacerbations of COPD from the clinical and referral files [19]. The secondary outcome was a severe exacerbation, defined as a COPD-related hospitalisation/accident and emergency visit using Read codes (8H2R.00, 66Yi.00) from either the clinical or referral files, or the Read codes (H312200, H3y1.00) for acute exacerbations from the referral file. This definition of severe exacerbations is based on the fact that about 93% of patients with an acute exacerbation of COPD presenting to the emergency department in the UK end up being hospitalised, with an average length of hospital stay of 1.25 days, which qualifies as a severe exacerbation [20]. The primary and secondary outcomes were not fully mutually exclusive. We also evaluated the risk of all-cause mortality. Referral files contain referral details recorded by general practitioners (GPs), while the clinical file contains all medical data entered by the GP [21]. Covariates Potential confounders were assessed time-dependently, with the exception of gender, smoking status, alcohol use, and body mass index, which were determined at baseline. The following covariates were considered as potential confounders and identified at the start of each interval: a history of congestive heart failure, ischemic heart disease, anxiety, chronic liver disease, cancer excluding non-melanoma skin cancer, stroke, rheumatoid arthritis, diabetes mellitus, hypertension, inflammatory bowel disease, solid organ transplant, atopic dermatitis, renal dialysis, human immunodeficiency virus or osteoporosis. In addition, the use of the following drugs within 6 months prior to the start of an interval was considered: antihistamines, proton pump inhibitors, antipsychotics, or antidepressants [22][23][24][25]. We statistically adjusted our analyses for proxy indicators of the severity of obstructive airway disease, defined as previously as the use of short- and long-acting beta-agonists, short- and long-acting anti-muscarinic agents, xanthine derivatives, oxygen or oral corticosteroids [26,27]. In addition, the use of antibiotics for COPD exacerbations in the month prior to an interval was considered as a potential confounder [28].
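A rough pandas stand-in for the CRP categorisation above (the paper used SAS PROC UNIVARIATE); the CRP series is synthetic, and the hard-coded category boundaries are the ones reported in the text.

```python
# Pandas stand-in for deriving tertile cut points and applying the reported
# CRP categories with a 12-month look-back; the series is synthetic.
from typing import Optional
import numpy as np
import pandas as pd

crp = pd.Series(np.random.default_rng(2).lognormal(1.5, 1.0, 5000))
q1, q2 = crp.quantile([0.333, 0.667])  # 33.3rd and 66.7th centiles
print(f"derived cut points: {q1:.1f} and {q2:.1f} mg/L")

def crp_category(most_recent_crp: Optional[float], days_since: float) -> str:
    """Assign the CRP category used in the analysis.

    Measurements older than the 12-month look-back window count as missing;
    category boundaries (0-3, 4-7, >=8 mg/L) are those reported in the text.
    """
    if most_recent_crp is None or days_since > 365:
        return "missing"
    if most_recent_crp < 4:
        return "category 1 (0-3 mg/L)"
    if most_recent_crp < 8:
        return "category 2 (4-7 mg/L)"
    return "category 3 (>=8 mg/L)"

print(crp_category(5.2, days_since=120))   # -> category 2 (4-7 mg/L)
```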
Statistical analysis We evaluated the risk of moderate-to-severe exacerbations, severe exacerbations and all-cause mortality stratified by ICS use and CRP levels using Cox regression analysis (SAS 9.4). Current and never ICS users were stratified by CRP levels. Categories were made to reduce the impact of outlying values and to account for the positively skewed distribution of CRP. The reference category for this study included patients who were never exposed to ICS with the lowest CRP levels (0-3 mg/L). Furthermore, we compared the risk of study outcomes between never users in category 2 (4-7 mg/L) and current ICS users in category 2 (4-7 mg/L), and between never ICS users in category 3 (≥8 mg/L) and current ICS users in category 3 (≥8 mg/L), using the Wald test, and we only reported significant differences between the groups in our fully-adjusted analyses at p-values less than 0.05. Potential confounders were included in the final model if they changed the beta-coefficient for the association between current ICS use versus never use and the outcome of interest by at least 5%, or when consensus about inclusion existed within the team of researchers, supported by clinical evidence from the literature. For the baseline characteristics, ICS exposure status was determined prior to the start of follow-up (index date): ICS users were patients with an ICS prescription ever before, and non-using patients were all patients without a record of ICS ever before. Sensitivity analysis We repeated the above-mentioned analyses but broke down the highest CRP level category into patients with a serum CRP level of 8-19 mg/L and those with values ≥20 mg/L. We chose this threshold because we felt that it may better reflect the acute-phase inflammatory process [29]. Results We identified 213,561 patients with COPD, of whom 17,722 met the inclusion criteria (Fig. 1). Table 1 shows the baseline characteristics of all COPD patients. At baseline, 5162 COPD patients were exposed to ICS and 12,560 were not. Over half of the ICS users were female, with a mean age of 69.1 years. At baseline, the mean CRP levels were 12.9 (±29.0) mg/L for ICS users with COPD and 12.4 (±30.0) mg/L for COPD patients who had not used ICS. About 29.3% of ICS users were obese versus 24.1% of ICS non-using subjects, and 35.6% of ICS users were current smokers compared to 45.0% among subjects not exposed to ICS. Table 2 shows that the risk of moderate-to-severe exacerbations was not different between COPD patients who were current ICS users with CRP levels of 0-3 mg/L and never ICS users with low serum CRP (0-3 mg/L; adjusted hazard ratio [adj. HR] 0.95, 95% confidence interval [CI] 0.70-1.27). In the other CRP categories (4-7 mg/L or ≥8 mg/L), Wald tests showed that the risk of moderate-to-severe COPD exacerbations was not different between ICS users and never users with COPD. Regardless of ICS exposure status, the risk of moderate-to-severe COPD exacerbations was not statistically different between the CRP categories (0-3 mg/L vs. 4-7 mg/L vs. ≥8 mg/L). Table 3 shows that these findings were largely similar for the risk of severe COPD exacerbations.
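The Cox model with time-dependent covariates described under "Statistical analysis" above was fitted in SAS 9.4; purely as an illustrative analogue, a sketch in Python with the lifelines package (synthetic data, invented column names) might look like this:

```python
# Illustrative analogue of the time-dependent Cox model described above;
# the authors used SAS 9.4, and this lifelines sketch uses synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(42)
rows = []
for pid in range(300):                     # 300 synthetic patients
    start = 0
    while start < 720:                     # follow-up split into 90-day intervals
        stop = min(start + 90, 720)
        current_ics = int(rng.integers(0, 2))   # re-assessed each interval
        crp_cat3 = int(rng.integers(0, 2))      # CRP >= 8 mg/L this interval
        event = int(rng.random() < 0.04 + 0.04 * crp_cat3)
        rows.append((pid, start, stop, event, current_ics, crp_cat3))
        if event:
            break
        start = stop

df = pd.DataFrame(rows, columns=["id", "start", "stop", "event",
                                 "current_ics", "crp_cat3"])
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # adjusted hazard ratios appear as exp(coef)
```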
All-cause mortality All-cause mortality among COPD patients with low CRP levels (0-3 mg/L) was not different between current ICS users and never users (Table 4). This finding was consistent when ICS use was compared with never use in the other CRP categories (4-7 mg/L or ≥8 mg/L). However, all-cause mortality among COPD patients with CRP levels ≥8 mg/L was approximately three-fold significantly increased compared to COPD patients with low serum CRP levels (0-3 mg/L; adj. HR 2.81; 95% CI 2.20-3.58 for ICS non-using COPD patients). Sensitivity analysis In the sensitivity analysis, never or current use of ICS among COPD patients with CRP levels of ≥20 mg/L was not associated with a significant reduction in the risk of moderate-to-severe or severe COPD exacerbations compared to never ICS users with low serum CRP levels (0-3 mg/L, Tables S1 and S2). However, we found that the risk of all-cause mortality among COPD patients with CRP levels ≥20 mg/L was increased 3.5- to 4-fold among both ICS-using and ICS non-using patients (adj. HR 4.03; 95% CI 3.11-5.22 among never users of ICS). Main findings In this study, the risk of moderate and/or severe COPD exacerbations or all-cause mortality was comparable between ICS users and non-users, irrespective of CRP levels. Regardless of ICS exposure status, the risk of moderate and/or severe COPD exacerbations was not different between CRP categories (0-3 mg/L vs. 4-7 mg/L vs. ≥8 mg/L), whereas all-cause mortality was approximately three-fold increased among patients with CRP levels ≥8 mg/L as compared to COPD patients with low (0-3 mg/L) CRP serum levels. Exacerbations of COPD are important drivers of COPD-related hospitalisations and mortality [30]. Very few researchers have evaluated the role of CRP in guiding ICS use in the improvement of moderate-to-severe or severe exacerbations. In a large population-based prospective study that included over 6000 COPD patients, Thomsen et al. [31] reported that patients with elevated CRP levels, fibrinogen, and leucocyte counts had an increased risk of exacerbations. However, when the investigators evaluated each elevated biomarker alone (including CRP), no significantly increased risk of frequent exacerbations was found, in line with our findings. It is important to note that the authors stated that the use of ICS was "relatively rare" among the patients enrolled and might have affected their findings (only 3% of the study population was exposed to ICS at baseline) [31]. In our study, we noted an increased risk of moderate and/or severe exacerbations when we adjusted only for age and sex (Table 2); following adjustments for all potential confounders, this risk disappeared.
This is the first study to specifically evaluate the risk of moderate-to-severe exacerbations, severe exacerbations and all-cause mortality among COPD patients currently exposed to ICS stratified by CRP levels. A randomised controlled trial conducted across 11 centres among 289 COPD patients treated with ICS, with or without LABA, found no reduction in CRP levels although serum surfactant protein D levels decreased, suggesting that ICS treatment affects lung-specific biomarkers rather than systemic inflammatory markers [8]. Furthermore, the ECLIPSE investigators found that elevated levels of CRP, fibrinogen and leucocyte counts were associated with the occurrence of exacerbations in the first year in a univariate analysis [32]. However, the effect disappeared following multivariate adjustments, except for leucocyte counts. Approximately 30% of patients with COPD exacerbations have been reported to have normal CRP levels [33], which questions the validity of CRP as a robust biomarker. Furthermore, De Torres et al. [34] reported that current exposure to glucocorticoids did not influence CRP levels, contrary to previous reports [7]. Consistent with our study, there was huge variation in CRP levels around the mean, suggesting that CRP cannot be used as a clinical biomarker [35]. The exact mechanism by which ICS interact with CRP in COPD remains unclear. However, interleukin 6, a potent regulator of CRP generation known to be present in high concentrations in the serum and expiratory condensates of patients with COPD, can be down-regulated by corticosteroids [36]. All-cause mortality is an important end-point, which is seldom assessed among COPD patients, mostly due to the short duration of patient follow-up in most studies. In our study we found an increased risk of all-cause mortality among COPD patients with elevated CRP levels (≥8 mg/L) compared to patients with low CRP. Consistent with our finding, the Lung Health Study, which enrolled over 4800 patients with mild to moderate COPD stratified by CRP quartiles, reported that patients with elevated CRP levels were at increased risk of all-cause mortality [37]. Similarly, a cohort study of 1302 patients with airflow limitation with a median follow-up of 8 years in Denmark reported a greater risk of COPD deaths among patients with high CRP levels compared to patients with lower CRP levels [12]. However, a multi-center study of 218 patients with stable COPD found that elevated CRP levels did not increase the risk of all-cause mortality [34]. This might be due to the low statistical power of their study. More recently, a meta-analysis and systematic review of 15 studies, which included 11,180 COPD patients, found that higher baseline CRP was associated with a higher risk of mortality [11]. Cardiovascular events and cancers accounted for most deaths [11,38]. With no clear CRP cut-off, researchers have called for the adoption of a clearly defined threshold for clinical and observational studies, in order to truly reveal the potential of CRP in COPD management [39]. The heterogeneity of systemic inflammation has been recognised in patients with COPD. This includes high heterogeneity in serum CRP, fibrinogen, and TNF, which is largely attributed to host or disease-related factors [40]. Elevated CRP levels have been reported in obese patients, and epidemiological data show an age-related increase in inflammatory biomarkers [40].
In the general population, a dose-related effect has been reported between cigarette smoking and increased levels of CRP and fibrinogen [41,42], and CRP levels remain elevated for approximately two decades after smoking cessation [43]. Although a better understanding of the effects of confounders on CRP levels now exists, the stability and variability of CRP remain critical to its relevance as a guide for therapeutic interventions in COPD. The ECLIPSE study reported CRP as the least stable biomarker assessed, with only 21% of patients having a 3-month measurement within 25% of baseline values [44]. Furthermore, CRP is known to have a half-life in plasma of 19 h, and the National Health and Nutrition Examination Survey (NHANES) observed significant short-term variability (over approximately 2.5 weeks) in CRP levels, particularly at high values [45]. These factors make CRP a poor biomarker for personalised management of COPD. A major strength of this study was the inclusion of patients from one of the world's largest primary care databases, thus providing a large population-based cohort of COPD patients with CRP measurements followed over time with fair recording of all-cause mortality [46,47]. Second, in our study we used validated definitions for moderate and/or severe exacerbations of COPD, using Read codes reported to have a 96% positive predictive value for identifying an acute exacerbation within the CPRD [19]. Nevertheless, we may have missed a considerable number of exacerbations, which may be miscoded, e.g. as respiratory tract infections such as pneumonia. Third, time-varying classification of exposure to ICS, CRP and covariates allowed us to conduct an "on-treatment analysis", which results in less non-differential misclassification of exposure than an 'intention to treat analysis', which ignores ICS exposure during follow-up. Lastly, data on confounding factors such as smoking status, BMI, comorbidities, and drugs prescribed were available, and these covariates were adjusted for in our models. Limitations of our study include a potential for confounding by disease severity, as we lacked information on COPD disease stage. Confounding by disease severity is a multifactorial phenomenon that may act in different directions. Although we did not have information on disease severity, we adjusted for proxies of COPD disease severity. While we excluded asthma patients, it was impossible to rule out the inclusion of patients with reversible airflow limitation. CRP measurements are not routinely collected as part of the diagnosis of COPD; they are most likely requested by the GP on suspicion of bacterial infection, which might have introduced misclassification bias. We expect this bias to be non-differential between COPD patients exposed to ICS and ICS never users, leading to estimates biased towards the null. While this might have masked the true risk of moderate and/or severe exacerbations among patients with elevated CRP levels, we found significant associations between elevated CRP levels and all-cause mortality, suggesting that our results were not materially affected by this bias. Furthermore, because ICS use is associated with pneumonia, ICS users might have higher CRP levels, potentially resulting in differential misclassification. This would bias estimates towards or away from the null.
However, considering the similarities in mean CRP levels between "ever before" ICS users and ICS non-using patients, it is less likely that this bias had a huge impact on our estimates. We had a substantial number of patients with missing CRP serum levels during follow-up; this was due to the choice of a 1-year look-back period for CRP assessment. We could not determine cause-specific death, such as COPD-related mortality, considering that only approximately 58% of general practices have consented to the required linkage [15], which would substantially diminish the power of our study to accurately detect study outcomes. We could not determine the criteria for ICS prescription in this study, as a significant dissociation exists between clinical recommendations/guidelines and actual prescribing practice by GPs [48]. While the choice of this look-back window results in missing CRP counts, this choice was made deliberately, considering the mechanistic plausibility of ICS attenuating CRP and exacerbations over this period [9]. The choice of a longer period would have resulted in fewer missing CRP counts during follow-up but would not be realistically plausible in clinical settings. Key message In conclusion, we did not find a reduced risk of moderate and/or severe COPD exacerbations among COPD patients with varying CRP levels currently exposed to ICS. However, we found an increased risk of all-cause mortality among patients with elevated (≥8 mg/L) CRP levels irrespective of ICS use. There is tremendous enthusiasm and effort to improve precision medicine using biomarkers in COPD. While CRP might be a useful biomarker for COPD prognosis, it does not seem to have the potential to guide ICS therapy in COPD management. What is known about this subject 1. Reports from clinical studies have suggested that ICS attenuate CRP levels among patients with moderate to severe COPD. 2. Elevated CRP has been associated with an increased risk of moderate-to-severe exacerbations and mortality. 3. ICS help to improve exacerbation outcomes among patients with COPD. What this study adds 1. Irrespective of ICS exposure, patients with persistently elevated CRP levels had an increased risk of mortality. 2. Among patients with persistently elevated CRP, current ICS exposure did not reduce the risk of moderate-to-severe or severe exacerbations. Data availability statement Research data are not shared as they belong to the CPRD in the UK and only licensed institutions are given access to the data.
Neutral current neutrino oscillation via quantum field theory approach Neutrino and anti-neutrino states coming from the neutral current or $Z_0$ decay are blind with respect to the flavor. The neutrino oscillation is observed and formulated when its flavor is known. However, it has been shown that we can see a neutrino oscillation pattern for $Z_0$ decay neutrinos provided that both the neutrino and the anti-neutrino are detected. In this paper, we restudy this oscillation via the quantum field theory approach. Through this approach, we find that the oscillation pattern ceases if the distance between the detectors is larger than the coherence length, even while both the neutrino and antineutrino states may be coherent. Also, the uncertainty of the source (region of $Z_0$ decay) does not have any role in the coherency of the neutrino and antineutrino. Introduction While neutrino oscillation is a window to new physics, it is also one of the most interesting quantum mechanical phenomena. Historically, neutrino oscillation has been studied for more than 50 years and has been confirmed experimentally for more than 10 years [1]. The observation of neutrino oscillation depends on the coherency of neutrinos during production, propagation and detection [2,3]. The production and detection coherence conditions are satisfied provided that the intrinsic quantum mechanical energy uncertainties during these processes are large compared to the energy difference $\Delta E_{jk}$ of the different neutrino mass eigenstates: $$\sigma_E \gg \Delta E_{jk},$$ where $\sigma_E = \min\{\sigma_E^{\mathrm{prod}}, \sigma_E^{\mathrm{det}}\}$. This condition implies that, during the production and detection processes, one cannot discriminate between the neutrino mass eigenstates. Conservation of the coherency during propagation means that the wave packets describing the mass eigenstates overlap from the production until the detection regions. The wave packets describing the different neutrino mass eigenstates propagate with different group velocities. After propagating a distance $L$, the separation of the different mass wave packets is $$\Delta x = \frac{|\Delta v_g|}{v_g}\,L .$$ Consequently, the coherent propagation is guaranteed provided that $$\Delta x \lesssim \sigma_{x\nu},$$ where $v_g$ is the average group velocity of the wave packets of the different neutrino mass eigenstates, $\Delta v_g$ is the difference of their group velocities, and $\sigma_{x\nu}$ is their common effective spatial width. In other words, similar to the double-slit experiment, if one could determine which mass eigenstate is created or detected, the neutrino oscillation pattern would disappear. For instance, conservation of energy and momentum implies that exact determination of the energy-momentum of the charged leptons leads to the determination of the mass eigenstate of the corresponding neutrino (in fact, exact momentum conservation causes the neutrino state to be kinematically entangled with the corresponding charged lepton state), and the neutrino oscillation ceases [4]. The kinematic analysis shows that a neutrino state created through charged current interactions has a specific flavor. For instance, a muon neutrino is created in pion decay, while the neutrino accompanying the electron in muon decay is an electron neutrino. In contrast, neutral current or $Z_0$ decay is blind with respect to the neutrino flavors. In other words, every flavor eigenstate as well as every mass eigenstate is created with the same probability. However, there is another noticeable property: the neutrino and antineutrino states are entirely correlated, in the sense that they have the same flavor. It has been shown that if both the neutrino and the antineutrino are detected, one can observe a neutrino oscillation pattern between the detectors [5].
Nevertheless, if only the neutrino or only the antineutrino is detected, the neutrino oscillation ceases; it is therefore a realization of the Einstein-Podolsky-Rosen paradox [6]. In the center of mass frame, in particular, the oscillation pattern occurs at the distance $L+\bar L$, where $L$ and $\bar L$ are the distances of the neutrino and antineutrino detectors from the source, respectively. In this paper, we reanalyze it via the quantum field theory approach. In this approach, the oscillating states become intermediate states, not directly observed, which propagate between a source and a detector. The localization conditions are respected by attributing localized wave functions to the interacting initial and final states in the source and detector [7,8,9,10,11]. Indeed, these localizations are essential for the observation of the neutrino oscillation and guarantee the coherence conditions [4]. In the case of $Z_0$ decay neutrinos, we will see that the localization of the source (region of $Z_0$ decay) is not important, while the localization of the detectors plays a role in the coherency condition of the neutrino and antineutrino. Moreover, as said above, in general the coherency is spoiled during propagation because the group velocities of the various mass eigenstates are different. Therefore, one might expect that when both the neutrino and the antineutrino propagate coherently, an oscillation pattern is possible. However, we will show that it is also necessary for the distance between the detectors to be smaller than the coherence length. In the following, we develop $Z_0$ decay neutrino oscillation through the quantum field theory approach. Finally, we discuss the coherency properties that appear through this approach. 2 Developing neutral current neutrino oscillation through quantum field theory approach We can describe any particle physics process by the S-matrix formalism in quantum field theory, provided that it is adjusted according to the physical situation. In particular, to describe neutrino oscillation one needs to notice that neutrinos are produced and detected in confined space-time regions. The source and detector regions are separated by a finite distance which is usually much larger than the size of these regions. Neutral current neutrino oscillation consists of the following three processes: • creation of the neutrino and antineutrino in the source; • detection of the neutrino in the corresponding detector; • detection of the antineutrino in the other detector. In order to define the initial and final states, the localization of the interactions in the source and detectors requires integrating over momentum with a localized distribution function around the corresponding averaged momentum. The initial states are therefore defined in this way, where $D_I$ ($\bar D_I$) is the target in the neutrino (antineutrino) detector and $[dp]$ denotes the corresponding momentum integration measure. The final states are written similarly. Here, $D_F$ ($\bar D_F$) refers to the nucleon (antinucleon) created in the detector by the neutrino (antineutrino) collision, and $l^-$ ($l^+$) denotes the charged lepton created in the detector corresponding to the neutrino (antineutrino). In the states defined above, the $F$'s are momentum distribution functions which are localized around the corresponding mean momentum. The amplitude of the neutrino and anti-neutrino production-propagation-detection process is given by the corresponding matrix element, where $\hat T$ is the time-ordering operator and $H_I$ is the weak interaction Hamiltonian. The quantity $A^{\mathrm{p.w.}}_j$
is the plane wave amplitude of the process with the j'th neutrino and antineutrino mass eigenstates propagating between the source and the detectors, where the $M$'s are the plane wave amplitudes of the sub-processes. It is convenient to switch to shifted 4-coordinate variables $x$, $x_1$ and $x_2$, in terms of which the propagation times $T$ and $\bar T$ and the propagation distances $\vec L$ and $\vec{\bar L}$ are defined. We then redefine the amplitudes (including spinors) corresponding to the production and to the neutrino and antineutrino detection processes, respectively. Substituting the initial and final states from (3) and (4) into (5), we notice that the integration over $x$ leads to the Dirac δ-function representing energy-momentum conservation in the source. Hereafter, we assume, for simplicity, that the momentum wave functions of the initial and final states are Gaussians which are sharply peaked around the corresponding averaged momentum, where $\sigma_p$, the width of the momentum distribution, is assumed to be much smaller than the corresponding averaged momentum. Therefore, similarly to the method presented in [7], the amplitude of the total process can be written in terms of the position uncertainties $\sigma_x$, which are related to the momentum uncertainties through $\sigma_x \sigma_p \sim \tfrac{1}{2}$, and the group velocities $v$ of the corresponding particles. Since the elements of the matrix $M$ are smooth functions of the on-shell 4-momenta, whereas the wave packets of the external states are assumed to be sharply peaked at or near the corresponding mean momenta, one can replace the $M$'s by their values at the mean momenta and pull them out of the integral. After carrying out the integration over $x_1$ and $x_2$, one should carry out the integration over the momentum of either the propagating neutrino or the propagating antineutrino. Here, we integrate over the momentum of the antineutrino. After a suitable change of integration variable, we perform the integral over $\vec p\,'$ using the Grimus-Stockinger theorem [8], $$\int d^3p'\,\frac{\phi(\vec p\,')\,e^{i\vec p\,'\cdot\vec L}}{A-\vec p\,'^2+i\epsilon}\ \xrightarrow{\ L\to\infty\ }\ -\frac{2\pi^2}{L}\,\phi\!\left(\sqrt A\,\frac{\vec L}{L}\right)e^{i\sqrt A\,L}\qquad (A>0).$$ This theorem is valid for a function $\phi$ which is differentiable at least three times, such that $\phi$ itself and its first and second derivatives decrease at least as $1/|\vec p\,'|^2$ as $|\vec p\,'|\to\infty$. Performing this stage, one can write the amplitude in terms of $|\vec p_j| = \sqrt{\bar E_j^2 - m_j^2}$. To carry out the integration over the neutrino 4-momentum, it is more convenient to integrate first over $q_0$ and then over the components of $\vec q$. It is noticeable that, since the pole at $q_0 = -E_j + i\epsilon$ is not physical, the contribution to the integral is given only by the residue at the pole of the neutrino propagator at $q_0 = E_j - i\epsilon$. The remaining integration over $\vec q$ can be done by using the saddle-point approximation at $\vec q = \vec p_j$. We expand $E_j(\vec q)$ about $\vec q = \vec p_j$, where $v_j = \partial E_j/\partial|\vec q\,|$ at $|\vec q\,| = |\vec p_j|$. Also, $S(\vec q) + \bar S(\vec p - \vec q)$ is expanded about the same point, where the first derivative of $S + \bar S$ at $\vec q = \vec p_j$ vanishes and the second derivative is evaluated there. Using these results, one can perform the integration over $d^3q$ and obtain the amplitude. The probability of the process is proportional to $|A_{\alpha\beta}|^2$. In a practical experimental setting, $L$ and $\bar L$ are usually fixed and known quantities, while $T$ and $\bar T$ are not measured.
Therefore, the probability of detecting a neutrino with flavor α and an antineutrino with flavor β by the neutrino and antineutrino detectors located at the distances $L$ and $\bar L$ from the source, respectively, is obtained by the time average of $|A_{\alpha\beta}|^2$, where $N_j^*$ denotes the complex conjugate of $N_j$. Since we are concerned with relativistic neutrinos, we use the following approximations. The differences between the energies and momenta of the various mass eigenstates are due to the small splitting of the masses. Hence, we approximate them in terms of $E$, the common neutrino energy when $m_i = 0$, and $\rho$, which is determined from energy-momentum conservation [12]. Equation (21) leads to further approximations, from which one can easily show that $\sigma_x^2 \equiv \sigma_{xD}^2 + \sigma_{x\bar D}^2$, and one can see that the relativistic approximation causes $S + \bar S$ to be minimized. Therefore, in the relativistic approximation we obtain the expression for the flavor-changing probability, with the oscillation length $L^{\mathrm{osc}}_{jk}$ and the coherence length $L^{\mathrm{coh}}_{jk}$ defined for $j \neq k$. The exponent in the transition probability obtained for neutral current neutrinos includes three terms: the first term leads to the usual oscillation pattern between the detectors; the second term indicates that the coherency condition is satisfied provided that the distance between the detectors is not larger than the coherence length; and the third term shows that the position uncertainty due to the detection mechanisms must not be larger than the oscillation length. It is noticeable that: • the coherent propagation of both the neutrino and the antineutrino is not sufficient, because the oscillation pattern ceases if the distance between the detectors is larger than the coherence length. In fact, in the quantum field theory approach, the conservation of energy-momentum due to the integration over the coordinates of the $Z_0$ decay vertex makes the neutrino and antineutrino propagators entirely entangled. • the integration over the coordinates of the $Z_0$ decay vertex gives energy-momentum conservation, and the uncertainty of the source is, practically, excluded from the calculations. In other words, the source uncertainty does not play any role in the coherency of neutral current neutrinos, and the detector uncertainties are analogues of the production and detection uncertainties in the case of the standard neutrino oscillation at the baseline $L+\bar L$.
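For orientation only, the standard wave-packet expressions for these two lengths, evaluated with conventional illustrative numbers that are not taken from the paper, give a sense of the scales involved:

$$
L^{\mathrm{osc}}_{jk}=\frac{4\pi\bar E}{\Delta m^2_{jk}},\qquad
L^{\mathrm{coh}}_{jk}=\frac{4\sqrt{2}\,\bar E^{2}}{\Delta m^2_{jk}}\,\sigma_x .
$$

Taking $\bar E = 10\ \mathrm{MeV}$, $\Delta m^2_{21} = 7.5\times10^{-5}\ \mathrm{eV^2}$ and $\sigma_x = 1\ \mathrm{nm}$ (illustrative values only), and using $\hbar c \simeq 1.97\times 10^{-7}\ \mathrm{eV\,m}$ to convert,

$$
L^{\mathrm{osc}}_{21}\simeq 3.3\times10^{5}\ \mathrm{m},\qquad
L^{\mathrm{coh}}_{21}\simeq 7.5\times10^{9}\ \mathrm{m},
$$

so for these inputs both conditions above — the distance between the detectors below $L^{\mathrm{coh}}_{jk}$ and the detector position uncertainty below $L^{\mathrm{osc}}_{jk}$ — are satisfied with large margins.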
Increased interleukin-6 and TP53 levels in rotator cuff tendon repair patients with hypercholesterolemia Background A previous study reported that hyperlipidemia increases the incidence of tears in the rotator cuff tendon and affects healing after repair. The aim of our study was to compare the gene and protein expression of torn rotator cuff tendons in patients both with and without hypercholesterolemia. Methods Thirty patients who provided rotator cuff tendon samples were classified into either a non-hypercholesterolemia group (n=19, serum total cholesterol [TC] <200 mg/dL) or a hypercholesterolemia group (n=11, serum TC ≥240 mg/dL) based on their concentrations of serum TC. The expression of various genes of interest, including COL1A1, IGF1, IL-6, MMP2, MMP3, MMP9, MMP13, TNMD, and TP53, was analyzed by real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR). In addition, Western blot analysis was performed on the proteins encoded by interleukin (IL)-6 and TP53, which showed significantly different expression levels in real-time qRT-PCR. Results The gene expression levels of IL-6, MMP2, MMP9, and TP53 were significantly higher in the hypercholesterolemia group than in the non-hypercholesterolemia group, whereas IGF1 expression was higher in the non-hypercholesterolemia group. Western blot analysis confirmed significantly higher protein levels of IL-6 and TP53 in the hypercholesterolemia group (p<0.05). Conclusions We observed an increase in inflammatory cytokine and matrix metalloproteinase (MMP) levels in hypercholesterolemic patients with rotator cuff tears. Increased levels of IL-6 and TP53 were observed at both the mRNA and protein levels. We suggest that the overexpression of IL-6 and TP53 may be a specific feature in rotator cuff disease patients with hypercholesterolemia. INTRODUCTION Rotator cuff repair is widely practiced as a treatment method for rotator cuff tears. However, failure of the rotator cuff to heal after surgical treatment is a well-known complication that is reported in 20%-94% of cases [1]. Fatty degeneration is an important prognostic factor that determines the anatomical and functional outcome after rotator cuff repair [2]. However, it is difficult to reverse the progress of fatty degeneration by rotator cuff repair alone [2]. Hypercholesterolemia is a crucial health problem that is associated not only with heart disease but also with tendon pathology [3]. Lipid-related changes in tendon pathology affect several mechanical properties of the tendon, including stiffness and modulus [4]. Multiple mechanisms have been proposed to explain these cholesterol-related changes, including alterations in tenocyte protein and gene expression, matrix turnover, cytokine production, and tissue vascularity [5]. A previous study reported that hyperlipidemia increases the incidence of tears in the rotator cuff tendon and affects healing after repair [6]. In animal models, hypercholesterolemia has been found to decrease the biomechanical properties of tendon-to-bone healing of the rotator cuff [7]. However, few studies have reported molecular-level differences attributable to hypercholesterolemia in rotator cuff tears.
The rotator cuff healing process is divided into three stages: inflammation, repair, and remodeling. This healing process is accomplished by various molecular mediators. The healing tendon initially contains mostly collagen type III, which is replaced by collagen type I, thus increasing the collagen type-I-to-III ratio [8]. Collagen type I is encoded by the COL1A1 and COL1A2 genes. In an in vitro study, tendon cells were shown to synthesize only collagen type I [9]. A different in vitro study found that insulin-like growth factor-1 (IGF-1) increased collagen synthesis in tendons and ligaments by stimulating fibroblast proliferation and synthesis of extracellular matrix (ECM) proteins [10]. In addition, it has been demonstrated that IGF-1 promotes the healing of tendons and ligaments in animals [11]. Interleukin (IL)-6 is one of the cytokines involved in triggering the inflammatory cascade in the early phase of the tendon healing process [12]. Moreover, it leads to collagen production in tendons and is significantly elevated after both exercise and trauma [13]. Matrix metalloproteinases (MMPs) are believed to play an important role in ECM remodeling during the remodeling phase of tendon healing [14]. MMP2, MMP9, and MMP13 are involved in cell transformation and morphogenesis as well as degradation in both pathological and non-pathological conditions [15]. Tenomodulin (TNMD) has been confirmed to be a relatively specific molecular marker of late tendon differentiation and plays a central role in the development and maturation of tendons [16,17]. p53 is a tumor suppressor protein known to inhibit fatty acid synthesis and lipid accumulation and to promote programmed cell death of tendon cells in rotator cuff tendinopathy [18][19][20]. In the present study, the gene expression levels of nine molecular mediators were analyzed in the rotator cuff tendons of patients both with and without hypercholesterolemia. The protein expression levels of the molecular mediators that showed significant differences in gene expression levels were then analyzed. We hypothesized that hypercholesterolemia would affect the gene and protein expression of molecular mediators involved in tendon healing in torn rotator cuff tendons. Understanding the molecular basis of lipid-related changes in rotator cuff tendons may eventually help prevent the progression of these changes and improve outcomes after rotator cuff repair. METHODS This study was approved by the Institutional Review Board of Kyungpook National University (No. KNUH 2016-11-020), including the procedure for informed consent from participants, based on the Declaration of Helsinki for studies of human participants. Participants From October 2016 to November 2017, 240 patients who underwent arthroscopic rotator cuff repair for a full-thickness rotator cuff tear at our institution were enrolled in this study. Among them, 164 patients from whom rotator cuff tendon tissue could not be obtained with prior informed consent were excluded. Among the remaining 76 patients, those without preoperative serum lipid evaluation (n = 31) and those with an anteroposterior tear size < 1 cm or > 3 cm (n = 6) were excluded. Finally, patients with a borderline serum total cholesterol (TC) of ≥ 200 mg/dL and ≤ 240 mg/dL (n = 9) were excluded based on the diagnostic criteria for hyperlipidemia [21].
Thirty patients were classified into either the non-hypercholesterolemia group (n = 19, TC < 200 mg/dL) or the hypercholesterolemia group (n = 11, TC ≥ 240 mg/dL) based on their concentrations of TC (Fig. 1). On preoperative magnetic resonance imaging, any fatty infiltration of the supraspinatus, infraspinatus, and subscapularis muscles was graded according to the classification system of Goutallier et al. [22]. Tendon Tissue Collection from Patients All patients included in the study provided informed consent for the collection of residual rotator cuff tendon tissue removed during the debridement process during surgery. Specimens of about 5 mm × 5 mm were obtained from the tendons, placed in labeled plastic tubes with RNAlater (QIAGEN, Valencia, CA, USA) for nucleic acid stabilization, and then transferred to a -80°C freezer until processing. RNA Extraction and cDNA Synthesis Frozen tissue samples stored at -80°C were homogenized in TRIzol reagent (Invitrogen, Carlsbad, CA, USA) using an OMNI TH Homogenizer (OMNI International, Kennesaw, GA, USA). RNA extraction was carried out as per the manufacturer's protocol for the TRIzol reagent (Invitrogen). The RNA concentration and quality were determined by measuring the ratio of absorbance at 260 nm to that at 280 nm using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA), with all samples achieving a minimum ratio of 1.80. The RNA (250 ng) was reverse-transcribed using the iScript Reverse Transcription Supermix for quantitative reverse transcription polymerase chain reaction (qRT-PCR; Bio-Rad, Hercules, CA, USA). Western Blot Analysis Proteins were detected with the following antibodies and reagents. Total proteins were extracted using a radioimmunoprecipitation assay lysis buffer (Rockland Inc., Limerick, PA, USA) containing a protease inhibitor cocktail (Quartett, Berlin, Germany). The total proteins (20 μg/sample) were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, and the proteins were transferred to nitrocellulose membranes using the Trans-Blot Turbo Transfer System (Bio-Rad). The membranes were blocked with Tris-buffered saline containing 5% skim milk and 0.2% Tween 20. Primary antibodies were used against the following proteins: IL-6 (Abcam, Cambridge, MA, USA), p53 (Cell Signaling Technology, Danvers, MA, USA), and GAPDH (Cell Signaling). After reaction with horseradish peroxidase-conjugated secondary antibodies (Santa Cruz Biotechnology, Santa Cruz, CA, USA), the protein bands on the membranes were visualized using the Clarity Western ECL Substrate Chemiluminescence Assay Kit (Bio-Rad) following the manufacturer's suggested procedure. Densitometry of the bands was performed using a ChemiDoc XRS+ Imaging System (Bio-Rad) and normalized to the GAPDH band intensity. Statistical Analyses The mean values were compared using the Student t-test or Mann-Whitney U-test for continuous variables, and the chi-square or Fisher's exact test was used for categorical variables, to statistically evaluate the differences between groups. The statistical analysis was conducted using SPSS version 12.0 (SPSS Inc., Chicago, IL, USA), with the significance level set at p < 0.05. The data are presented as the mean ± standard deviation. A post hoc power analysis was performed on the 30 patients; using an α of 0.05 and an effect size of 0.8, the sample was estimated to provide 66% power to detect a significant result.
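As a rough cross-check of the post hoc power calculation just described — assuming an independent two-sample t-test as the underlying model, since the paper does not state which power formula was used — one could compute:

```python
# Rough power check under an assumed two-sample t-test model; the paper
# reports 66% power but does not specify its formula, so any discrepancy
# between this estimate and 66% reflects that unstated choice.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(
    effect_size=0.8,      # Cohen's d, as reported
    nobs1=19,             # non-hypercholesterolemia group
    alpha=0.05,
    ratio=11 / 19,        # yields nobs2 = 11 (hypercholesterolemia group)
    alternative="two-sided",
)
print(f"estimated power: {power:.2f}")
```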
RESULTS Demographic Data According to the demographic and clinical data, age, sex, prevalence of hypertension, diabetes and hyperthyroidism, rotator cuff tear size, fatty infiltration, duration of symptoms, and visual analog scale score were not significantly different between the two groups. The hypercholesterolemia group had higher serum TC and low-density lipoprotein concentrations (246.27 ± 7.79 mg/dL and 157.45 ± 21.93 mg/dL) compared with the non-hypercholesterolemia group (192.87 ± 16.22 mg/dL and 116.72 ± 28.44 mg/dL) (p = 0.009 and p = 0.009, respectively). Serum high-density lipoprotein concentrations in the non-hypercholesterolemia group (66.20 ± 12.99 mg/dL) were significantly higher than in the hypercholesterolemia group (43.36 ± 9.08 mg/dL) (p = 0.012). Serum triglyceride concentrations were not significantly different between the two groups (p = 0.108) (Table 1). Western Blot To investigate the effect of hypercholesterolemia on protein expression, an immunoblotting analysis was performed with antibodies against IL-6 and p53, based on the results of qRT-PCR; GAPDH was used as the loading control. A comparison of the Western blot band intensities for IL-6 and TP53 (mean, 0.46 ± 0.24 and 0.23 ± 0.19, respectively) revealed that their protein levels were significantly higher in patients with hypercholesterolemia (Table 3). DISCUSSION In this study, we found that in patients with torn rotator cuffs, the gene expression levels of IL-6, MMP2, MMP9, and TP53 were significantly higher in patients with hypercholesterolemia than in those without hypercholesterolemia, and the gene expression of IGF1 was significantly higher in patients without hypercholesterolemia. Upon Western blot analysis, the expression of IL-6 and TP53 proteins was significantly higher in patients with hypercholesterolemia than in those without. The incidence of hypercholesterolemia is rapidly increasing in the elderly population and manifests as a debilitating medical condition accompanied by numerous systemic complications. In a high-cholesterol environment, lipids accumulate within the tendon ECM, forming a precipitate called a "yellow species." These lipid-related changes affect a variety of mechanical properties, including modulus and stiffness, in intact tendons [4]. There are several mechanisms that may explain these cholesterol-related changes, including changes in tenocyte protein and gene expression, matrix turnover, cytokine production, and tissue vascularity. Hypercholesterolemia can alter the ECM of the tendons so that damage is increased or becomes difficult to heal [5]. A previous study reported that hyperlipidemia increases the incidence of tears in the rotator cuff tendon and affects healing after repair [6]. However, the effects of hypercholesterolemia on the tendon at the molecular level are not yet known. In this study, we found significant overexpression of IL-6 and TP53 in the torn rotator cuff tendons of patients with hypercholesterolemia when compared with those of controls. IL-6 is a cytokine involved in the regulation of the immune response, inflammation, and hematopoiesis, and it acts on various cells [12]. Cytokines can influence a wide array of ECM components [23]. In addition, IL-6 has been shown to be responsible for the inhibitory effects of wound fluid on fibroblast division [24]. Moreover, it leads to collagen production in tendons and is significantly elevated after both exercise and trauma [13].
TP53 regulates the cell cycle, induces cell death, and plays an important role in tumor suppression through its regulation of protein-related metabolism. In addition, previous studies have shown that TP53 regulates lipid metabolism by direct protein-protein interactions or transcriptional control of the proteins involved in fatty acid synthesis, fatty acid oxidation, the mevalonate pathway, lipid droplet formation, and cholesterol efflux [18]. Generally, TP53 suppresses fatty acid synthesis and lipid accumulation. No studies have been conducted on the changes in TP53 levels in hypercholesterolemia or its effect on the rotator cuff tendon healing process. In their study of different types of organs, Yao et al. [25] confirmed an increase in p53 levels in the kidneys of mice with hypercholesterolemia and reported that p53 induced apoptosis in the kidneys. A previous study reported a significant increase in p53 levels in supraspinatus tears and speculated that tenocyte apoptosis may be a relatively early feature in rotator cuff tendinopathy [20]. Kane and Greenhalgh [26] found that wounds in diabetic animals displayed a delayed onset of p53 transcription but had persistently greater levels for longer periods of time. Diabetic animals appear to lose the inverse relationship between p53 and bcl-2. These findings suggest that p53 levels are increased in the early phase of healing, after which it becomes necessary to stop the inflammatory process and decrease p53 levels to allow cell proliferation to occur for tissue repair. In patients with hypercholesterolemia, fatty acid synthesis and lipid accumulation in the rotator cuff tendon are increased, which maintains the expression of TP53 in an elevated state for an extended time and may affect rotator cuff healing. Abboud and Kim [6] reported that patients with rotator cuff tears were more likely to have hypercholesterolemia than were those without tears. Chung et al. [3] observed that high cholesterol levels had a significant effect on rotator cuff healing in a rat model. To some extent, controlling hypercholesterolemia could stop or reverse its harmful effects even after rotator cuff repair surgery in a rat model. Despite these findings from different studies, the pathophysiology of lipid-related tendon pathology remains incompletely understood [27]. In our study, IL-6 and TP53 levels were significantly higher in hypercholesterolemic patients who had undergone a rotator cuff repair. However, little is known about the effects of hyperlipidemia on the rotator cuff tendon at the molecular level. Several studies have reported the effects of lipid-lowering agents on cytokine levels in different tissues. Researchers who investigated the effects of cholesterol synthesis inhibitors on cytokine production capacity in vitro have described inhibitory effects on the production of several cytokines. Lovastatin inhibits lipopolysaccharide-induced synthesis of proinflammatory cytokines, such as tumor necrosis factor-α, IL-1β, and IL-6, in rat primary astrocytes, microglia, and macrophages [28]. Sakoda et al. [29] reported that simvastatin reduces IL-1α-induced production of inflammatory cytokines, such as IL-6 and IL-8, in human oral epithelial cells. Thus, simvastatin has an anti-inflammatory effect on human oral epithelial cells via mechanisms that are independent of cholesterol lowering.
The effects of statins on cytokine levels in other tissues in hypercholesterolemia remain unclear, which is also the case for the rotator cuff tendon.

This study had some limitations that should be considered in future work. First, although IL-6 and TP53 levels were significantly higher in patients with hypercholesterolemia, this study still provides insufficient evidence for an association of IL-6 and TP53 with hypercholesterolemia. In addition, the protein expression of all the molecular mediators that showed significant differences in gene expression has not yet been analyzed. Second, although it is known that hypercholesterolemia affects various mechanical properties of the tendon, it is still unclear whether elevations in IL-6 and TP53 expression have any significant effect on the healing of the rotator cuff in the presence of hypercholesterolemia. Third, the present study only analyzed the expression of genes and proteins in tissues either with or without hypercholesterolemia. Thus, we did not consider any other comorbidity that might affect the expression of these genes and proteins. Chung et al. [30] demonstrated that overexpression of MMP-9 and IL-6 may be one of the causes of high healing failure rates after rotator cuff repair in diabetic patients. Fourth, among the patients with hypercholesterolemia, those who were taking medication for its treatment were not excluded from the study; therefore, possible drug-induced changes in cytokine and growth factor production could not be accounted for in the results. Tucker and Soslowsky [31] showed that treatment with simvastatin for 3 months alters some mechanical and histological properties of the tendon in a model of diet-induced hypercholesterolemia. Their simvastatin group had significantly more spindle-shaped cells in the midsubstance region of the supraspinatus tendon than their hypercholesterolemia group. Additionally, their data suggest that simvastatin use does not have any strong negative effect on the mechanical and histological properties of tendons, which implies that patients prescribed simvastatin may not experience tendon damage. Garcia et al. [32] reported that hypercholesterolemia was a significant risk factor for re-tears after arthroscopic rotator cuff repair; however, the type and dose of statin medication did not significantly affect the incidence of re-tears. Fifth, we could not include all the cytokines or growth factors relevant to tendon tears or hypercholesterolemia. Instead, we evaluated only selected cytokines and growth factors that were of interest to us. Including more cytokines or growth factors in the analysis could reveal other factors that may be related to rotator cuff tears in patients with hypercholesterolemia.

Our results showed an increase in inflammatory cytokine and MMP levels in tendon tissues obtained from patients with hypercholesterolemia who had undergone rotator cuff repair. Significantly higher IL-6 and TP53 levels were observed in the torn cuff tendon tissues not only at the mRNA level but also at the protein level. We suggest that the overexpression of IL-6 and TP53 may be an important feature of rotator cuff tears in patients with hypercholesterolemia.
Dusty Magnetohydrodynamics in Star Forming Regions

Star formation occurs in dark molecular regions where the number density of hydrogen nuclei, n_H, exceeds 10^4 cm^-3 and the fractional ionization is 10^-7 or less. Dust grains with sizes ranging up to tenths of microns, and perhaps down to tens of nanometers, contain just under one percent of the mass. Recombination on grains is important for the removal of gas phase ions, which are produced by cosmic rays penetrating the dark regions. Collisions of neutrals with charged grains contribute significantly to the coupling of the magnetic field to the neutral gas. Consequently, the dynamics of the grains must be included in the magnetohydrodynamic models of large scale collapse, the evolution of waves and the structures of shocks important in star formation.

Introduction

Trumpler [1] in 1930 and Stebbins, Huffer and Whitford [2,3] in 1934 and 1939 reported the results of photometric studies that demonstrated that dark "holes" in the Galaxy, noted by Herschel close to 150 years before, are due to obscuration by interstellar dust. In the first part of his important May 1941 paper on the charge carried by interstellar dust, Spitzer [4] wrote "The presence of dust particles implies that the physical state of such a medium may be somewhat different than that discussed in the classical work by Eddington, in which only atoms were assumed to be abundant". Despite Spitzer's early insight, the first papers on the effects of dust on the magnetohydrodynamic behavior of interstellar matter appeared only about three decades ago. In 1997 Hartquist, Pilipp and Havnes [5] published an introductory review of work on dusty plasma in interstellar clouds and star forming regions. While numerous advances have since occurred, we refer the reader to that article for a more detailed presentation of many of the basic physical processes and equations important for the area. Though we mention later work here, as well as give a qualitative explanation of some of the key concepts, we have not attempted to be comprehensive. Our current interest in multi-fluid shocks in star forming regions has guided our choice of emphasis, but we do mention some other progress.

Section 2 contains a brief description of the processes establishing the fractional ionization in dark regions in dense cores, molecular structures that are progenitors of stars; recombination on grains is significant. Section 3 provides a description of the role that dust plays in ambipolar diffusion, the motion of ions relative to neutrals resulting from magnetic field gradients, in dense cores and protostellar formation. Section 4 contains a brief summary of the results of an early study of the effects of dust on the damping of Alfvén waves in dense cores. Section 5 is a selective short review of work on the effects of dust on the structure of shocks assumed to propagate perpendicularly to the magnetic field in star forming regions. Section 6 contains a similar review of work published prior to 2009 on shocks propagating obliquely to the upstream magnetic field. Section 7 concerns more recent work on such shocks. Section 8 concludes the paper.

The Fractional Ionization in Dark Star Forming Regions.

Stars form in dense cores, which are molecular condensations embedded in more diffuse molecular gas that is many tens or more times less dense than the dense cores.
Most, if not all, of the more diffuse gas surrounding dense cores is in translucent clumps making up Giant Molecular Clouds (GMCs) or GMC complexes (see, e.g., section 2 of [5] for a review). An n_H (i.e. the number density of hydrogen nuclei) range of 10^4-10^5 cm^-3 is typical of dense cores in which solar-like stars form, but an n_H range of 10^6-10^7 cm^-3 is typical of regions in which high-mass stars, those that contain at least 8 solar masses, are born [6]. Of course, star formation leads to values of n_H greatly exceeding these, but most of this paper concerns phenomena occurring in the n_H range of 10^4-10^7 cm^-3.

Oppenheimer and Dalgarno [7], Gail and Sedlmayr [8], Elmegreen [9], Draine and Sutin [10] and Umebayashi and Nakano [11] are amongst those who have contributed to the development of our understanding of the ionization structure and the calculation of the grain charge in dark star forming regions. Many of the considerations have much in common with those addressed by Shukla and Mamun [12]. Cosmic ray ionization of H2 at a rate of the order of 10^-17-10^-16 s^-1 leads to H2+, which reacts with H2 to form H3+. H3+ in turn reacts with neutral species including O, H2O and CO (the most abundant gas phase molecule other than H2 in most circumstances) to form molecular ions. Dissociative recombination is a major mechanism for removing molecular ions in dense cores, but its effectiveness is moderated if metals, like magnesium and sodium, are not too depleted onto grains. Charge transfer between molecular ions and such metals produces metallic ions. The gas phase process removing them is radiative recombination, which is many orders of magnitude slower than dissociative recombination. The primary mechanism removing metallic ions is recombination on grains.

At the typical dense core temperatures of the order of 10 K and typical core densities, most grains will carry one negative charge if nearly all of the grain material is in grains of sizes of around 0.1 μm. Considerable uncertainty concerning the grain size distribution exists. Certainly in more diffuse clouds many grains of much smaller size exist. However, in dense, dark regions the formation of ice mantles by the depletion of gas phase species may lead to almost all grains having sizes of about 0.1 μm.

The grain charge probability distribution function and the fractional ionization must be calculated self-consistently. Some of the authors cited above and many others have done so. Fig. 7 of [5] shows results for the fractional abundances of electrons, gas phase ions, grains carrying a single negative charge, grains carrying a single positive charge and neutral grains as functions of n_H ranging from 10^4 to 10^14 cm^-3. All grains were assumed to be spherical with radii of 0.1 μm and to have a fractional abundance relative to n_H of 4x10^-12. The cosmic ray induced ionization rate and the temperature were taken to be 10^-17 s^-1 and 10 K, respectively. For n_H up to about 10^9 cm^-3, the fractional ionization drops roughly as n_H^-1/2 from 3x10^-8. The fractional abundances of ions and electrons are nearly equal and almost all grains are negatively charged. At n_H above 10^10 cm^-3 and below 10^12 cm^-3 most grains are neutral and the fractional abundances of ions and of negatively charged grains are nearly equal, while the fractional abundance of electrons is lower.
At n_H above 10^12 cm^-3, the fractional abundances of negatively charged and positively charged grains are nearly equal, while the fractional abundance of ions is lower.

Ambipolar Diffusion in Dense Core Collapse.

There has been considerable debate about the relative roles, in dense core formation and evolution, of magnetically regulated, gravitationally driven collapse and non-linear magnetohydrodynamic processes that occur even in the absence of gravity [13]. Ambipolar diffusion is important for the relevant non-linear magnetohydrodynamic processes [14], and grains must be included in the treatment of its effect on those processes. However, the bulk of the studies on the importance of grains for ambipolar diffusion in star formation have been performed for a picture in which a dense core is supported by a combination of thermal pressure and magnetic forces. In this picture collapse occurs as magnetic force drives the charged particles to move relative to the neutrals, thus reducing the magnetic contribution to the support. The timescale for the decrease in the magnetic support depends upon the force per unit volume due to collisions between neutrals and charged species. Baker [15], Elmegreen [9], and Nakano and Umebayashi [16] were the first to recognize the importance of the contribution of collisions between neutrals and charged grains to this force per unit volume and to calculate its effect on the ion slip velocity and the ambipolar diffusion/collapse time scale.

We summarize the conditions [5] under which grain-neutral collisions significantly affect the velocity of electrons relative to the neutrals in a region undergoing gravitationally induced collapse as a consequence of ambipolar diffusion. We assume that the charges on all grains are equal and do not fluctuate. Ω is the magnitude of the grain gyrofrequency, and ν_jk is the frequency at which a single particle of species j undergoes collisions with particles of species k; the subscripts can be n, g or i, indicating neutrals, grains and ions, respectively. If ν_ng << Ω, a condition relating these frequencies applies; it is given in [5].

In a series of papers Ciolek and Mouschovias, e.g. [17], and Tassis and Mouschovias, e.g. [18], have presented the results of the most thorough studies of ambipolar-diffusion regulated, gravitationally driven collapse. They have considered the collapse of thin disks. They performed self-consistent calculations of the fractional ionization and the probability distribution function for the charges on grains. Though all grains were taken to have the same size, fluid equations were included for grains of each charge. For the low temperatures involved only three grain charges needed to be considered.

The Effects of Dust on Alfvén Waves.

Of course, the most famous paper on the effects of dust on waves is that of Rao, Shukla and Yu [19] on dust acoustic waves. By now the literature on waves in dusty plasmas is immense. Here, we consider results of an early study [20] of the effects of dust on the damping of Alfvén waves in dense cores. Pearce [21] considered the damping of Alfvén waves in a weakly ionized dustless plasma due to ion-neutral friction. An analysis restricted to linear waves propagating parallel to the large scale magnetic field leads to an approximate dispersion relation connecting k, the complex wavenumber, and ω, the real angular frequency, in terms of B_0, the strength of the large-scale magnetic field, and ρ_i, the mass density of ions.
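Before turning to the results for dusty cores, it may help to have rough numbers for the abundances and frequencies involved. The following Python sketch is an order-of-magnitude illustration under assumed representative values; the grain internal density, the recombination coefficient, and the simple hard-sphere drag estimate are our illustrative assumptions, not parameters taken from [5] or [20].

```python
import math

# Assumed representative dense-core values (illustrative only)
n_H   = 1.0e5            # hydrogen nuclei per cm^3
T     = 10.0             # gas temperature, K
B     = 1.0e-4           # magnetic field strength, G
zeta  = 1.0e-17          # cosmic-ray ionization rate, s^-1
alpha = 1.0e-6           # dissociative recombination coefficient, cm^3 s^-1 (assumed)
a     = 1.0e-5           # grain radius, cm (0.1 micron)
rho_s = 3.0              # grain material density, g cm^-3 (assumed)
Z_g   = 1                # grain charge magnitude in units of e
e_esu = 4.803e-10        # elementary charge, esu
c     = 2.998e10         # speed of light, cm s^-1
k_B   = 1.381e-16        # Boltzmann constant, erg K^-1
m_n   = 2.0 * 1.673e-24  # mean neutral (H2) mass, g

# Ionization-recombination balance, zeta*n_H = alpha*n_e^2, reproduces the
# n_H^(-1/2) scaling of the fractional ionization quoted in the text
# (about 3e-8 at n_H = 1e4 cm^-3 with these assumed coefficients).
x_e = math.sqrt(zeta / (alpha * n_H))

# Grain gyrofrequency and a simple grain-neutral drag frequency
m_g   = (4.0 / 3.0) * math.pi * a**3 * rho_s
Omega = Z_g * e_esu * B / (m_g * c)
v_th  = math.sqrt(8.0 * k_B * T / (math.pi * m_n))
nu_gn = (n_H / 2.0) * m_n * v_th * math.pi * a**2 / m_g

print(f"x_e ~ {x_e:.1e}")
print(f"grain Hall parameter Omega/nu_gn ~ {Omega / nu_gn:.2f}")
# A Hall parameter near unity at n_H ~ 1e5 cm^-3 illustrates why grains
# progressively decouple from the magnetic field as the density rises.
```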
As grain-neutral collisions in star forming regions had been recognized to be important for ambipolar diffusion, the initiation of studies of their role in wave propagation was a natural step. One set of results from a linear analysis [20] of Alfvén waves propagating parallel to the large-scale magnetic field in a dense core was for n_H = 2x10^4 cm^-3, T = 20 K and B_0 = 10^-4 G. The dust grains were assumed to contain one percent of the mass and to have uniform radii. Van Loo et al. [14] have found that the nonlinear development of waves with ω in roughly the range considered in [20] is sensitive to the degree of collisional coupling between the charged and neutral fluids. In the near future, the inclusion of at least two grain fluids in treatments of the nonlinear development of waves in star forming regions is to be expected.

The effect of the charge fluctuations of grains on the dissipation of the waves was revealed in the initial study of Alfvén waves in dusty media [20]. The influence of charge fluctuations on wave damping has been studied subsequently in many contexts.

Perpendicular Shocks.

Once stars form, their outflows affect the environment. The interaction of the outflows with ambient molecular gas leads to the formation of shocks that may lead to the compression of inhomogeneities to trigger more star formation, or possibly to their disruption, resulting in the termination of stellar birth. The shocks also alter the chemical and dust contents of the star forming regions, in part by sputtering dust grains. Two key papers on the nature of shocks in star forming regions are those by Draine [22] and Draine, Roberge and Dalgarno [23].

Shocks propagating perpendicular to the large-scale magnetic field are fast-mode shocks. We will assume that the magnetic pressure is significantly greater in the preshock medium than the thermal pressure. Thus, the fast-mode and Alfvén speeds are about equal. Two upstream Alfvén speeds are relevant. One is the Alfvén speed of low frequency waves, V_Al, which is determined by the upstream magnetic field strength, B_0, and the sum of the mass densities of all species. The other is the Alfvén speed of high frequency waves, V_Ah, which depends on B_0 and the mass density of the charged species that are well-coupled to the magnetic field. If V_S, the shock speed, exceeds V_Ah, the shock will have jumps in all fluid parameters. However, for a range of V_S < V_Ah but V_S > V_Al, the flow variables in all fluids are continuous. We consider only such C-type shocks.

In a frame comoving with the C-type shock, the charged particles decelerate from the shock velocity further upstream than the neutrals do. This creates a precursor. Requiring the gradient in the magnetic pressure to be comparable to the drag force per unit volume on ions due to collisions with neutrals, one can find a rough estimate of the precursor thickness Δ, the lengthscale over which dissipation occurs; the smaller Δ is, the hotter the gas becomes. If grains were present and well-coupled to the magnetic field, the expression for Δ would contain the sum of ν_ni and ν_ng, rather than ν_ni alone. However, collisions tend to decouple the grains from the magnetic field, an effect considered by Draine [22] and Draine, Roberge and Dalgarno [23], whose models of perpendicular shocks are reliable for cases in which the preshock value of n_H is ≤ 10^6 cm^-3.
At higher densities a self-consistent calculation of the average charge on the grains and the fractional ionization is required to obtain accurate results for ν_ni and for the effect that grains have on the shock structure. Pilipp, Hartquist and Havnes [24] included such a calculation in each of their models of perpendicular shocks. They also included fluid equations for the grains rather than adopt a simpler approximation to calculate the effects of dust [22,23]. This led them to discover a run-away process operating in perpendicular shocks for which the preshock value of n_H is 10^7 cm^-3 or higher. At such densities, the ratio n(e)/(|Z_g| n_g) drops below unity within the precursors of sufficiently fast shocks. Here n(e) is the electron number density, |Z_g|e is the magnitude of the average charge carried by grains and n_g is the number density of grains. Once this ratio drops below unity, |Z_g| begins to drop as there are insufficient electrons to continue charging the grains. Assume that the shock propagates in the x-direction, that the magnetic field is in the y-direction and that Ω/ν_gn, the grain Hall parameter, is small. Then the grains separate from the other charged particles sufficiently to generate an x-component of the electric field, E_x, whose approximate magnitude depends on m_g, the mass of a grain. This creates an ion drift velocity component in the z-direction with a magnitude of cE_x/B_y. As |Z_g| drops, this component of the drift velocity increases. This causes an increase in the rate at which a grain experiences collisions with ions, which leads to a further drop in |Z_g|. Hence, a runaway occurs.

The recent work of Guillet, Jones and Pineau des Forêts [25,26] represents a significant development in the modeling of perpendicular shocks in dusty star forming regions. In their work on C-type shocks [25], they adopted a hybrid approach. The gaseous species were described as fluids, whereas the trajectories and charges of many individual dust particles were followed. The treatment gave self-consistent results for the grain charges, the gas phase ion and electron abundances, and the dynamics. Though the fluid approach of Pilipp et al. [24] is valid for a wide range of parameter space, the Guillet et al. [25] method is required in cases in which dust gyroradii are comparable to or larger than the scales on which parameters vary in shocks.

Steady-State Models of Oblique Shocks.

Pilipp and Hartquist [27] adopted a fluid description of grain dynamics in studies of steady shocks propagating obliquely to the upstream magnetic field in dusty star forming regions. We will assume that a shock propagates in the x-direction and that the upstream magnetic field has x and y components but that its z component is zero. They found that grain-neutral collisions lead to a rotation of the magnetic field in a C-type shock precursor around the x-direction. The reason for this rotation can be seen by considering the z-components of the fields and velocities in the shock frame.

By integrating from an upstream point in the downstream direction, Pilipp and Hartquist [27] succeeded in finding only intermediate-mode shock solutions. Such solutions are inadmissible [28]. Wardle [29] showed that integration in the downstream direction will not yield steady fast-mode solutions because the downstream state corresponds to a saddle point. He found fast-mode solutions by integrating upstream from the downstream state.
Chapman and Wardle [30] have extended this work and shown that the inclusion of PAHs leads to a drop in the gas phase electron abundance and enhanced rotation of the magnetic field. PAHs are nano-particles thought to be abundant in clouds more diffuse than star forming regions. As mentioned earlier, in star forming regions the depletion of material from the gas phase may result in all particles growing to sizes close to 0.1 μm.

Integration in the upstream direction is not appropriate if conditions anywhere in a shock deviate from equilibrium. After shocked gas has cooled, the abundance of H2O, an important coolant in shocked dense core material, remains far from its equilibrium value for many times the flow time through a shock. Other chemical species also have abundances that are far from their equilibrium values for considerable periods. Consequently, the calculation of shock structures by integration in the upstream direction is inappropriate, and the use of a time-dependent, rather than a steady-state, approach is necessary to overcome the difficulties found by Pilipp and Hartquist [27] and explained by Wardle [29]. Falle [31] has developed an appropriate time-dependent approach.

Time-Dependent Models of Oblique Shocks in Dusty Regions.

Van Loo et al. [32] have used the technique developed by Falle [31] to model oblique shocks in uniform density media. Figures 1 and 2 show results obtained for a shock that has evolved to a steady-state structure. As shown by Wardle [29], when field rotation is significant the trajectory in (B_y, B_z) phase space corresponds to a spiral node in the vicinity of the upstream fast-mode state. As seen from Figures 1 and 2, the velocity structure is complicated where magnetic field rotation is significant. Results similar to those that they presented are displayed in Figure 3.

Conclusion

The major challenge in star formation theory will be the incorporation of the effects of dust in multidimensional, time-dependent magnetohydrodynamic simulations. The work reported in many of the papers cited in this brief review demonstrates the existence of a community that appreciates the role that dust plays in the phenomena investigated in simulations of the dynamics of star formation. However, so far the assumption of ideal magnetohydrodynamics has been relaxed in only a handful of the multidimensional numerical studies, e.g. [14]. Hard but interesting work remains.

Figure 1. The velocity structure, in the shock frame, along the shock normal for a steady-state C-type shock propagating at 25 km/s through a homogeneous medium with n_H = 10^5 cm^-3. Each grain has a radius of 0.4 μm and a mass of 8x10^-13 g, and one percent of the mass is in grains. The upstream magnetic field strength is 10^-4 G. The shock velocity is at an angle of 45 degrees with respect to the upstream magnetic field. The blue line represents the neutral fluid, the red line the ion and electron fluids and the black line the grain fluid.

Figure 2. The ratio of B_z to B_y in the shock for which results are shown in Fig. 1.

Figure 3. The velocity structure, in the initial shock frame, along the shock normal 5000 yr after the steady shock for which results are shown in Fig. 1 propagates into a region of lower density (from n_H = 10^5 cm^-3 to 10^4 cm^-3). The blue line represents the neutral fluid, the red line the ion and electron fluids and the black line the grain fluid.
Embracing the Emotion in Emotional Intelligence Measurement: Insights from Emotion Theory and Research

Emotional intelligence (EI) has gained significant popularity as a scientific construct over the past three decades, yet its conceptualization and measurement still face limitations. Applied EI research often overlooks its components, treating it as a global characteristic, and there are few widely used performance-based tests for assessing ability EI. The present paper proposes avenues for advancing ability EI measurement by connecting the main EI components to models and theories from the emotion science literature and related fields. For emotion understanding and emotion recognition, we discuss the implications of basic emotion theory, dimensional models, and appraisal models of emotion for creating stimuli, scenarios, and response options. For the regulation and management of one's own and others' emotions, we discuss how the process model of emotion regulation and its extensions to interpersonal processes can inform the creation of situational judgment items. In addition, we emphasize the importance of incorporating context, cross-cultural variability, and attentional and motivational factors into future models and measures of ability EI. We hope this article will foster exchange among scholars in the fields of ability EI, basic emotion science, social cognition, and emotion regulation, leading to an enhanced understanding of the individual differences in successful emotional functioning and communication.

Introduction

Over the past three decades, emotional intelligence (EI) has gained significant popularity as a scientific construct. It has entered the lexicon of everyday conversations to describe people who demonstrate adeptness or struggle when navigating emotionally charged encounters with others. Despite "rumors of the death" of EI in its early years due to problems with its conceptualization and measurement (Ashkanasy and Daus 2005), research in the field continues to thrive (e.g., Dasborough et al. 2022). However, the conceptualization and measurement of EI still face limitations, with many early criticisms (e.g., Locke 2005) remaining relevant today (Dasborough et al. 2022). For example, problems with defining objective scoring criteria and establishing construct validity in performance-based EI tests have already been discussed by Brody (2004), Geher and Renstrom (2004), Matthews et al. (2002), or Pérez et al. (2005).

In the present paper, we argue that this problem is still present and partly stems from a lack of theoretical foundation within existing EI tests. We propose avenues for future advancements in EI measurement by connecting some of the main EI components to models and theories from the broader emotion literature and by suggesting ways in which this literature can inform the development of novel and improved measures of EI.

Specifically, the present paper focuses on the assessment of ability EI, which is one of the two dominant EI approaches (see Fiori and Vesely-Maillefer 2018 for a review). Ability EI refers to a set of cognitive skills related to emotions, including "the ability to perceive emotions, to access and generate emotions so as to assist thought, to understand emotions and emotional knowledge, and to reflectively regulate emotions so as to promote emotional and intellectual growth" (Mayer and Salovey 1997, p. 5).
Measuring such skills requires performance-based tests and emotion-related tasks with correct and incorrect (or more and less effective or adaptive) responses to capture "maximal performance." For example, typical ability EI measures include judging which emotion was expressed in a picture or what action would best reduce one's anxiety in a particular situation (situational judgment approach).

In contrast, the second dominant EI approach refers to self-perceptions of emotional skills. Trait EI "essentially concerns people's perceptions of their emotional world" and is rooted in personality research (Petrides et al. 2016, p. 335). Trait EI models vary substantially in the number and kind of skills they consider and, therefore, each requires specific self-report instruments with items reflecting the skills included in the model. Nevertheless, all trait EI instruments target the test-takers' propensity to behave in a certain way ("typical performance", Sarrionandia and Mikolajczak 2020). This conceptualization requires self-report measures that present general context-free statements asking about people's subjective self-perceptions.

Though both trait and ability EI conceptualizations have advantages and limitations, researchers have highlighted that ability EI aligns more closely with the term EI (e.g., Cherniss 2010; Roberts et al. 2010). It maintains a narrower focus on emotions than the broader trait EI approach, which encompasses other concepts from positive psychology, including well-being and optimism. Additionally, ability EI is associated with intelligence, whereas trait EI is not (Roberts et al. 2010). Nevertheless, after three decades of research, only a limited number of scientifically validated ability EI tests exist.

The most widely used test is the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; Mayer et al. 2003), in which participants judge the appropriateness or effectiveness of actions or emotion labels in pictures or vignettes describing emotional situations. Other widely used tests are the Situational Test of Emotional Understanding (STEU) and the Situational Test of Emotion Management (STEM; MacCann and Roberts 2008). Like the MSCEIT, they use a situational judgment approach where participants choose an emotion label to describe an emotional situation (STEU) or an effective action for regulating an emotion in a vignette (STEM). Though several other ability EI tests exist, such as the Test of Emotional Intelligence (TIE; Śmieja et al. 2014), the Audiovisual Test of Emotional Intelligence (AVEI; Zysberg et al. 2011), and the Test of Emotional Intelligence (TEMINT; Blickle et al. 2011), they are notably less utilized (see review by Bru-Luna et al. 2021). More recently, the Geneva Emotional Competence Test for the Workplace (GECo; Schlegel and Mortillaro 2019) has been developed.

The most common EI components across these tests are emotion perception/emotion recognition (the ability to identify and differentiate between emotions in oneself and others), emotion understanding (the ability to comprehend complex emotional states, transitions, and the causes and consequences of emotions), and emotion regulation/management (the ability to manage and respond to emotions in oneself and others effectively). These are the central ability EI components across different conceptualizations and taxonomies (e.g., Elfenbein and MacCann 2017; Mayer et al. 2016; Schlegel and Mortillaro 2019; Vesely Maillefer et al. 2018).
As we will show in this article, a vast amount of literature exists outside of the EI domain for each of these components, and the general emotion science literature can be readily linked to them. However, the ability EI conceptualization, research, and assessment developed independently from the general emotion science literature. Though this may sound surprising, we can suggest some reasons for this separation: different research methods (laboratory studies vs. testing), different goals (basic research vs. applied research), and a critical approach toward the concept of EI in the emotion literature.

Nevertheless, this is probably one of the most surprising and unjustified separations between bodies of literature in psychology. Besides a few notable attempts (Fontaine 2016; MacCann and Roberts 2008; Peña-Sarrionandia et al. 2015), most empirical studies on EI and the measures they use refer only to other EI studies, with very little integration of the emotion literature, even though this problem was already pointed out more than twenty years ago (Matthews et al. 2002). With this paper, we would like to indicate how research on emotions and related fields can and should be the foundation of future EI assessments for each of these specific EI components.

Emotion Understanding

Emotion understanding competence refers to the ability to reason about the antecedents of the emotional experience and its implications for the person's behavior. According to Mayer and colleagues (2016), emotion understanding is a higher-order competence that groups several areas of reasoning; among others, we can list labeling emotions and recognizing relationships among them, as well as appraising the eliciting situation, predicting how a person might feel in certain conditions, and recognizing cultural differences in the evaluation of emotions.

How Definitions of Emotion Can Inform the Assessment of Emotion Understanding

Modeling emotion understanding and its measurement requires a clear and coherent theoretical framework that defines emotions, their components, and their implications. Unfortunately, this theoretical reasoning is often left implicit by researchers whose primary focus is creating a psychometrically sound measure. For example, in the MSCEIT subtests for emotion understanding, the authors did not refer to any theoretical model to justify how they created the items and response options and how the correct response was defined. Concerning this last point, they relied on "expert scoring", which is undoubtedly meaningful but has several shortcomings, especially when experts are difficult to define or they disagree with each other (Barchard and Russell 2006). For these reasons, we think that theoretical grounding should be critical for building and scoring emotion understanding tests (see also Hellwig and Schulze 2021).

The emotion literature suggests three main theoretical views that can help define and measure emotion understanding. First, basic emotion theory (Ekman 1999; Keltner et al. 2019)
is the approach used by most studies in emotion psychology and can be considered the standard in emotion recognition measurement, even in instruments that do not explicitly adopt this view. In a nutshell, according to this view, emotions are distinct categories, and it is possible to attribute a precise label to a specific emotional state. This conceptualization is implicit whenever one asks to label a scenario or an expression by choosing one particular emotion label. It is crucial, though, to understand that for many researchers this is not an endorsement of the idea that emotions are universal and discrete, but a pragmatic way to access the knowledge about when emotions are experienced and how they are expressed.

Second, dimensional theories of emotions propose that emotions can be understood and classified based on a small number of underlying dimensions. Russell (1980) introduced the circumplex model, which posits two primary dimensions in the emotional space: valence and arousal. Valence refers to the pleasantness or unpleasantness of an emotion, whereas arousal represents the level of activation or energy associated with it. Russell's model suggests that a wide range of emotions can be mapped onto a circular space defined by these two dimensions. For instance, joy and love are located in the positive valence region, whereas fear and anger occupy the negative valence region. This model provides a foundation for understanding emotional experiences in a structured manner but, to our knowledge, has never been used explicitly to assess emotion understanding. Still, it is not difficult to imagine researchers using this approach to build valid instruments. They could ask respondents to identify the valence and activation that a person may experience in the situation described in the item instead of asking them to attribute an emotion label. Such a measure of emotion understanding may be simpler than the emotion labeling approach and valuable for clinical populations, young children, and in general in all those cases where labeling could be problematic (e.g., language difficulties, cultural variability).

Third, appraisal theory describes emotion as the result of a set of subjective cognitive evaluations that happen with or without awareness (Moors et al. 2013; Roseman 1996; Scherer 2001). In other words, it is not the events or the objects per se that elicit the emotion, but how a person appraises them. This subjectivity explains individual differences in emotional reactions, but also provides the basic framework to find commonalities between even very diverse experiences of one emotion. For example, anger can be characterized by an event appraised as goal-obstructive and unpleasant, likely caused by somebody (i.e., neither accidental nor due to chance), and for which the angry person has a high sense of coping. Given its flexibility and detail in explaining emotional experience, we think appraisal theory is the best candidate to model emotion understanding.
Using Appraisal Theory to Assess Emotion Understanding

A few authors have used appraisal theory to create emotion understanding tests. MacCann and Roberts (2008) chose Roseman's appraisal theory (Roseman 1996) for developing their Situational Test of Emotion Understanding (STEU). Roseman's theory defines the appraisal profiles of seventeen emotions. Based on these theoretically predefined profiles, the authors created vignettes of emotional situations that became the items of the test. Answers are defined as correct or wrong depending on the theoretical pattern predicted by the theory.

Similarly, the emotion understanding subtest of the Geneva Emotional Competence test is grounded in appraisal theory (Schlegel and Mortillaro 2019). In this case, the authors used the Component Process Model (CPM) of emotion (Grandjean and Scherer 2008; Scherer 2001, 2009). Like other appraisal models, the CPM identifies a set of appraisal dimensions that guide the evaluation of events and situations and generate specific emotional responses (Scherer 2001). These dimensions do not fully overlap with those of other models (e.g., Roseman's), which directly affects how the scenarios are developed. In the GECo emotion understanding test, the items describe scenarios that reflect the collection of appraisals that characterize an emotion according to the CPM. For example, one scenario describes "John" attending an interesting presentation and being repeatedly disturbed by his neighbor who asks him questions. Regarding appraisals, the situation is moderately relevant, the other person's behavior is obstructive but not intentionally harmful, and John has the potential to cope with the situation. This set of appraisals characterizes an experience of irritation (a minimal sketch of this profile-matching logic is given at the end of this subsection).

This way of measuring emotion understanding implies that emotion understanding involves perspective-taking and considering all the appraisals involved. Instead of directly attributing an emotional meaning to the event or situation, a person skilled in emotion understanding should be able to infer the likely appraisal process of the other person (Mortillaro et al. 2011). Is it something unexpected for them? Is it goal-conducive or goal-obstructive? Do they think that somebody else is responsible for it? Do they feel that they can cope with the situation? Being able to make these judgments accurately shows a high level of emotion understanding and would be a possibility for phrasing emotion understanding items.

Emotion understanding in the sense of knowledge can also be measured for emotion components other than appraisals (Scherer 2009), including (1) physiological reactions that occur during emotional experiences; for instance, fear may be accompanied by increased heart rate and sweating; (2) expressive behavior, that is, the outward display of emotions through facial expressions, vocalizations, and body language; (3) action tendencies, that is, the behavioral inclinations or urges associated with specific emotions; for instance, fear may prompt a person to flee or avoid a threatening situation; (4) the subjective experience component, that is, the subjective and consciously "felt" aspect of emotions; for example, when feeling happy, an individual experiences a positive, pleasant subjective state. It is important to note that these components are interactive and interdependent, forming a dynamic system within the emotional experience. They influence and modulate each other, resulting in a coherent emotional response.
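To make the appraisal-based item logic described above concrete, here is a minimal Python sketch. The appraisal dimensions, numeric profiles, and nearest-profile scoring rule are simplified illustrations of ours, not the actual design of the STEU or the GECo.

```python
# A minimal sketch of appraisal-based scoring for an emotion understanding
# item, assuming a simplified three-dimension appraisal space (profiles
# invented for illustration, not taken from any published test).
import math

EMOTION_PROFILES = {
    # (goal conduciveness, other-person responsibility, coping potential),
    # each coded from -1 to 1
    "anger":      (-1.0, 1.0,  0.8),
    "fear":       (-1.0, 0.0, -0.8),
    "sadness":    (-1.0, 0.0, -0.3),
    "irritation": (-0.5, 0.5,  0.8),
    "joy":        ( 1.0, 0.0,  0.5),
}

def nearest_emotion(scenario_appraisals):
    """Return the emotion whose appraisal profile is closest (Euclidean)."""
    return min(
        EMOTION_PROFILES,
        key=lambda emo: math.dist(EMOTION_PROFILES[emo], scenario_appraisals),
    )

# A scenario like "John is repeatedly disturbed during an interesting talk":
# mildly goal-obstructive, another person responsible, coping potential high.
print(nearest_emotion((-0.4, 0.6, 0.8)))  # -> "irritation"
```

Under such a scheme, the theoretically correct response to a vignette follows directly from the model's appraisal profiles, rather than from expert consensus.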
Two recent measures demonstrated the feasibility of assessing knowledge about these four components in standard emotion understanding tests. First, the Geneva Emotion Knowledge Test (GEMOK; Schlegel and Scherer 2018) includes a subtest on measuring an accurate understanding of emotion blends through vignettes that systematically include information on all five emotion components (appraisal, expression, physiology, action tendencies, subjective feeling). It also includes a subtest that measures respondents' accuracy in judging the likelihood of features (representing all five components) occurring when a specific emotion is experienced. Similarly, Fontaine and colleagues (Huyghe et al. 2022; Sekwena and Fontaine 2018) developed the Components of Emotional Understanding Test (CEUT), which consists of scenarios built based on the CPM and cross-cultural linguistic studies (Fontaine et al. 2007; Fontaine et al. 2013). For each scenario, participants rate the likelihood of several emotions, appraisals, action tendencies, bodily reactions, expressions, and subjective feelings. In the CEUT and GEMOK, participants must reason about the whole emotion process, making them excellent examples of how emotion theory can offer innovative ways to conceptualize and assess EI skills. This approach can be used to measure other under-assessed aspects of emotion understanding, such as knowledge about cultural differences (particularly in the expression component) and accuracy in predicting future emotions (affective forecasting) or emotion trajectories (Mayer et al. 2016).

Recently, one more theoretical framework has been suggested for modeling and measuring emotion understanding: the empathic agent paradigm, which consists of two phases (Hellwig et al. 2020). In the first phase, test-takers learn about the emotion-related contingencies of a target person, that is, emotions, events, and actions. After this acquisition phase, the test-takers apply this new knowledge to a novel situation involving the target person. This allows for objective scoring without assuming an absolutely correct behavior, but only a more likely one based on contingencies. This approach tries to circumvent the problem of explicitly choosing a theoretical framework while, at the same time, not adopting a consensus-scoring approach. However, expecting an almost invariant behavior across similar situations for the same person implicitly assumes an appraisal approach (what matters is not the situation per se, but how the person appraises it).

Emotion Recognition

The ability to accurately recognize what another person is feeling from nonverbal cues (emotion recognition ability; ERA) is central to most ability-based theories, models, and taxonomies of EI (e.g., Mayer et al. 2016; Elfenbein and MacCann 2017; Vesely Maillefer et al. 2018). Specifically, ERA is assumed to contribute to the accurate understanding of the causes and implications of emotional situations (see previous section) and to the ability to influence what another person is feeling (see the section on emotion management). Perhaps because individual differences in ERA are assumed to be crucial for successfully navigating social interactions (for an overview, see Palese and Mast 2020), research on ERA and its assessment has had a long tradition dating back to the 1970s (e.g., Hall 1978).
Despite the theoretical integration of ERA in EI models, the two constructs continue to be studied relatively independently. Research on ERA is scattered across different fields of psychology and comes with various and inconsistently used labels (e.g., emotion decoding, theory of mind, emotion perception, cognitive empathy). Other fields also tend to use different ERA measures (with their respective construct labels), and there have been only a few efforts to map the terrain of ERA assessment across domains. However, such integration is necessary for at least two reasons. First, ERA tests typically have low intercorrelations and, thus, do not measure one single skill (Schlegel et al. 2017). Second, most ERA tests have been constructed in a rather atheoretical fashion, and reviewing them within the context of emotion and social perception theories can benefit the creation of new and improved assessment tools.

The Dominance of Basic Emotion Theory in ERA Measurement

Within the ability EI literature, the most common assessment of ERA is the MSCEIT Faces subtest, in which participants rate the presence of several emotions in a series of photos of facial expressions. Most other standard ERA tests, however, use a forced choice format in which participants choose, out of a predefined set of emotion words, the option that best describes an emotional expression (typically, in a static picture of a face; for an overview, see Bänziger 2016). The expressions used in the MSCEIT and other tests are often posed by actors and limited to a few emotions. As a notable exception, the Geneva Emotion Recognition Test (GERT) consists of 14 emotions expressed by actors in videos with sound (Schlegel et al. 2014).

The widespread use of discrete emotion categories to create the stimuli and present the response options makes basic emotion theory (BET) the predominant theoretical framework for measuring individual differences in emotion recognition; that is, it is (implicitly) assumed that (facial) emotional expressions are readouts of discrete emotions with a fixed meaning and that emotions are decoded by matching sensory inputs of nonverbal cues with internal representations of distinct emotion categories, leading to the selection of the most likely emotion label (Dricu and Frühholz 2016). This approach also implies that individuals can have selective impairments in recognizing specific emotions, an idea widely studied in clinical research (e.g., Dalili et al. 2015). From a psychometric perspective, the BET approach to ERA testing has the advantage that the correct response for each item can be easily defined (it usually corresponds to the emotion the actor intended to portray). Additionally, a forced choice paradigm makes it easy to calculate ERA scores as the sum of correct choices and reduces testing time compared to rating scale items. However, the reliance on few emotion categories in terms of the stimuli and the dominance of the forced choice format have also sparked some criticism.

Going beyond a Small Set of Basic Emotions and the Forced Choice Paradigm

Concerning the stimuli and emotions used, several scholars argued that in real life people experience and express many more than just six or seven emotions and that naturalistic expressions rarely correspond to the prototypical portrayals used in standard stimulus sets (e.g., Matsumoto and Hwang 2017). In addition, using only a few response options makes some tests very easy, restricting the measurement of ERA in the higher ability range (Kenny 2013).
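Before turning to the alternatives, here is a minimal Python sketch of the forced-choice scoring logic described above: an overall percent-correct score plus per-emotion accuracies of the kind examined in studies of selective impairments. The item keys and responses are invented; no existing test's materials are reproduced.

```python
# Minimal forced-choice ERA scoring sketch (hypothetical items and responses).
from collections import defaultdict

item_key  = ["anger", "fear", "joy", "sadness", "anger", "joy"]       # intended emotion per stimulus
responses = ["anger", "sadness", "joy", "sadness", "disgust", "joy"]  # one test-taker's choices

total_correct = sum(r == k for r, k in zip(responses, item_key))
print(f"Overall accuracy: {total_correct / len(item_key):.2f}")

# Per-emotion accuracy, as examined when looking for selective impairments.
hits, counts = defaultdict(int), defaultdict(int)
for r, k in zip(responses, item_key):
    counts[k] += 1
    hits[k] += (r == k)
for emo in counts:
    print(f"{emo}: {hits[emo]}/{counts[emo]}")
```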
Recent research on emotional expressions in the BET tradition provides a lot of potential for broadening the scope of ERA tests and increasing ecological validity. Several large-scale and cross-cultural studies have shown that perceivers can reliably distinguish 20 or more discrete emotions based on facial, vocal, and bodily expressions (e.g., Cordaro et al. 2020; Cowen et al. 2019; Cowen and Keltner 2020). For example, Cowen and Keltner (2020) found that naturalistic facial-bodily expressions can reliably signal 28 distinct emotion categories. Although these studies do not focus on individual differences, their stimulus databases can likely be used to build new ERA assessments for different sensory modalities and a wide range of emotions.

In a different attempt to go beyond prototypical emotional expressions, Israelashvili et al. (2021) created the Emotional Accuracy Test (EAT), which consists of four videos in which a young woman talks about an emotional life event. Test-takers rate each video on ten emotions, and ERA performance is calculated as the absolute difference between the participant's and the target's own ratings on each emotion. The EAT has demonstrated strong correlations with established ERA tests, showing that using naturalistic expressions with verbal content without defining a single correct answer is a viable approach to ERA measurement.

Using emotion rating scales like in the EAT has also been suggested by others as an alternative to forced-choice testing (Fontaine et al. 2022; Hess and Kafetsios 2021). One obvious advantage is that it allows assessing accuracy in perceiving blends and complex affective states. In addition, Kafetsios and Hess (2023, in this special issue) have argued that even for "classic" pictures of discrete facial expressions, rating scales yield meaningful psychological information beyond the traditional "percent correct" score. Specifically, this format allows distinguishing "accuracy" (intensity of the target emotion) from "bias" (intensity ratings on all non-target emotions) in line with the truth and bias model of person perception by West and Kenny (2011). According to this approach, participants can be both "accurate" in detecting a target emotion and have a "bias" towards perceiving emotions not present in the stimulus. An open question is whether this model and scoring format yield meaningful information when applied to more naturalistic expressions where no clear-cut target or "correct" emotion is present.

Going beyond Emotion Words

Nevertheless, like more traditional ERA assessments, the EAT and other tests using emotion rating scales face another potential limitation of BET: the reliance on emotion categories. One problem with using emotion words is that their underlying meaning may differ between cultures, languages, or even age groups (Barrett et al. 2007; Hoemann et al. 2021). For example, in the GERT, English, French, and German speakers vary in their accuracy rates for sadness, despair, and anger, which might reflect cultural differences in the expression of emotions or differences in the meaning of the respective words (response options) in each language (Schlegel 2013).

According to the circumplex model of emotion (Russell 1980) and appraisal models (e.g., Scherer 2001), it would, therefore, be more appropriate to measure ERA in terms of accurate evaluations of underlying emotional dimensions (valence, arousal) and appraisals (goal conduciveness, coping potential, novelty, etc.)
of the event preceding an emotional expression. According to the CPM (Scherer 2001; see also Fontaine 2016), it would also be meaningful to include ratings of action tendencies or physiological variables associated with nonverbal expressions (Mortillaro et al. 2011).

Many studies have examined the meaning dimensions underlying emotion words and nonverbal expressions (e.g., Fontaine et al. 2013; Laukka et al. 2005; Mortillaro et al. 2011; Shuman et al. 2017), and the results have been successfully implemented in the measurement of emotion understanding (CEUT and GEMOK; see previous section). Still, standard ERA assessments have not yet adopted dimensional or appraisal theories of emotion. One reason against adopting this strategy might be that emotion categories seem to have more explanatory value than appraisal dimensions for large sets of naturalistic expressions, contradicting dimensional emotion theories. For example, Cowen and colleagues (Cowen et al. 2019; Cowen and Keltner 2020) found that appraisal dimensions captured less variance in categorical judgments of facial, bodily, and vocal emotion expressions than emotion labels.

However, appraisal dimensions and other emotion components might be more readily inferred and gain explanatory power when the emotional expressions presented are more complex and embedded in a social context. One future avenue worth exploring would be to ask participants to rate appraisal dimensions underlying the emotional experience using naturalistic videos with affective content.

In addition, when naturalistic videos are used, participants could also be asked to make more complex inferences about the stimuli, for example, about what is happening in the situation or about the relationships among the individuals in the situation (Keltner et al. 2019). In fact, such assessments would be similar to tests that are already used in the clinical social cognition literature, e.g., the Movie for the Assessment of Social Cognition (MASC; Dziobek et al. 2006) or the Reading the Mind in Films Task (Golan et al. 2006). Another recent video-based test asking participants to make complex inferences about the characteristics, causes, and implications of affective situations was developed by Dael et al. (2022) in the interpersonal accuracy field (Workplace Interpersonal Perception Skill test, WIPS).

However, even though ERA tests with more diverse and complex response scales, including appraisals and other dimensions, would arguably capture emotion perception more ecologically, the definition of what precisely they measure may become blurred (e.g., can ERA be distinguished from emotion understanding?).

(Re)Defining the Scope of ERA Tests?

Recent theoretical approaches vary widely in the role attributed to social context during emotion perception, which has important implications for measuring individual differences in ERA. For example, in their theory of constructed emotion (TCE), Gendron and Barrett (2018) propose that the stimulus-driven process of perceiving and categorizing nonverbal expressions (see Dricu and Frühholz 2016) is far less critical in real-life communication than the prediction of upcoming sensory input based on the shared situational environment of interaction partners. According to these authors, emotion perception should (only) be studied in settings where the conceptual systems of emotion expressers and perceivers dynamically interact.
The empathic accuracy paradigm developed decades ago by Ickes (e.g., Ickes 2001) fits within this theoretical approach. In this paradigm, two interaction partners freely label their felt emotions when viewing a recording of their interaction. Then, they label their partner's emotions while viewing the recording a second time. The degree of correspondence between self and partner ratings is used to measure empathic accuracy. However, this procedure is very time-consuming and cannot be used as a standard test in which all participants are exposed to the same items. Thus, in this form, the TCE seems incompatible with measuring individual differences in standardized assessments.

In a more moderate approach, Kafetsios and Hess (2023, this issue; also Hess and Kafetsios 2021) have also criticized current ERA tasks for not containing social context because stimuli usually show only one individual without situational information. In their view, existing tests lack validity because they capture cognitive rather than social perception skills. Indeed, context is an influential variable shaping emotion perception and judgment (e.g., Hassin et al. 2013). In order to "infuse" social context into ERA measurement, Kafetsios and Hess (2023) developed the Assessment of Contextual Emotions (ACE), in which participants rate the presence of several emotions in a still picture of a target person who is surrounded by two other individuals also showing an emotional facial expression. In the future, this approach could be extended to cover more emotions (the ACE stimuli are based on four emotions) and multimodal stimuli to enhance ecological validity.

In a contrasting view, Fiori and colleagues (e.g., Fiori and Vesely-Maillefer 2018) emphasized the need to develop more measures of context-free "fluid" emotion information processing skills, such as the ability to make fine-grained discriminations among emotions presented in blends (Gillioz et al. 2023). These authors have presented the first evidence that context-free basic nonverbal processing skills might have incremental validity in explaining real-life outcomes above more knowledge-based facets of EI and emotion perception (Fiori et al. 2022).

The above discussion highlights how theories of emotion and social perception can inform how ERA is conceptualized and measured beyond the EI literature. For example, depending on the adopted framework, ERA may be conceived as a set of basic emotion-processing abilities or complex language-dependent and prediction-based communication skills. Future developments in assessing ERA should be explicitly embedded in these frameworks, which will help identify the facets of emotion perception for which standard tests are missing (e.g., tests including social context). In addition, researchers using current ERA tests should be aware that most of them are implicitly based on BET and acknowledge the implications when interpreting their findings.

(Intrapersonal) Emotion Regulation

A necessary clarification should be made about the terminology that we use here.
(Intrapersonal) Emotion Regulation

A necessary clarification should be made about the terminology that we use here. In the original ability EI model, emotion management refers to both interpersonal and intrapersonal emotion management (Mayer and Salovey 1997). However, the literature outside EI uses the term emotion regulation rather than emotion management, which can lead to confusion. Furthermore, we think that intrapersonal and interpersonal emotion management should be considered as two independent components. We suggest using the term "emotion regulation" for the ability to regulate emotions in the self, and "emotion management" for the ability to regulate emotions in others (Schlegel and Mortillaro 2019). This distinction is already apparent in the literature, where research on emotion regulation predominantly refers to internal cognitive processes, such as reappraisal or suppression, as strategies for self-regulation (for example, McRae 2016; Ochsner and Gross 2008). In contrast, emotion management in others (or interpersonal emotion regulation) primarily involves behavioral strategies that necessitate anticipating others' behaviors and engaging in interactive processes. Though it is common for emotion regulation and emotion management to be required simultaneously in real-life situations, it seems preferable to consider the two forms as separate abilities and to measure them separately. Recent studies show that these two competencies have low correlations, empirically supporting the conceptual distinction (Schlegel and Mortillaro 2019; Simonet et al. 2021; Völker et al. 2023).

The Process Model of Emotion Regulation

Emotion regulation is considered one of the most critical EI skills, and hundreds of empirical studies contribute meaningful evidence supporting its relevance for well-being, positive life outcomes, and even health (Gross 2013; McRae and Gross 2020). Therefore, one would expect this literature to be crucial for studies focused on multi-branch EI models. Unfortunately, research on emotion regulation has remained largely separate from general EI research, as discussed in recent work by Peña-Sarrionandia and colleagues (Peña-Sarrionandia et al. 2015). These authors made a remarkable effort to reconcile these two bodies of literature and highlighted the need for theoretically grounded instruments.
The Process Model of Emotion Regulation is currently the most widely supported model of emotion regulation (Gross 2013; McRae and Gross 2020; Ochsner and Gross 2008). This model postulates that individuals employ various strategies to influence the intensity, duration, and expression of their emotions. It identifies different moments at which regulation strategies can be applied, focused either on the antecedent or on the response. Antecedent-focused strategies involve intervening before the emotional response fully unfolds, whereas response-focused strategies aim to regulate emotions after they have already been experienced. Five strategies are part of this model:

(1) Situation Selection: at this initial step, individuals can regulate their emotions by selectively choosing or avoiding certain situations or environments. For example, if someone is aware that a situation consistently triggers negative emotions, they may proactively avoid it to prevent emotional distress.

(2) Situation Modification: in this step, individuals modify the specific features of a situation to regulate their emotions. This may involve altering the environment, adjusting the timing of an event, or changing the nature of the interaction to create a more desirable emotional experience. For instance, someone might request a change in their work schedule to reduce stress or modify the physical environment to enhance positive emotions.

(3) Attentional Deployment: during this step, individuals can influence their emotional responses by focusing on specific aspects of a situation. For example, consciously shifting attention toward positive aspects of a situation or away from negative images can reduce the intensity of an unpleasant state.

(4) Cognitive Change: this step is related to the appraisal process and implies the ability to modify the interpretation or evaluation of a situation. It involves cognitive reappraisal, where individuals reinterpret the meaning of an event to alter their emotional responses. For instance, perceiving a challenging task as an opportunity for growth rather than a threat can lead to a more positive emotional experience.

(5) Response Modulation: this step focuses on strategies to regulate emotions after they have been experienced, for example, by suppressing the outward expression of emotions.

It is essential to mention that the effectiveness of each strategy can vary depending on situational demands and individual characteristics, and this variability can be the basis for assessing individual differences in emotion regulation competence (Gross and John 2003; Webb et al. 2012).

Current Measures of Emotion Regulation

Some self-report questionnaires originated from the process model of emotion regulation. This group includes the Emotion Regulation Questionnaire, which investigates the last two strategies of the model (reappraisal, for cognitive change, and suppression, for response modulation; Gross and John 2003), and the Cognitive Emotion Regulation Questionnaire, which focuses on adaptive and maladaptive cognitive strategies used to regulate negative emotions (Garnefski et al. 2001; see below for a description of the strategies). Until recently, though, not even self-report questionnaires mapped all possible strategies suggested in the theoretical model discussed above. Recent examples are moving in this direction; this is the case for the Process Model of Emotion Regulation Questionnaire (PMERQ), which investigates ten strategies covering all steps of the process model (Olderbak et al. 2022).
If we turn to performance-based tests, we typically find emotion regulation only as part of multi-branch assessments. The relative absence of stand-alone emotion regulation performance tests can be partly related to the difficulty of assessing what is mainly an intrapersonal skill through tests that ask about overt behaviors. This difficulty is evident if we look at the few available examples. In the "emotion management task" of the MSCEIT, test-takers read a story about a person experiencing an emotion and decide how effective different behaviors are for regulating the emotion toward reaching a specific goal, e.g., reducing anger or prolonging joy (Mayer et al. 2003). The stories described in the items are varied, and it is possible to relate the response options to specific stages of the process model of emotion regulation described above; however, this is only a post-hoc interpretation, and there is no systematic application of the model in creating the response options (see also a similar post-hoc analysis of regulation strategies in Allen et al. 2015). A similar approach is used in the Ability Emotional Intelligence Measure (AEIM), another multi-branch performance test that includes subscales targeting emotion regulation (Warwick et al. 2010). Although the AEIM has been withdrawn from use by the authors because of methodological problems involved in its validation, it used an original approach. Specifically, respondents read four scenarios and evaluate how effective three possible actions are to increase, decrease, or maintain a specific emotion. Though both the MSCEIT and the AEIM use consensus scoring to determine the effectiveness of each proposed action, the AEIM additionally measures confidence in the selected choices. AEIM confidence ratings were weakly positively correlated with performance, intelligence, and empathy, leading the authors to conclude that such ratings may capture a separate factor; that is, individuals with higher confidence scores may be better able to regulate their emotions during emotion-related decision making, and, hence, measuring such scores can complement consensus-derived knowledge-focused scores (Warwick et al. 2010). Confidence ratings in ability EI assessments may also provide a link with trait EI, as trait EI measures often encompass self-evaluations of one's performance and self-efficacy in dealing with emotions (Joseph et al. 2015). All in all, confidence ratings can be a useful addition to ability assessments, especially when responses are scored in a binary (correct/incorrect) format, but further investigation is needed.

A Proposal for Future Performance Measures of Emotion Regulation

In most current measures, the authors' expertise and consensus or expert ratings fully guided the item construction and scoring procedure. However, ignoring theories and evidence from emotion regulation research is a missed opportunity for ability EI; this reasoning motivated a different approach in the emotion regulation subtest of the GECo (Schlegel and Mortillaro 2019). Here, the focus is explicitly on one specific stage of the process, cognitive change, the one most directly linked to the quality of the emotional experience. As discussed before, appraisals are indeed the main determinants of emotions, and from the perspective of emotion regulation, reappraisal is one of the most effective and beneficial ways to regulate emotions (McRae and Gross 2020; Uusberg et al. 2019; Uusberg et al. 2023).
In line with other performance measures, the GECo uses scenarios and asks respondents to choose the options they consider most appropriate to reduce negative emotions. In contrast to other questionnaires, the GECo asks participants to select the two cognitive strategies they would most likely use in the scenario presented in the item. Critically, the test asks about "thoughts" instead of "behaviors". The response options were systematically created based on the cognitive emotion regulation strategies framework proposed by Garnefski and colleagues (Garnefski et al. 2001). This theory informed the creation of two adaptive and two maladaptive options, as defined in this model. The respondents choose two options, and their responses are correct if they pick the adaptive ones (a scoring sketch is given at the end of this subsection). Across items, the test includes five adaptive emotion regulation strategies (acceptance, acknowledging and accepting the situation and one's emotions without judgment or suppression; positive refocusing, deliberately redirecting one's attention toward positive or neutral aspects of the situation; putting into perspective, gaining a broader perspective on the situation; refocus on planning, developing a plan of action; positive reappraisal, actively reframing or reinterpreting a situation to find positive or beneficial aspects within it) and four maladaptive strategies (self-blame, attributing responsibility solely to oneself; other-blame, attributing responsibility solely to others; rumination, repetitive and passive dwelling on negative thoughts; catastrophizing, magnifying or exaggerating the negative aspects of a situation).

This approach allowed scoring the items based on theoretical assumptions, without relying on consensus and experts (although these two criteria were used during the validation process). Similarly, in their Emotion Regulation Profile Revised questionnaire (ERP-R), Nelis and colleagues (2011) present 15 vignettes and ask respondents to choose one or several of eight strategies considered more or less adaptive for achieving the regulation goal. Adaptive strategies include the behavioral display of positive emotions, mindfully savoring the moment, capitalization, and positive mental time travel, while maladaptive strategies include the inhibition of emotion expression, fault finding, inattention, and external attribution/nostalgia. Interestingly, the regulation goals covered in this questionnaire are both reducing negative emotions and enhancing positive emotions. This choice is linked to the emerging literature on the positive role of strategies like "savoring" (see the section on emotion management below). Although the ERP-R strategies refer to different stages of the emotion regulation process, they do not systematically map them as the PMERQ does.

Based on the advantages and limitations of the measures discussed above, we suggest that a performance-based measure of emotion regulation should ideally cover the full process model of emotion regulation. It should include items for the different stages and response options that reflect engagement and disengagement strategies. The PMERQ is a recent example of a more comprehensive and theory-grounded measure of self-reported emotion regulation, and performance measures should take the same direction. Furthermore, future tests should consider that the effectiveness of regulation strategies can vary depending on the context (Ladis et al. 2022).
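To make this theory-based scoring principle concrete, the sketch below scores one pick-two item whose response options are tagged as adaptive or maladaptive; the scenario and option texts are hypothetical and do not reproduce actual GECo content.

```python
# Hypothetical pick-two item: two adaptive and two maladaptive
# cognitive strategies; a response is correct only if both picks
# are adaptive (cf. the theory-based scoring described above).
ITEM = {
    "scenario": "You were passed over for a project you wanted to lead.",
    "options": {
        "A": ("positive reappraisal", True),   # adaptive
        "B": ("rumination", False),            # maladaptive
        "C": ("refocus on planning", True),    # adaptive
        "D": ("catastrophizing", False),       # maladaptive
    },
}

def score_item(item, selected):
    """Return 1 if both selected options are adaptive, else 0."""
    if len(set(selected)) != 2:
        raise ValueError("exactly two distinct options are required")
    return int(all(item["options"][key][1] for key in selected))

print(score_item(ITEM, ["A", "C"]))  # 1: both picks adaptive
print(score_item(ITEM, ["A", "B"]))  # 0: one maladaptive pick
```

Because correctness is fixed a priori by the strategy taxonomy, no consensus or expert sample is needed to key the items, which is the central design choice described above.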
Emotion Management or Interpersonal Emotion Regulation

As stated above, existing ability EI tests (except the GECo) typically do not distinguish between the ability to regulate one's own and others' emotions. In the MSCEIT, emotion management is measured through vignettes of situations in which a person is experiencing a positive or negative emotion. Test-takers are then asked to rate, for each of several possible reactions, how helpful it would be for the person. The reactions combine various thoughts and behaviors and cannot be mapped onto a specific theoretical framework. Only a few vignettes describe situations in which someone else is experiencing an emotion that can be managed. As such, the MSCEIT focuses primarily on knowledge about successful emotion regulation in the self. The STEM uses the same approach.

In contrast, the GECo contains a subtest in which test-takers explicitly identify the most appropriate action to manage someone else's emotions (e.g., a colleague's sadness when missing a promotion). These actions were created to represent the five strategies of conflict management theory (Thomas 1992): avoidance, accommodation, collaboration, compromise, and competing. Importantly, based on the situational features of each scenario (available resources, time, expected future events, etc.), each of the five strategies was defined as the correct one in some of the scenarios, rather than always defining collaboration or compromise as the "best" strategy. This theoretical framework is particularly suitable for the workplace settings that the GECo targets, but, obviously, many more strategies for influencing what another person is feeling can be imagined. It would be desirable for future assessments to capture the breadth of available emotion management styles to help generalize findings beyond the narrow set included in the GECo. The goal of this section is, thus, to review how theories and research outside the EI field can be harnessed to create new measures of the ability to manage others' emotions.

Extending the Process Model of Emotion Regulation to Interpersonal Emotion Regulation

As a straightforward extension of the previous section on intrapersonal emotion regulation, the model by Gross and John (2003) can also be adapted to the management of others' emotions (Little et al. 2012). Specifically, the emotional experience of others, such as work colleagues or subordinates, can be influenced by interpersonal situation selection (e.g., creating an external environment that prevents stressful situations for others, such as by adjusting deadlines or delegating tasks differently), situation modification (e.g., alleviating the impact of stressors on others by offering assistance with meeting a deadline), attentional deployment (e.g., helping a disappointed colleague focus their attention on a positive achievement), cognitive change (e.g., guiding a person to reframe negative thoughts or beliefs), and response modulation (e.g., comforting another person through appropriate nonverbal expressions). As for emotion regulation in oneself, the emotion management strategies used at each of the five stages can be engagement- or disengagement-oriented, with engagement-oriented strategies expected to be more effective (Olderbak et al. 2022).
Though Little et al. (2012) developed a self-report questionnaire of people's tendencies to manage others' emotions in the workplace at each stage (Interpersonal Emotion Management Scale, IEMS), this model could also be used for creating standard assessments that measure the ability to choose the most effective strategy in a given context. A promising way would be to create vignettes of emotional situations with specific situational characteristics that are theoretically well suited to each of the five regulation stages, similar to the approach taken for the emotion management subtest in the GECo (see above; Schlegel and Mortillaro 2019). This would accommodate the increasing evidence that many emotion regulation strategies are not uniformly "good" or "bad" across all situations (e.g., Brockman et al. 2017).

Co-Enhancing and Co-Dampening as Adaptive and Maladaptive Emotion Management Styles

Though the process model of emotion regulation is typically applied to negative emotions, a different line of research has coined the terms "enhancing" (or "savoring") and "dampening" for regulatory responses to positive affect. Enhancing involves intentionally amplifying and prolonging one's own positive emotions, whereas dampening downgrades or diminishes the positive experience, for example, by minimizing its importance (Feldman et al. 2008; Quoidbach et al. 2010). Generally, enhancing is positively associated with well-being, while dampening has been linked to lower well-being and depression (Quoidbach et al. 2010). Whereas most research has focused on these constructs in relation to one's own emotions, Bastin et al. (2018) have examined them within the context of dyadic peer relationships. Specifically, they defined co-enhancing as jointly elaborating on and celebrating each other's positive emotions within a relationship, fostering shared joy and deepening the emotional bond. In contrast, they defined co-dampening as downgrading discussions of positive emotions in a dyadic relationship, potentially undermining the positive impact of shared experiences and relationship satisfaction for both individuals involved. Bastin et al. (2018) also developed the Co-Dampening and Co-Enhancing Questionnaire (CoDEQ), which asks about the frequency with which dyad members engage in specific behaviors associated with the two styles when one of them feels happy (e.g., "we talk about how proud the person who is happy can be", "we remind each other that happy feelings don't last"). Given that (co-)enhancing and (co-)dampening are conceptualized as adaptive and maladaptive, respectively, these styles and the specific behaviors in which they manifest (see also Quoidbach et al. 2010) could be used to measure emotion management ability specifically in response to positive situations. For example, similar to the GECo emotion regulation subtest, two behaviors reflecting each style could be used as response options in vignettes, and participants could be asked to choose the two options that best reflect what they would do. Each selected behavior corresponding to co-enhancing would be scored with one point.

Other Strategies for Influencing People's Emotions

Various other strategies for influencing what others are feeling have been examined in different fields of psychology, although these efforts remain to be integrated (e.g., Niven et al. 2009; Nozaki and Mikolajczak 2020).
Recently, Xiao et al. (2022) have examined high- and low-engagement strategies for managing others' emotions (labeled "extrinsic emotion regulation"). These include downward comparison, expressive suppression, humor, distraction, direct action, reappraisal, receptive listening, and valuing. Some of these strategies, although without the systematic distinction between high and low engagement, have also been included in a widely used self-report questionnaire measuring the regulation of one's own and others' emotions, labeled intrinsic and extrinsic emotion regulation (Emotion Regulation of Others and Self (EROS) scale; Niven et al. 2011). With a newly developed questionnaire, Xiao and colleagues (2022) showed that the MSCEIT correlated positively with three high-engagement processes (reappraisal, receptive listening, and valuing) and negatively with two low-engagement processes (downward comparison and expressive suppression). These results suggest that high-ability EI individuals are willing to engage in effortful emotion management processes. As this research allows distinguishing between more and less adaptive management strategies (adaptive in the context of enhancing well-being and relationship quality; MacCann et al. 2023), it could also be used to create and score situational judgment response options in ability EI measures.

Going beyond the use of single emotion management strategies, some authors have also examined the perceived quality of different strategy sequences. For example, Feng (2009) found that emotion management efforts were perceived as more effective when they followed a sequential pattern of problem inquiry, problem analysis, emotional support, and advice giving than when they did not follow this order. Future EI tests could thus probe test-takers' knowledge and use of such patterns.

Though the emotion management/interpersonal emotion regulation literature typically focuses on strategies involving verbal behavior (e.g., humor) and complex actions (e.g., modifying a situation), a person's emotions can also be influenced through nonverbal behaviors such as facial and vocal expressions and touch (e.g., Debrot et al. 2021). To date, individual differences in using such nonverbal behaviors have not yet been examined within the context of EI and emotion management. Therefore, a promising future avenue would be to develop predictions about more and less "adaptive" nonverbal behaviors within emotional encounters and incorporate them in video-based responses to situational vignettes. These responses, depicting people's attempts at managing another person's emotion, could differ only in their nonverbal, but not their verbal, content. Test-takers would then be asked to select the most effective response.

Focusing on Different Preferences of the "Target"

Whereas the above literature assumes that some regulation strategies are generally more adaptive than others, other research highlights that the "target" individuals in the management process can differ in the strategies they prefer others to use. For example, Liu et al. (2021) examined the perceived helpfulness of 13 emotion management strategies in romantic partners, which were classified as problem-oriented (e.g., reappraisal, problem-solving, and blaming) versus emotion-oriented (e.g., encouraging sharing, affection, and emotion invalidation) and as supportive versus unsupportive. Their results showed that people differ in the strategies they prefer their partner to use in different situations.
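One way to operationalize such preference effects in a test is to key the correct answer to the target's stated preference profile rather than to a fixed strategy hierarchy. The sketch below is a hypothetical illustration of this keying logic; the vignette, strategy labels, and scoring rule are assumptions inspired by Liu et al. (2021), not their actual materials.

```python
# Hypothetical preference-sensitive keying: the same strategy can be
# correct or incorrect depending on the target's preference profile.
VIGNETTE = {
    "situation": "Your partner is upset after a failed job interview.",
    "target_preference": "emotion-oriented",  # vs. "problem-oriented"
}

STRATEGY_TYPE = {
    "reappraisal": "problem-oriented",
    "problem-solving": "problem-oriented",
    "encouraging sharing": "emotion-oriented",
    "affection": "emotion-oriented",
}

def score_choice(vignette, chosen_strategy):
    """1 point if the chosen strategy matches the target's preference."""
    return int(STRATEGY_TYPE[chosen_strategy] == vignette["target_preference"])

print(score_choice(VIGNETTE, "affection"))    # 1: matches the preference
print(score_choice(VIGNETTE, "reappraisal"))  # 0: mismatch
```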
In a similar vein, Williams et al. (2018) showed that people differ in their tendency to seek social support in response to emotional events and in the extent to which they perceive social support as helpful. Therefore, future emotion management tests could consider incorporating information about the target person's strategy preferences to measure test-takers' sensitivity in identifying and flexibly applying different management strategies. Similarly, the behavioral adaptability model suggests that emotionally intelligent individuals should be able to adapt their behaviors to the different needs and traits of the interaction partner (Carrard and Mast 2015; Palese and Mast 2022). Supporting the need to include behavioral flexibility and adaptability when managing others' emotions in standard EI assessments, this group of researchers found that individuals with higher ERA displayed higher behavioral adaptability to subordinates' preferences when in the role of a leader (Schmid Mast and Hall 2018).

Summary and Discussion

The aim of this article was to connect multiple fields and research lines within the broad domain of emotional functioning that rarely "talk" to each other and cite their respective works. As we discussed here, the creation of future ability EI assessments, and the field of EI in general, can benefit from the vast literature and recent developments in research on emotion, emotion regulation, and social cognition. The main recommendations and possibilities for ability EI test development addressed in this paper are summarized in Table 1.

Table 1. Avenues for the development of future ability EI assessments.

Emotion understanding and recognition (with relevant citations):
• Incorporate knowledge about cultural differences (e.g., display rules) (Mayer et al. 2016)
• Assess understanding/recognition of emotion blends and transitions (Schlegel and Scherer 2018)
• Assess understanding/inferences about emotion components such as physiology or action tendencies and how they unfold in a target person (Scherer 2009; Fontaine 2016)
• Incorporate varying contexts and differences in the target person's characteristics; assess learning of new emotion-person contingencies rather than general knowledge (Hellwig et al. 2020)
• Use rating scales (e.g., for appraisal dimensions or for emotion labels) instead of a forced-choice format (Fontaine et al. 2022)
• For emotion recognition specifically: use a wider range of emotion categories (Cowen and Keltner 2020)
• Use multimodal and/or naturalistic emotion expressions (Schlegel et al. 2014; Israelashvili et al. 2021)
• Incorporate social context into stimuli (Hess and Kafetsios 2021)

Emotion regulation and management:
• Apply strategies like those reviewed above to the regulation of own and others' emotions; use them to create and score response options in situational judgment items.

With respect to emotion understanding and emotion recognition ability (ERA), the review focused on the three prevailing paradigms in emotion science (basic emotion theory, dimensional/constructivist models, and appraisal models). Substantial progress has been made in assessing emotion understanding in recent years, with various authors proposing innovative approaches rooted in theory. As we look to the future, a promising next step would involve integrating the componential approach within a more contextualized framework that seeks to evaluate the process of understanding emotions, their evolution, and the intricate interplay between different emotional states. It is crucial to acknowledge the role of cross-cultural variability in future tests of emotion understanding, particularly when considering a constructivist or appraisal perspective. For example, behaviors deemed norm violations in one culture, likely triggering anger, may be acceptable in other cultural (or organizational) contexts and fail to elicit any emotion. Incorporating cross-cultural factors is also imperative for the development and validation of new ERA tests. Despite support for modern BET, there is also clear evidence for nonverbal dialect theory (Elfenbein 2013), indicating that emotion expressions are more challenging to decode when the target and perceiver come from different cultures.

Turning to emotion regulation and emotion management, our review encompasses the process model of emotion regulation and its extensions to interpersonal regulation processes. Furthermore, we explored recent research aimed at identifying and measuring specific adaptive and maladaptive regulation strategies, such as engagement- versus disengagement-focused strategies, and how these findings can inform the development of performance-based tasks. Advancements in the field highlight the need for new tasks that explicitly consider contextual factors, which can be easily manipulated within situational judgment items. In the case of emotion management, it is also crucial to account for differences in target characteristics, such as individual preferences, to achieve a comprehensive assessment.

Though the present article focused on each of the four components separately, the measurement of ability EI would, ultimately, also benefit from theoretical efforts to connect the single competencies. Although the cascading model of EI (Joseph and Newman 2010) provides a starting point for connecting the ability EI branches, the most recent version of the ability EI model, as well as other ability EI conceptualizations (e.g., Elfenbein and MacCann 2017; Fiori et al. 2022), focus on a taxonomy of skills and do not specify the process through which they are potentially linked.
A process model of EI should also examine the motivational and attentional aspects of emotionally intelligent behavior, which are likely to determine whether and how individuals use their maximal performance (which is what ability EI tests usually measure) in real-life settings. For example, some individuals with high ERA scores may not pay much attention to others' nonverbal behavior in everyday life and will, thus, not be able to fully use their ERA skill. Research on individual differences in "emotional attunement" is still in its infancy (Schlegel 2020). Further, there has been evidence for "motivated inaccuracy" in recognizing others' emotions when accurate perception might harm a relationship (Simpson et al. 2003). Finally, a process model of EI should consider the mental effort required for emotionally intelligent behavior. For instance, Niven (2017) emphasized that managing others' emotions may be depleting to perform and that some strategies tend to be particularly costly in terms of resources (cf. the distinction between high- and low-engagement strategies above; MacCann et al. 2023).

Future research should, therefore, examine individual differences in the perceived levels of effort involved in each of the steps of the emotional communication process: paying attention to one's own and others' emotions, decoding emotional information, and engaging in different regulation and management strategies. Perceived effort, in combination with context-dependent motivational factors, may help explain the discrepancies between maximal and typical performance (Freudenthaler and Neubauer 2007).

Future ability EI assessments should also consider culture's role in shaping emotionally intelligent behavior when using tasks like the ones we described for emotion understanding. Even if we can confidently say that an intentional goal-obstructive behavior should likely elicit anger in individualistic cultures, this may differ in collectivistic cultures, where the cost of social conflict may be higher.

Though situational judgment tests have become a standard for ability EI measures, future measures should consider the specific social context in which they will be used. Several authors have argued that EI is not invariant across situations, for example, if we compare behavior in a family context to that in a work context (Jordan et al. 2002; Michinov and Michinov 2022). First, we can expect that the strategies employed for emotion regulation and management will differ depending on whether a person interacts with their supervisor or with a six-year-old child. Second, we are likely better at handling the emotions of people we know better. One can more easily anticipate a close relative's emotional reaction than a stranger's in the same situation. Third, there is increasing evidence that most emotion regulation strategies are not inherently good or bad, but vary in their adaptiveness across situations, contexts, and people (Brockman et al. 2017).

Last but not least, technological innovation could become an important asset for future assessments. To our knowledge, for example, no performance test can measure emotion expression and rate the extent of a successful "suppression" strategy or the ability to deliver a chosen emotion management strategy effectively (but see Olderbak et al. 2021).
With the rise of AI technology, future assessments might also consider recording participants' written or video-recorded reactions to emotional scenarios and automatically scoring these for emotion understanding or management (e.g., Schlegel et al. n.d.).

Towards a Chaos of Measures? A Glimpse into the Future of Ability EI Testing

Though existing ability EI tests will continue to be useful and have generated a large knowledge base about EI, many scholars have emphasized the necessity for new measurement tools (for a discussion, see Dasborough et al. 2022). If our knowledge about ability EI is based on only a few tests, it will remain unclear whether the findings are due to the construct or to the instruments (Roberts et al. 2010).

We see at least two complementary strategies for developing new ability EI instruments in a systematic fashion. First, new tests might be developed for facets of EI branches that have been neglected by existing tests, such as the aptitude for expressing emotions or the understanding of cultural variations in emotion expressions and display rules. This approach would allow measuring the theoretical domain of EI more comprehensively, facilitating an exploration into which facets or branches are most predictive of central life outcomes or behavioral patterns.

The second strategy might focus on creating batteries of tests for all EI branches rather than focusing on single branches and their subfacets. Though this second approach would likely aim for a unidimensional structure within each branch/subtest to facilitate the scoring and interpretation of the test scores, it would be advisable to base the item creation within each branch on more than one theory to cover each branch more broadly. For example, a new subtest to measure emotion management/interpersonal emotion regulation could cover the strategies from Gross' model (Gross and John 2003), as well as the high- and low-engagement strategies proposed by Xiao et al. (2022) and other strategies based on nonverbal behavior, as discussed above.

The two approaches could collectively streamline research into the factorial structure of EI, as exemplified by Simonet et al. (2021) or MacCann et al. (2014). Drawing parallels from the history of cognitive ability testing, this process is likely to trigger several cycles of creating new test generations, evaluating their intercorrelations and structure, testing their validity, and refining or developing new tests. Although it will take time, we believe that this process is necessary to move the field forward.

If we venture a glimpse into the future of ability EI testing, it is conceivable that increased efforts to build new tests (especially for under-assessed facets like expressivity) will result in the fractionation of EI. Although new tests like the GECo and GEMOK correlate highly with established tests (Schlegel and Mortillaro 2019; Schlegel and Scherer 2018), we know that measures within the emotion recognition branch show low intercorrelations (Schlegel et al. 2017), and for the intra- and interpersonal emotion regulation branches, the internal structure is still unknown, as there are only very few existing tests.
Thus, we think that Elfenbein and MacCann's (2017) description of ability EI as an umbrella term for a set of related, but distinct, skills may be fitting in the future when more tests are available. It is also likely that the different branches or subfacets differentially predict outcomes. For example, the literature already suggests that emotion management predicts well-being (MacCann et al. 2020), whereas this does not seem to be the case for emotion recognition (Schlegel 2020).

But will a fractionation into more branches and subfacets, with many tests and potentially different areas of predictive relevance, be problematic for the field? We think that having a larger set of branches and/or subfacets under the broad ability EI umbrella need not result in chaos, provided there is a comprehensive theoretical framework to scaffold them, and assuming researchers reference the overarching construct, as well as the branch/facet labels they examine in their research, to avoid ambiguity (for a similar discussion on the empathy construct, see Hall and Schwartz 2019). We also urge ability EI researchers to reference research from related domains, as described above, and vice versa. Although the literature on individual EI domains like ERA or emotion regulation possesses distinct traditions and theories, we advocate that there is merit in unifying them under a broader EI label to better understand the entire process of emotional communication, including its motivational and contextual aspects.
2023-11-03T15:05:53.297Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "d9e2fa8b4a71a2b8712aa46d056d32759aa1a713", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-3200/11/11/210/pdf?version=1698830970", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7fe97b9e49d651be37264bd4620aea8a51b55e73", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
100329636
pes2o/s2orc
v3-fos-license
The storage ring proton EDM experiment

We describe a proposal to search for an intrinsic electric dipole moment (EDM) of the proton with a sensitivity of 10^−29 e·cm, based on the vertical rotation of the polarization of a stored proton beam. The New Physics reach is of order the 10^3 TeV mass scale. Observation of the proton EDM provides the best probe of CP-violation in the Higgs sector, at a level of sensitivity that may be inaccessible to electron-EDM experiments. The improvement in the sensitivity to θ_QCD, a parameter crucial in axion and axion dark matter physics, is about three orders of magnitude.

One Page Summary of the Storage Ring Proton EDM Experiment

• Proton EDM sensitivity 10^−29 e·cm.
• Improves the sensitivity to QCD CP-violation (θ_QCD), currently set by the neutron EDM experimental limits, by three orders of magnitude.
• Probes CP-violation in the Higgs sector with best sensitivity [2].
• Highly symmetric, magic-momentum storage ring lattice in order to control systematics.
- Proton polarimetry peak sensitivity at the magic momentum.
- Optimal electric bending and magnetic focusing.
- Simultaneously stores longitudinally polarized bunches with positive and negative helicities as well as radially polarized bunches.
- Changes sign of the focusing/defocusing quadrupoles within 0.1% of the ideal current setting per flip.
- Keeps the vertical spin precession rate low when the beam planarity is within 0.1 mm over the whole circumference and the maximum split between the counter-rotating (CR) beams is < 0.01 mm.
- Closed orbit automatically compensates spin precession from radial magnetic fields.
- Circumference = 800 m with E = 4.4 MV/m, a conservative electric field strength.
• 3-5 years of construction and 2-3 years (for statistics collection) to first physics publication.

History

The proposed method has its origins in the measurements of the anomalous magnetic moment of the muon in the 1950-70s at CERN. The CERN I experiment [7] was limited by statistics. The sensitivity breakthrough was to go to a magnetic storage ring. The CERN II result was then limited by the systematics of knowing the magnetic field seen by the muons in the quadrupole magnet. The CERN III experiment [7,8] used an ingenious method to overcome this. It was realized that an electric field at the so-called "magic" momentum does not influence the particle (g − 2) precession. Rather, the electric field precesses the momentum and the spin at exactly the same rate, so the difference is zero. The fact that all electric fields have this feature opened up the possibility of using electric quadrupoles in the ring to focus the beam, while the magnetic field is kept uniform. The precession rate of the longitudinal component of the spin in a storage ring with electric and magnetic fields is given (in the standard form) by

ω_a = −(e/m) [ a B − ( a − 1/(γ² − 1) ) (β × E)/c ],   (1)

where a = (g − 2)/2 is the magnetic anomaly. The CERN III experiment used a bending magnetic field with electric quadrupoles for focusing at the "magic" momentum, given by β² = 2/g; see the Equation (1) electric field term. The CERN III experiment and the BNL version of it, E821 [9], were limited by statistics, not systematics. The recent announcement of the (g − 2) experimental results [10] from Fermilab at 460 ppb has confirmed the BNL results, with similar statistical and smaller systematic errors. We believe that the FNAL E989 final results, at about 140 ppb, will have equal statistical and systematic errors. The storage ring/magic momentum breakthrough gained a factor of 2 × 10^3 in systematic error.
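As a quick numerical check of the magic-momentum condition implied by Equation (1), the sketch below (using standard values for the proton mass and magnetic anomaly) verifies that setting the electric-field coefficient to zero, a = 1/(γ² − 1) = 1/(βγ)², gives p = mβγ = m/√a.

```python
import math

# Magic momentum check: the electric-field term in Eq. (1) vanishes
# when a = 1/(gamma^2 - 1) = 1/(beta*gamma)^2, i.e. p = m / sqrt(a).
M_PROTON = 0.9382720813  # proton mass, GeV/c^2
A_PROTON = 1.7928473     # proton magnetic anomaly, a = (g - 2)/2

p_magic = M_PROTON / math.sqrt(A_PROTON)
print(f"p_magic = {p_magic:.4f} GeV/c")  # -> 0.7007 GeV/c, as quoted below
```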
BNL E821 set a "parasitic" limit on the EDM of the muon: d_µ < 1.9 × 10^−19 e·cm [11]. For FNAL E989, we expect this result to improve by up to two orders of magnitude. The statistical and systematic errors on the muon EDM will then be roughly equal. The dominant systematic error effect is due to radial magnetic fields. For the pEDM experiment, we plan to use a storage ring at the proton magic momentum with electric bending and magnetic focusing, which gives a negligible radial magnetic field systematic effect (see below), while the dominant (main) systematic errors drop out with simultaneous clockwise and counterclockwise storage. For both BNL E821 and FNAL E989, new systematic effects were discovered that were not in the original proposals. Several methods were applied to mitigate these small effects so that they are not the limiting factors. For the pEDM experiment we can get 10^11 polarized protons per fill from the BNL LINAC/Booster system, and we use symmetries to handle the systematics down to the level of sensitivity. We expect that at that level we will perhaps also discover new small systematic effects, as in the (g − 2) experiments.

Current searches for the EDM of fundamental particles have a large range of experimental sensitivity, as well as New Physics probing strength. Some of the strongest probes of New Physics come from the experimental limits of the electron (inferred indirectly from the atomic ThO EDM limit), the neutron, and ^199Hg [6,12-17]. Their New Physics reach is the same to within one order of magnitude of each other, while the proton EDM at 10^−29 e·cm will bring a more than three orders of magnitude improved sensitivity over the current neutron limit. The current (indirect) experimental limit on the proton, at 10^−25 e·cm, is derived from the ^199Hg atomic EDM limit. In the last three decades there has been a large effort to develop stronger ultra-cold-neutron (UCN) sources, e.g., see [6,18-21], to enhance the probability of a higher sensitivity neutron EDM experiment beyond a few 10^−28 e·cm, currently the best experimental target for the neutron. Figure 1 shows the experimental limits of the neutron by publication year, the indirect proton EDM limits from the ^199Hg atomic EDM limit, and the projected sensitivity levels for the proton and deuteron using the storage ring EDM method. The ^3He sensitivity level is expected to be similar to that of the deuteron.

The storage ring EDM method

The concept of the storage ring EDM experiment is illustrated in Figure 2. There are three starting requirements: (1) the proton beam must be highly polarized in the ring plane; (2) the momentum of the beam must match the magic value of p = 0.7007 GeV/c, where the ring-plane spin precession is the same as the velocity precession, a condition called "frozen spin"; (3) the polarization is initially along the axis of the beam velocity. The electric field acts along the radial direction toward the center of the ring (E). It is perpendicular to the spin axis (p) and therefore perpendicular to the axis of the EDM. In this situation the spin will precess in the vertical plane, as shown in Figure 2. The appearance of a vertical polarization component with time is the signal for a non-vanishing EDM. This signal is measured at the polarimeter, where a sample of the beam is continuously brought to a carbon target. Elastic proton scattering is measured by two downstream detectors (shown in blue) [22,23].
The rates depend on the polarization component p_y because it is connected to the axial vector created from the proton momenta k_in × k_out. The sign of p_y flips between left and right as it follows the changing direction of k_out. Thus, the asymmetry in the left-right rates, (L − R)/(L + R) = p_y A, is proportional to p_y and hence to the magnitude of the EDM. The size of the effect at any given scattering angle also depends on the analyzing power A, a property of the scattering process. Having both left and right rates together reduces systematic errors.

A limited number of sensitive storage ring EDM experimental methods have been developed, with various degrees of sensitivity and levels of systematic error; see Table 1 [1,24]. Here we only address the method based on the hybrid-symmetric ring lattice, which has been studied extensively and shown to perform well, applying presently available technologies. The other methods, although promising, are outside the scope of this document, requiring additional studies and further technical developments. The hybrid-symmetric ring method is built on the all-electric ring method, improving it in a number of critical ways that make it practical with present technology. It replaces electric focusing with alternating-gradient magnetic focusing, still allowing simultaneous CW and CCW storage and eliminating the main systematic error source by design. A major improvement in this design is the enhanced ring-lattice symmetry, eliminating the next most important systematic error source, that of the average vertical beam velocity within the bending sections [1].

Symmetries in the hybrid-symmetric ring with 10^−29 e·cm sensitivity:
1. CW and CCW beam storage simultaneously.
2. Longitudinally polarized beams with both helicities.
3. Radially polarized beams with both polarization directions.
4. Current flip of the magnetic quadrupoles.
5. Beam planarity to 0.1 mm and beam splitting of the counter-rotating (CR) beams to < 0.01 mm.

Figure 2: Diagram of the storage ring EDM concept, with the horizontal spin precession locked to the momentum precession rate ("frozen" spin). The radial electric field acts on the particle EDM for the duration of the storage time. Positive and negative helicity bunches are stored, as well as bunches with their polarization pointing in the radial direction, for systematic error cancellations. In addition, simultaneous clockwise and counterclockwise storage is used to cancel the main systematic errors. The ring circumference is about 800 m. The top inset shows the cross-section geometry that is enhanced in parity-conserving Coulomb and nuclear scattering as the EDM signal increases over time.

Strategy for building a high sensitivity hadronic EDM experiment

The storage ring EDM method for the proton and deuteron nuclei with frozen spin provides the potential for a high sensitivity of 10^−29 e·cm, as explained below in the "EDM Statistics" section. The reason for the high potential sensitivity is the availability of high-intensity, highly polarized proton and deuteron beams with small phase-space emittance, since they are obtained from polarized ion sources, i.e., a primary source. Due to the negative value of the deuteron magnetic anomaly, the fields needed for the deuteron case are more complicated than for the proton, and the uncertainties are thus larger [25].
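To give a feel for the scale of the signal, the following sketch evaluates the EDM-induced vertical precession rate using the spin-1/2 relation ω = 2dE/ħ, with d = 10^−29 e·cm and E = 4.4 MV/m taken from the text; motional-field contributions and lattice details are deliberately ignored, so this is an order-of-magnitude illustration only.

```python
# Order-of-magnitude scale of the EDM signal (motional fields and
# lattice details ignored; design numbers taken from the text).
E_FIELD = 4.4e6              # radial electric field, V/m
D_EDM_ECM = 1e-29            # target EDM sensitivity, e*cm
E_CHARGE = 1.602176634e-19   # elementary charge, C
HBAR = 1.054571817e-34       # reduced Planck constant, J*s

d_si = D_EDM_ECM * 1e-2 * E_CHARGE   # EDM converted to C*m
omega = 2.0 * d_si * E_FIELD / HBAR  # precession rate, rad/s
tilt = omega * 1e3                   # tilt after a 1000 s store, rad

print(f"omega ~ {omega:.2e} rad/s")  # ~1.3e-09 rad/s
print(f"tilt  ~ {tilt:.2e} rad")     # ~1.3e-06 rad per fill
```

A microradian-level polarization tilt per fill is what the polarimeter must resolve statistically, which is why high intensity, a long SCT, and tight systematic control are all essential.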
The proton EDM ring, using the hybrid-symmetric ring lattice, has been studied extensively [1] (see also [26,27]), and the conceptual design report (CDR) will be largely based on it. The cost of the experiment is similar to the muon (g − 2) cost of about $100 M.

2. Study the optimum material, height, and shape of electric field plates with a field strength of 4.4 MV/m for 4 cm plate separation, in order to minimize the highest E-field value. Comment: small-surface-area E-field plates made of aluminum and coated with TiN have been developed and used at J-LAB; they are rather cheap and robust [43-45]. This technology needs to be expanded to large-area plates, about 20 cm high and 2 m long.
3. Construct a hydraulic level reference system (HLS) able to keep the ring planarity of the stored beam within 0.1 mm. A similar system developed at Fermilab [46,47] would be adequate for the needs of the experiment.
4. Test a magnetometer capable of probing the separation of the counter-rotating beams at the 10 µm level. SQUID-based magnetometers have demonstrated 10 nm/√Hz in the lab [33], much better than needed; cheaper technologies are also available.
5. Develop a magnetic quadrupole prototype with emphasis on systematic error minimization when flipping the currents.
6. Design and construct a combined (hybrid) sextupole system including electric and magnetic fields.
7. Study the application of trim fields, both electric and magnetic, and develop prototypes of both.
8. Produce a detailed study of the RF cavity, including a choice of the frequency and tunable range.
9. Construct a straight section equal to 1/48th of the ring and operate all elements together to discover any possible interferences.

A Highly Symmetric Lattice

A highly symmetric lattice is necessary to limit the EDM and dark matter/dark energy systematics; see [1,3]. The 24-fold symmetric ring parameters are given in Table 2. The ring circumference is 800 m, with a bending electric field of 4.4 MV/m. This circumference is that of the BNL AGS tunnel, which would save tunnel construction costs. E = 4.4 MV/m is conservative. A pEDM experiment at another location could have up to E = 5 MV/m without further R&D progress, see [43-45] and Figure 3, setting the scale of the required ring circumference.

Random misalignment of quadrupoles

Random misalignment of quadrupoles in both the x and y directions leads to various systematic error sources. The systematic error sources directly caused by it include:
• Radial magnetic field.
By randomly moving quadrupoles in the x, y directions by various σ amounts we can estimate the effect of such systematics. In addition, by repeating this procedure with multiple random seeds, we can eliminate the possibility of a "lucky configuration" (Figure 4).

Spin Coherence Time

Spin Coherence Time (SCT), also known as the in-plane polarization (IPP) lifetime, is the amount of time for which the beam can stay longitudinally polarized. An SCT of around 10^3 s is required for the proton EDM experiment [49]. In order to demonstrate a large SCT, sextupoles with strengths k^m_1 and k^m_2 are placed within (on top of) the magnetic quadrupoles. Effectively, the entire storage ring is then covered with 24 sextupoles of strength k^m_1 and 24 sextupoles of strength k^m_2 (following the same alternating pattern as the quadrupoles). In other words, the quadrupoles, in addition to normal operation, also act as sextupoles.
Although using correct magnetic sextupoles leads to a prolonged SCT, the same set of k^m_1, k^m_2 does not lead to a long SCT for both CR beams. A natural attempt is to see how electric sextupoles k^e_1, k^e_2 of similar strength affect the SCT. If we assign the magnetic sextupoles strengths k^m = k^m_1 = −k^m_2 (alternating in sign like the magnetic quadrupoles), and the electric sextupoles k^e = k^e_1 = k^e_2 (same in sign like the electrostatic deflectors), CW-CCW symmetry should be conserved in principle. By combining magnetic and electric sextupoles ("hybrid sextupoles"), the equivalence of CW and CCW is restored. Using a realistic bunch structure (Figure 5), we see a large SCT improvement when using the hybrid set of sextupoles (Figure 6).

Figure 6 legend: x = ±5 mm, y = ±5 mm, Δp/p = 10^−4; with 6-pole CW; with 6-pole CCW; without 6-pole.

Polarimetry

Tests with beams and polarimeters at several laboratories (BNL, KVI, COSY) have consistently demonstrated over more than a decade that the requirements of the storage ring EDM search are within reach [23,31,32,50-54]. Of particular importance, it has been shown that polarimeters based on forward elastic scattering offer a way to calibrate and correct geometrical and counting-rate systematic errors in real time. Sextupole field adjustments along with electron cooling yield long lifetimes for a ring-plane polarization whose direction may be controlled using polarimeter-based feedback. Given the extensive model-based studies demonstrating that ring designs using the symmetries described above can control EDM systematics at the 10^−29 e·cm level [1], the optimum path forward is to continue these developments on a full-scale hybrid, symmetric-lattice machine. The features of the forward-angle elastic scattering polarimeter are listed below:
• Carbon target, observing elastic scattering between 5° and 15°. Target thickness: 2 cm to 4 cm. Angular distributions are shown in Figure 7 from Ref. [22].
• CW and CCW polarimeters share a target in the middle. Calibrate using vertical polarization.
• Efficiency: ∼1% of the particles removed from the beam become part of the useful data stream.
• Full azimuthal coverage and forward/backward polarization allow first-order systematic error monitoring using the four counting rates denoted by left/right detectors and forward/backward polarization. One combination of these rates is polarization-insensitive while measuring a first-order driver of systematic errors. Corrections to the signal may be made to second order in this driver in real time, which appears successful in correcting the signal at levels below 10^−5 [23].

EDM Statistics

The statistical sensitivity of a single measurement, as exemplified by the neutron EDM case, is inversely proportional to the beam polarization, the analyzing power, the spin coherence time (SCT), and the square root of the number of detected events. The advantage of the storage ring method over using neutrons is that high-intensity, highly polarized beams with small values of the relevant phase-space parameters are readily available. As a consequence, it is possible to achieve a long SCT with horizontally polarized beams, as was calculated analytically and demonstrated at COSY [31,32].
Under optimized running conditions, where the beam storage duration is half the SCT, the EDM statistical sensitivity of the method is given by [4]

σ_d = 2.33 ħ / (P_0 A E √(N_cyc k T_exp τ_p)),

where P_0 (∼0.8) is the horizontal beam polarization, A (∼0.6) is the asymmetry, E (3.3 MV/m = 4.4 MV/m × 600 m/800 m) is the average radial electric field integrated around the ring, k (1%) is the polarimeter detector efficiency, N_cyc (∼2 × 10^10) is the number of stored particles per cycle, T_exp (1 × 10^8 s) is the total duration of the experiment, and τ_p (2 × 10^3 s) is the in-plane (horizontal) beam polarization lifetime (equivalent to the SCT). An SCT of 2 × 10^3 s, i.e., an optimum storage time of 10^3 s, is assumed here in order to achieve a statistical sensitivity at the 10^−29 e·cm level, while assuming a total experiment duration of 80 million seconds (in practice, corresponding to roughly five calendar years). Such beam storage might require stochastic cooling due to IBS and beam-gas interactions. The estimated SCT of the beam itself (without stochastic cooling), as indicated by preliminary results from high-precision beam/spin-dynamics simulations, is greater than 2 × 10^3 s, limited by the simulation speed.
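As a numerical sanity check, the sketch below plugs the quoted parameter values into the sensitivity expression above; the 2.33ħ prefactor follows the storage-ring EDM literature and should be treated as an assumption of this sketch rather than an exact design number.

```python
import math

# Sanity check of the statistical sensitivity with the quoted numbers.
HBAR = 1.054571817e-34      # J*s
E_CHARGE = 1.602176634e-19  # C

P0 = 0.8       # horizontal beam polarization
A = 0.6        # asymmetry (analyzing power)
E_AVG = 3.3e6  # average radial electric field, V/m
K_EFF = 0.01   # polarimeter detector efficiency
N_CYC = 2e10   # stored particles per cycle
T_EXP = 1e8    # total experiment duration, s
TAU_P = 2e3    # in-plane polarization lifetime (SCT), s

sigma = 2.33 * HBAR / (P0 * A * E_AVG * math.sqrt(N_CYC * K_EFF * T_EXP * TAU_P))
print(f"sigma_d ~ {sigma / E_CHARGE * 100:.1e} e*cm")  # ~1.5e-29 e*cm
```

The result lands at the 10^−29 e·cm level quoted in the text.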
[Figure 8: ALP-EDM coupling versus log₁₀(m_a/eV). Shaded exclusions: (blue) the neutron EDM measurement [61] and (orange) astrophysical constraints from supernova cooling [57]. The other shaded regions are theoretically motivated: (brown) the QCD axion, (purple) ALP cogenesis with the coupling constant c_aNN between 1 and 10 [62] and (green) the Z_N axion when it makes up the entire local dark matter density [63,64].]

The magnitude of the axion field gradient ∇a is boosted significantly when sampled by a relativistic particle in a storage ring, providing a promising sensitivity to g_aNN with dedicated experimental configurations.

Conclusions

A storage ring proton EDM experiment offering unprecedented statistical sensitivity at the 10⁻²⁹ e·cm level can be built based on present technology. The proposed method is based on the hybrid-symmetric ring lattice, the only lattice that eliminates the main EDM systematic error sources within the capacity of present technology. At the 10⁻²⁹ e·cm level, this would be the best EDM experiment using one of the simplest hadrons. The facility would also permit studying the deuteron/³He EDM with about an order of magnitude lower sensitivity. Finally, DM/DE experiments running in parasitic mode could probe previously unexplored parameter space.
VirusMentha: a new resource for virus-host protein interactions

Viral infections often cause diseases by perturbing several cellular processes in the infected host. Viral proteins target host proteins and either form new complexes or modulate the formation of functional host complexes. Describing and understanding the perturbation of the host interactome following viral infection is essential for basic virology and for the development of antiviral therapies. In order to provide a general overview of such interactions, a few years ago we developed VirusMINT. We have now extended the scope and coverage of VirusMINT and established VirusMentha, a new virus–virus and virus–host interaction resource built on the detailed curation protocols of the IMEx consortium and on the integration strategies developed for mentha. VirusMentha is regularly and automatically updated every week by capturing, via the PSICQUIC protocol, interactions curated by five different databases that are part of the IMEx consortium. VirusMentha can be freely browsed at http://virusmentha.uniroma2.it/ and its complete data set is available for download.

INTRODUCTION

Viruses exploit the molecular machinery of the infected host to support their own life cycle and target host defense mechanisms to escape host resistance. This is achieved by establishing a virus-specific protein interaction network that perturbs cell processes, such as DNA replication, gene expression, growth and differentiation (1,2). A prerequisite for a complete understanding of a virus life cycle is a proteome-wide description of the complexes formed by viral proteins, either on their own or in combination with host proteins. Over the past decades, a growing number of small- or large-scale studies reported evidence for interactions between viral and host proteins. Although this information can in principle be extracted from generic protein interaction databases, 5 years ago we developed VirusMINT (3), a protein-protein interaction (PPI) database focused on viral interactions. This offered two advantages: an increased focus of the MINT (4) curation team on the curation of viral interactions and the development of a dedicated website that could archive information useful for the community of experimental virologists. VirusMINT only offered data curated by the MINT curation team. VirHostNet (5), a second resource dedicated to virus interactions, adopted the strategy of merging various data sources. This database contains re-curated data extracted from primary resources that use different curation strategies. In addition, VirHostNet complements this information with newly curated interactions. This approach has generated a network consisting of nearly 5000 non-redundant interactions between viral and host proteins. However, the integration strategy adopted by this resource implies substantial work, making updates infrequent. As a consequence, the VirHostNet data set has not been updated since its publication. As technologies to identify protein interactions evolve, generic and specialized journals regularly report new virus-host protein interaction information (6,7). This abundance of data demands a strategy to capture and integrate the information regularly, with minimal hands-on effort. In order to automate data merging, we recently implemented a resource called mentha that automatically integrates the content of five different PPI databases (8).
By exploiting a similar automatic procedure we created VirusMentha, a resource that builds on the work started in VirusMINT by offering the curated data from our group combined with the data captured from the other IMEx (9) partners. Every week, VirusMentha integrates the virus-host interactions curated by the IMEx databases, removing redundancy. Differently from other valuable resources that are exclusively focused on specific organisms or viruses, such as HCVpro (http://cbrc.kaust.edu.sa/hcvpro/) (10), VirusMentha does not impose any limitation with respect to virus strain or to host organism. VirusMentha currently contains interactions for 24 viral families. It also offers interactions between viral proteins and different host organisms, such as Homo sapiens, Mus musculus, Arabidopsis thaliana and so on.

IMPLEMENTATION

Molecular interaction evidence is reported in the scientific literature in natural language format, thus making retrieval and processing a difficult task (12,13). The data contained in VirusMINT are manually curated by our curation team in compliance with IMEx standards. In order to further expand the coverage of the virus-virus and virus-host interactome, we decided to integrate our data with data curated and stored in the databases of the IMEx partners. IMEx is an international consortium whose purpose is to make protein interaction curation more efficient by distributing the curation load and by limiting the duplication of effort. IMEx standards represent a seal of quality among the several PPI repositories (14). The databases that we chose to integrate are MINT, IntAct (15), DIP (16), MatrixDB (17) and BioGRID (18). VirHostNet data were not imported because the database does not fully conform to PSI-MI standards and does not provide enough experimental details. The procedure used for data merging is the same as the one developed for mentha. mentha retrieves data from external resources using the PSICQUIC (Proteomics Standard Initiative Common QUery InterfaCe) protocol (19), which provides standard programmatic access to molecular interaction repositories. Using the ontology trees defined by IMEx, the procedure identifies whether two pieces of evidence are actually the same piece of evidence described at different levels of curation depth. VirusMentha exploits the same procedure and offers direct access to evidence of viral protein interactions (a sketch of such a retrieval step is given below). The mentha procedure detects redundancy using the PSI-MI TAB files provided by the five source databases listed above. The key pieces of information used to identify redundancy, i.e. interaction type, experimental method and publication ID, are supported by every database of the IMEx consortium. However, these databases do not all offer the same version of the PSI-MI TAB file. MINT and IntAct are currently the two major databases in the IMEx consortium that offer the PSI-MI TAB 2.7 format defined in PSICQUIC. The PSI-MI TAB file format has been updated to version 2.7 to allow curators to add extra information, such as the polypeptide fragment (of viral polyproteins) involved in the interaction. In order to preserve the richness of this format, VirusMentha implements a detailed view if the evidence is curated by databases supporting PSI-MI TAB 2.7.

TAXONOMY ORGANIZATION

VirusMentha organizes the interactions according to viral families (Figure 1). The classification is based on the Baltimore scheme, which groups viruses by the nucleic acid content of the virion (20,21).
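As a sketch of the PSICQUIC-based retrieval described in the Implementation section above (the base URL is a placeholder, not a real service; actual endpoints should be taken from the PSICQUIC registry, and the query path below follows the PSICQUIC REST convention):

```python
import urllib.request
import urllib.parse

# Hypothetical endpoint for illustration; look up the real base URL
# for each IMEx service in the PSICQUIC registry.
BASE = "http://example.org/psicquic/webservices/current/search/query/"

def query_psicquic(miql: str, max_rows: int = 100):
    """Run a MIQL query against a PSICQUIC service and parse PSI-MI TAB rows."""
    url = BASE + urllib.parse.quote(miql) + f"?firstResult=0&maxResults={max_rows}"
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    interactions = []
    for line in lines:
        cols = line.split("\t")
        if len(cols) < 15:          # MITAB 2.5 has 15 mandatory columns
            continue
        interactions.append({
            "id_a": cols[0],        # unique identifier, interactor A
            "id_b": cols[1],        # unique identifier, interactor B
            "method": cols[6],      # interaction detection method
            "publication": cols[8], # publication identifier(s)
            "taxid_a": cols[9],     # NCBI taxonomy, interactor A
            "taxid_b": cols[10],    # NCBI taxonomy, interactor B
        })
    return interactions

# Example: all interactions involving HIV-1 (NCBI taxid 11676) proteins.
# rows = query_psicquic("taxidA:11676 OR taxidB:11676")
```

Redundant evidence captured from more than one database can then be recognized by keying each row on the interactor identifiers together with the detection method and publication ID, as described above.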
This classification is generally used in conjunction with the ICTV (International Committee on Taxonomy of Viruses) classification system in order to create a tree of viral species (22). In order to give access to this hierarchy of viral strains and families, we have reconstructed the taxonomy tree of viruses using the classification information provided by UniProt (23). In this taxonomy tree, as shown in Figure 1, the first level is defined as per the Baltimore classification while families are defined according to the ICTV classification system. In order to handle this taxonomic hierarchy, the retrieval and integration procedure of VirusMentha analyses each interaction and, for each viral protein in the interaction, extracts its taxonomy ID (organism). The mapping procedure is illustrated in Figure 1. Using the taxonomy tree, each virus strain (taxonomy number) is mapped to the most general viral family, without dropping the original strain identifier.

DATA GROWTH AND STATISTICS

Over the past years, the MINT curation team has curated a large number of articles using a detailed curation policy. The integration of VirusMINT data with other databases on a weekly basis has resulted in the most comprehensive and self-updating resource available so far. As of August 2014, VirusMentha contained 8084 non-redundant interactions supported by 8450 publications (Figure 2).

WEB INTERFACE

Searches in the VirusMentha data set can be carried out directly from the home page by entering in the search field UniProt IDs, gene names, polypeptide names, keywords or a single PMID (PubMed publication ID). To offer direct access to the VirusMentha data set we implemented the possibility of searching and downloading viral interaction evidence from a virus or host perspective. The user can decide to search the entire database or to restrict the search to specific organisms or viral families. The search results are presented as a list of proteins whose gene name, UniProt ID, description or PMID matches the query. The user can select the proteins of interest by adding them to the 'protein bag'. All the interactions related to the selected proteins can be browsed either as a list in a table format or in a network view by starting a graphical applet. A user who wants to compare the same protein from different isolates can use the 'Align' button to see their similarities; the alignment is performed on the fly using BLOSUM62 as the substitution matrix, with the following gap penalties: open, 10; extend, 0.5 (a sketch of this setup is given below). Experimental details linked to each interaction can be accessed by clicking the 'show evidence' button. Together with the link to the original paper, the interaction detection method and the interaction type of the experiment are displayed. Moreover, by clicking the magnifier icon, a pop-up window opens showing the curation details as per PSI-MI TAB 2.7. In the binary interaction result page it is possible to filter relevant interactions using keywords. For instance, the user can search for a specific human protein that interacts with several types of Herpes simplex virus strains and, using the field provided, restrict the list only to the 'Patton' strain. The VirusMentha interface has been developed to facilitate the assembly and modification of networks of interactions. The graphical applet displays viral proteins in cyan. The networks built by the user through various searches and/or network manipulations can be downloaded as a plain text file.
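The on-the-fly alignment offered by the 'Align' button can be reproduced with standard tools. A minimal sketch using Biopython's PairwiseAligner with the same BLOSUM62 substitution matrix and gap penalties (the two sequences are placeholders standing in for the same protein from two viral isolates):

```python
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.mode = "global"
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10.0    # "open: 10" penalty
aligner.extend_gap_score = -0.5   # "extend: 0.5" penalty

# Placeholder sequences standing in for two isolates of the same protein.
seq_a = "MSTNPKPQRKTKRNTNRRPQDVKFPGG"
seq_b = "MSTNPKPQKTKRNTNRRPQDVKFPGG"

alignments = aligner.align(seq_a, seq_b)
best = alignments[0]
print(f"score = {best.score}")
print(best)
```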
Exported networks always report the strain number and family number to facilitate clustering by family. Finally, for each protein VirusMentha reports the associated Gene Ontology terms (24), links to antibodies through Antibodypedia (25) and cleaved polypeptides; for each interaction, the MINT score and links to the papers supporting the interaction are reported (Figure 3).

CONCLUSION AND PERSPECTIVES

VirusMINT was developed as a public repository to capture and organize manually curated information about interactions between viral and host proteins. VirusMentha extends this concept to offer the most comprehensive and self-updating resource for viral interactions available so far. In addition, VirusMentha exploits the curation potential of five different database curation teams by integrating data curated by databases adhering to the IMEx consortium. Moreover, VirusMentha continues to benefit from the VirusMINT curation team that regularly captures data published in peer-reviewed scientific journals. Finally, VirusMentha has the advantage of being regularly and automatically updated every week. According to Google Scholar, VirusMINT has been cited in ∼100 papers (99 in August 2014), testifying to the interest of the community of virologists in such a dedicated protein interaction repository. As more relevant information is reported in the scientific literature, we expect VirusMentha, with its higher coverage and curation depth, to become a valuable resource for more comprehensive analyses of viral mechanisms and interactions.
Comparison of radiomics tools for image analyses and clinical prediction in nasopharyngeal carcinoma

Objective: Radiomics pipelines have been developed to extract novel information from radiological images, which may help in phenotypic profiling of tumours that would correlate to prognosis. Here, we compared two publicly available pipelines for radiomics analyses on head and neck CT and MRI in nasopharynx cancer (NPC). Methods and materials: 100 biopsy-proven NPC cases stratified by T- and N-categories were enrolled in this study. Two radiomics pipelines, Moddicom (v. 0.51) and Pyradiomics (v. 2.1.2), were used to extract radiomics features of CT and MRI. Segmentation of the primary gross tumour volume was performed using Velocity v. 4.0 by consensus agreement between three radiation oncologists. Correlation between common features of the two pipelines was analysed by Spearman's rank correlation. Unsupervised hierarchical clustering was used to determine the association between radiomics features and clinical parameters. Results: We observed a high proportion of correlated features in the CT data set, but not for MRI; 76.1% (51 of 67 features common between Moddicom and Pyradiomics) of CT features and 28.6% (20 of 70 common) of MRI features were significantly correlated. Of these, 100% were shape-related for both CT and MRI, 100 and 23.5% were first-order-related, and 61.9 and 19.0% were texture-related, respectively. This interpipeline heterogeneity affected the downstream clustering with the known prognostic clinical parameters of cTN-status and GTVp. Nonetheless, shape features were the most reproducible predictors of clinical parameters among the different radiomics modules. Conclusion: Here, we highlighted significant heterogeneity between two publicly available radiomics pipelines that could affect the downstream association with prognostic clinical factors in NPC. Advances in knowledge: The present study emphasized the broader importance of selecting stable radiomics features for disease phenotyping, and it is necessary, prior to any investigation of multicentre imaging datasets, to validate the stability of CT-related radiomics features for clinical prognostication.

BACKGROUND

Nasopharyngeal carcinoma (NPC) is a malignant head and neck cancer that is endemic in the Southeastern parts of Asia and North Africa. 1 This tumour is characterized by its sensitivity to radiation and platinum-based chemotherapy, and as such survival has improved substantially with the advent of radiotherapy advancement and combination chemoradiotherapy in locally advanced cases. 2 Nonetheless, combination therapies also contribute to incremental treatment-related toxicities, and thus there is a push to tailor treatment intensity based on more precise clinical risk stratification. To this end, there is a keen interest to explore novel biomarkers derived
from molecular profiling of tumours that could inform on prognosis. More recently, radiomics, which aims to use algorithms to extract deep radiological features not visible to the naked eye, has been explored as a tool to characterize novel radiological phenotypes that are correlated with prognosis in a number of tumour types. [3][4][5][6] Several radiomics pipelines have been developed, most of which are agnostic to the type of imaging modality and anatomical site from which the region of interest (ROI) is localized. 7 It is thus possible that differences in image acquisition and reconstruction methods contributing to variation in image quality, among other factors including accuracy of ROI segmentation, coding variation and baseline clinical characteristics, could affect the downstream feature extraction process in radiomics. Collectively, these caveats motivated a global collaborative effort to harmonize the radiomics analytical processes, [8][9][10] and the first undertaking of the group was the standardization of some publicly available radiomics pipelines. Among them, Pyradiomics is the most widely reported radiomics tool in the literature. It provides a flexible analytical platform with a simple and convenient front-end interface in 3-dimensional (3D) Slicer, a free open-source platform for medical image computing. 11 It treats the voxel-to-voxel relationship in a 3D manner (the default software option), and outputs several radiomics indices relating to first-order, texture and shape modules. Next, Moddicom is an in-house developed radiomics pipeline able to handle DICOM/DICOM-RT objects; unlike Pyradiomics, it considers two-dimensional voxel-to-voxel relationships for each transverse slice within the ROI, and then generates an aggregated score by taking the mean over all slices for the textural radiomics indices. It was developed primarily to interrogate fractal radiomics in MRI. 12 Both radiomics tools have been investigated in several human cancers, including oesophageal, lung and head and neck cancers,
and specific radiomics features have been highlighted to correlate with prognosis. 5,13,14 All of the common features defined in both radiomics tools are International Image Biomarker Standardization Initiative-compliant. 11,12 However, while both pipelines share common radiomics features and definitions, as aforementioned, the algorithmic implementations underlying the feature extraction process differ between them. It is therefore not known if such interpipeline variations could affect the association with clinical parameters. Against this background, we investigated the utility of Pyradiomics and Moddicom as radiomics tools to analyze CT and MRI datasets in a cohort of NPC cases. In addition, we included a third radiomics pipeline, the Computational Environment for Radiological Research (CERR), 15 for comparison with Pyradiomics and Moddicom. We included tumours of different T-, N-categories and gross tumour volumes (GTV), and observed significant heterogeneity between the two tools, even for the same features, which affected the correlation with these known prognostic clinical parameters.

METHODS AND MATERIALS

Study cohort
We utilized a data set of 100 patients with biopsy-proven NPC from a single academic institution. All patients fulfilled the following criteria: (1) differentiated or undifferentiated non-keratinizing NPC based on the WHO classification; (2) absence of distant metastasis; and (3) treatment with intensity-modulated radiotherapy (IMRT). Patient demographics, including age, gender and baseline comorbidities, were collected. Tumour characteristics including T-, N-category and GTV of the primary tumour (GTVp) were recorded; all patients were restaged according to the American Joint Committee on Cancer seventh edition/International Union Against Cancer (2010) stage classification system. For the purpose of this study, which was primarily to investigate the implications of interpipeline heterogeneity, we included equal numbers of patients with T1-T4 status NPC. Ethical approval for the study was obtained from the SingHealth Centralised Institutional Review Board (protocol no. 2018/2352). Informed consent was obtained from all living patients.

Treatment strategies
All patients underwent IMRT as primary treatment of NPC. The IMRT planning and treatment protocol were as previously reported. 16 Briefly, the GTVp and clinically involved nodes were outlined, followed by high-risk and low-risk clinical target volumes (CTVs) in the primary tumour region and uninvolved nodal levels. Doses of 70, 60 and 54 Gy, delivered with a simultaneous boost technique in 33 fractions, were prescribed to the GTV and the high- and low-risk CTVs, respectively. Dose constraints to critical organs at risk were determined by the standard threshold doses. Additionally, for patients with Stage III-IVb disease, IMRT was given in combination with concurrent chemotherapy of cisplatin (either 40 mg/m² weekly or 100 mg/m² 3-weekly), along with either neoadjuvant or adjuvant platinum-based chemotherapy regimens.

Imaging protocol
CT perfusion (CTP) scans of the nasopharynx were performed using a 120 kVp 64-slice multidetector scanner (Somatom AS, Siemens Medical Solutions, Forchheim, Germany) with a field of view (FOV) of 50 cm, a slice thickness of 2 mm and a matrix of 512 × 512. The reconstruction kernel used was B40s. 60 ml of non-ionic low-osmolar contrast material was administered intravenously at an injection rate of 1.2 ml s⁻¹.
Scanning was performed after a 62 s injection delay, using the following parameters: 120 kVp, 100 mAs and 2.5 mm contiguous sections, with an ongoing injection of 30 ml of saline bolus at a flow rate of 1.3 ml s⁻¹ and a total scan time of 72 s. All patients underwent contrast-enhanced MR examination of the nasopharynx and cervical region using head and neck coils with 1.5 T MR scanners (GE HDxt 1.5T, GE Healthcare, Chicago, IL). The MR scanner has an FOV of 23 cm, a number of signal averages (NSA) of 2 and an acquisition matrix of 320 × 224. T1-weighted (T1W) fast spin-echo images in the axial plane (spacing between slices = 3 mm) and T2-weighted (T2W) fast spin-echo MR images in the axial plane (spacing between slices = 3 mm) were obtained before contrast was administered. After bolus injection of contrast, axial T1-weighted fast spin-echo sequences were performed (with the same parameters as before contrast).

GTVp segmentation
Manual segmentation was performed on all CT and MRI images by three experienced radiation oncologists (ZL, LL and MC). The final GTVp ROI was decided by consensus agreement between the outlined contours. NPC often has an irregularly shaped contour due to its propensity to infiltrate adjacent anatomical barriers in the cranium and parapharynx, which results in a high degree of interindividual heterogeneity in tumour outlining. Hence, the consensus agreement between the radiation oncologists in tumour delineation represents a more pragmatic approach towards extracting robust feature values. Delineation on CT was performed using standardised window settings: -99, -60, 88 Hounsfield units (HU; tumour); -132, -176, 257 HU (soft tissue); -99, -176, 1440 HU (bone).

Radiomics feature extraction
Following anonymization of DICOM images, Pyradiomics (v. 2.1.2) 11 and Moddicom (v. 0.51) 12 were applied for feature extraction from both contrast-enhanced CT and MRI images; only MRI T2W images were considered for this study to ensure consistency in the GTVp segmentation and feature extraction processes. For both pipelines, features from the following five radiomics modules were extracted: (1) first-order, (2) grey level co-occurrence matrix (GLCM), (3) grey level run length matrix (GLRLM), (4) grey level size zone matrix (GLSZM) and (5) shape. Two other modules [grey level difference matrix (GLDM) and neighbouring grey tone difference matrix (NGTDM)] were extractable by Pyradiomics. In addition, CERR, another open-source platform for computational radiomics, 15 was also used for feature extraction from MRI T2W images.

Statistical considerations
Common radiomics features between Pyradiomics and Moddicom were analysed using Spearman's rank correlation. Correlated features were defined as those with a p-value ≤ 0.05. Unsupervised hierarchical clustering of the radiomics features of the two pipelines against the known prognostic clinical variables of T-, N-categories and GTVp was performed in R (using ward.D2 as the agglomeration method). [17][18][19] All statistical analyses were performed with the R statistical package (v. 3.5.2, https://www.r-project.org). A two-sided p ≤ 0.05 was set as the cut-off for statistical significance. We used this analysis method instead of conventional regression analysis or random forests for model building because of the small sample size in this study (100 patients). Additionally, common radiomics features between Pyradiomics and CERR were analysed using Spearman's rank correlation for the MRI images; a sketch of the correlation-and-clustering workflow is given below.
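As a sketch of the interpipeline comparison just described (written in Python with SciPy rather than the R used in the study; the feature arrays are synthetic placeholders), one can compute Spearman's rank correlation for each common feature across patients and then cluster a feature matrix with Ward linkage, which on Euclidean distances corresponds to R's ward.D2:

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_patients, n_features = 100, 8

# Placeholder feature matrices: rows = patients, columns = common features
# extracted by two different pipelines from the same images.
pipeline_a = rng.normal(size=(n_patients, n_features))
pipeline_b = 0.7 * pipeline_a + 0.3 * rng.normal(size=(n_patients, n_features))

# Feature-by-feature Spearman correlation between pipelines.
n_correlated = 0
for j in range(n_features):
    rho, p = spearmanr(pipeline_a[:, j], pipeline_b[:, j])
    if p <= 0.05:
        n_correlated += 1
print(f"{n_correlated}/{n_features} features correlated at p <= 0.05")

# Unsupervised hierarchical clustering of patients on pipeline A's features.
# scipy's 'ward' on Euclidean distances matches R's ward.D2 agglomeration.
Z = linkage(pipeline_a, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```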
RESULTS

Patient characteristics
Clinical characteristics of our study cohort are listed in Table 1. Corresponding N-status, GTVp and treatment parameters are summarized for each T-category. Overall, 72 patients were male and 28 were female; the median GTVp was 20.5 (IQR = 13.0-34.2) cc. 29 and 71 patients received IMRT and chemo-IMRT, respectively, as treatment for their disease.

CT and MRI radiomics features in NPC
The extracted radiomics features from the CT and MRI data sets are summarized in Supplementary Table 1. In the CT data set, 105 features were extracted using Pyradiomics; 88 features were extracted using Moddicom, of which 11 features were excluded as they yielded an infinite value. Between them, 67 common features were identified: 17 first-order, 18 GLCM, 12 GLRLM, 12 GLSZM and 8 shape (Figure 1A). 64 patients had available MRI at baseline (3 patients did not have paired MRI images and feature extraction failed in 33 patients). 105 features were extracted using Pyradiomics and 88 features using Moddicom, of which 8 were excluded. This yielded 70 common features: 17 first-order, 21 GLCM, 12 GLRLM, 12 GLSZM and 8 shape (Figure 1B). The numerical symbols of the radiomics features extracted by Moddicom and Pyradiomics are summarized in Supplementary Table 2.

Interpipeline variation for radiomics features
For the CT data set, Spearman's rank correlation analyses between Pyradiomics and Moddicom revealed that 51 of 67 common CT features met the p-value ≤ 0.05 threshold (Supplementary Figure 1). Among them, all the shape-class features were correlated between the pipelines (Figure 2), indicating that the calculation algorithm for this feature class is comparable between the two pipelines. First-order- and GLCM-class features also showed a high proportion of correlation among the common features, while a substantial proportion of GLRLM- and GLSZM-class features showed discordance between the pipelines (Figure 2). Percentages of correlated features for all the feature classes in descending order were as follows: 100% (8/8) for shape, 100% (17/17) for first-order, 88.9% (16/18) for GLCM, 41.7% (5/12) for GLSZM and 41.7% (5/12) for GLRLM.

Impact of interpipeline heterogeneity on clustering of clinical variables
Next, we interrogated the effects of the interpipeline variation in the extracted CT and MRI features on their association with the known prognostic clinical variables of T-, N-categories and GTVp. For the CT features, we observed consistent clustering with clinical parameters for the feature classes that showed a high proportion of interpipeline consistency (first-order, GLCM and shape; Figure 4A, B and E), whereas it was unsurprising that clustering patterns differed between pipelines for GLRLM and GLSZM features (Figure 4C and D). To summarize, 13 first-order (firstorder_02-09, 12-14, 16, 17), 8 GLCM (glcm_01-04, 07, 08, 17, 18), 3 GLRLM (glrlm_07, 09, 11), 3 GLSZM (glszm_01, 07, 11) and 6 shape (shape_03-08) common features showed consistent clustering for cT-category and GTVp, while 2 GLRLM (glrlm_02, 05) and 4 GLSZM (glszm_02-04, 10) features showed opposite clustering for cT-category and GTVp. No common shape features showed similar clustering with cN-category (Figure 4F). First-order and shape feature classes were highly correlated (100%) between pipelines, which explains the higher number of consistently clustered features and the lower number of oppositely clustered features.
Textural features (GLRLM and GLSZM) are generally poorly correlated (Figure 2), which results in a lower number of consistently clustered features and a higher number of oppositely clustered features. For the MRI features, we observed consistent clustering with clinical parameters only for shape-related features, which was expected given that this class of features showed the least interpipeline variation (Figure 5E). However, for the other classes of features that showed significant interpipeline heterogeneity, clustering patterns differed between Moddicom and Pyradiomics for first-order, GLCM, GLRLM and GLSZM features (Figure 5A-D). For MRI, the clustering outcomes were as follows: five common shape features (shape_03-05, 07, 08) showed the same clustering for cT-status, cN-status and GTVp (Figure 5F). In contrast to the clustering results for CT features, only the shape features (which are perfectly correlated between pipelines) gave a high number of consistently clustered features. The remaining features are poorly correlated (Figure 3) and yielded few consistently clustered features.

DISCUSSION

Radiomics refers to the comprehensive quantification of radiological phenotypes using data characterization algorithms. Through this scientific method, we potentially gain a new paradigm for interrogating imaging data sets, which allows us to add information beyond quantification of tumour volume, number and locality. The latter are conventional indices that could potentially inform on tumour aggression, but nonetheless few, if any, of these factors are being used to guide treatment in the clinic. It is therefore envisioned that characterization of deeper radiological phenotypes would help to enhance the prediction of tumour biology and improve clinical stratification of cancer patients. Moreover, its appeal stems from the advantage that radiomics relies on profiling of images as opposed to molecular profiling of biopsied tumour specimens, and thus offers a non-invasive method for surveillance of tumour response to treatment. On this note, Pyradiomics and Moddicom are published open-source platforms for radiomics analyses, which have been investigated in several human cancers. 11,12 While several signatures of tumour aggression and treatment response have been reported to date, the community remains sceptical due to uncertainty about the reproducibility of these radiomics signatures. Several questions, relating among others to data input, image processing, feature extraction and, ultimately, the sensitivity of the radiomics workflow to interpopulation heterogeneity, have not been addressed, all of which can influence the robustness of radiomics as a clinical tool. To partly address this conundrum, we embarked on a study to compare the output of Pyradiomics and Moddicom on CT and MRI imaging datasets in NPC, which is a common viral-associated head and neck cancer in East and South-Eastern Asia. 20 There are other works comparing other radiomics tools, 21,22 but this is the first work comparing Pyradiomics and Moddicom in the NPC context.
We made several key observations: (1) significantly more radiomics features extracted from CT data sets were comparable between Pyradiomics and Moddicom than from MRI (76.1% vs 28.6%); (2) consequently, CT-based radiomics features were significantly more stable and pipeline-agnostic in terms of association with clinical parameters; and (3) finally, it is interesting that among the different feature classes, several shape-related features were associated with GTVp (Figure 4E). These findings are crucial in instructing the workflow for future radiomics work in NPC. The results on clustering and Spearman correlation show that first-order and shape features are more robust to interpipeline heterogeneity than textural features such as GLCM, GLRLM and GLSZM. This is because textural feature calculations are sensitive to pre-processing steps: (1) interpolation methods (important when handling data with different slice and pixel spacings); (2) two-dimensional versus 3D methods for textural feature extraction; (3) the aggregation method for obtaining a scalar value from textural matrices; and (4) the quantization of voxel values for textural matrix computation. The poor correlation between pipelines for MR-based features (especially first-order) can be explained by noting that Moddicom 12 performed a re-scaling of voxel values to account for variation in physics acquisition settings for MRI sequences. The results on interpipeline correlation in this work are also mirrored in our additional study with a different radiomics software, CERR 15 (Supplementary Figure 3), where first-order and shape features tend to be more robust than textural ones. Hence, this work shows that apart from feature calculations being International Image Biomarker Standardization Initiative-compliant, it is important to understand and perhaps standardise the pre-processing method prior to feature extraction to achieve robust radiomics phenotyping. It is interesting to note that in our cohort, CT-derived features were more stable and reproducible than MRI-derived ones. This may be in part due to the complexity of the image acquisition protocols of the two imaging modalities. MRI image quality is dependent on spin-echo sequences of electromagnetic waves, as opposed to CT, which relies on the photoelectric effect of kV-strength ionizing radiation passing through tissues of differing atomic number. The discrepancy in our study perhaps highlights the importance of image processing prior to data input in the radiomics workflow. At present, it is not known if different normalization protocols are needed for radiomics analyses of CT and MRI data sets, and much work is needed in this domain going forward. Next, we observed that several shape features were correlated with known clinical prognostic variables in NPC such as cT- and cN-status and GTVp, which would suggest that various aspects of the tumour shape geometry and surface irregularities may be linked to the tumour burden. For example, shape_03-08 for CT and shape_03-05, 07 and 08 for MRI showed the same direction of correlation with GTVp in both Pyradiomics and Moddicom. Previous studies also showed that shape features might correlate with the GTVp of breast cancer and glioblastoma. 14,23 The reason shape features are stable may be that there is little difference in the shape information between CT and MRI, regardless of pipeline.
Shape_03 (shape_LeastAxis) yields the smallest axis length of the ROI-enclosing ellipsoid, while shape_07 (shape_surfaceArea) yields the surface area of the ROI; both appear consistent because the tumour volume or ROI is similar between CT and MRI. As such, these two features should be observed in future studies to confirm clinical prognostic correlations. Finally, we acknowledge that a main limitation of this present study relates to the small sample size of our cohort. Nonetheless, we contend that such a preliminary analysis is necessary prior to any large-scale multicohort radiomics study in NPC. Although there exist more commonly used machine learning techniques (lasso, regression, support vector machines, random forests) for predicting the end points, our motivation for choosing the unsupervised clustering method was to reduce the number of tested features and yet circumvent the problem of limited sample size in the analysis. We will compare both statistical approaches (supervised versus unsupervised) in a larger cohort going forward. In addition, the fact that significant interpipeline variation is detectable for a proportion of the feature classes and across imaging modalities, even in a limited cohort of 100 patients, supports our hypothesis that radiomics phenotyping is highly heterogeneous and that feature reproducibility is crucial for clinical prediction. Hence, our next phase of study will include interrogation of multicentre imaging data sets to validate the stability of CT-related radiomics features for clinical prognostication. In addition, we aim to investigate the biological correlates of these radiomics indices, as previously described in non-small cell lung cancer. 24

CONCLUSION

Here, we report on the significant heterogeneity in radiomics phenotyping between two publicly available feature extraction tools in NPC. The degree of interpipeline variation differs by feature class and imaging modality. This has a downstream impact on the association with prognostic clinical parameters in NPC such as tumour volume and extent of infiltration. Collectively, our findings emphasize the broader importance of selecting stable radiomics features for disease phenotyping, so as to lead to the development of a robust radiomics-based biomarker for clinical implementation.

ACKNOWLEDGMENTS

We thank the members of the joint Soo and Chua laboratory for their valuable scientific input and comments on the manuscript. We thank Prof Paul C. Boutros (UCLA) for his constructive comments on the manuscript.

COMPETING INTERESTS

There is no conflict of interest for the submitted work. MC reports personal fees from Astellas, personal fees from Janssen, grants and personal fees from Ferring, non-financial support from Astrazeneca, personal fees and non-financial support from Varian, grants from Sanofi Canada, grants from GenomeDx Biosciences, and non-financial support from Medlever, outside the submitted work.

ETHICS APPROVAL

Ethical approval for the study was obtained from the SingHealth Centralised Institutional Review Board (protocol no. 2018/2352).
Conserved Linking in Single- and Double-Stranded Polymers

We demonstrate a variant of the Bond Fluctuation lattice Monte Carlo model in which moves through cis conformations are forbidden. Ring polymers in this model have a conserved quantity that amounts to a topological linking number. Increased linking number reduces the radius of gyration mildly. A linking number of order 0.2 per bond leads to an eight-percent reduction of the radius for 128-bond chains. This percentage appears to rise with increasing chain length, contrary to expectation. For ring chains evolving without the conservation of linking number, we demonstrate a substantial anti-correlation between the twist and writhe variables whose sum yields the linking number. We raise the possibility that our observed anti-correlations may have counterparts in the most important practical polymer that conserves linking number, DNA.

I. Introduction

One may distinguish two types of flexible, linear polymer chains, as pictured in Figure 1: those that relax to equilibrium after being twisted (like polyethylene), and those that do not (like DNA). These non-relaxing chains obey a conservation law. For each chain configuration one may define a "linking number", and this linking number cannot change with time. The linking number of any closed, double-stranded chain is defined as the number of times one strand must be passed through the other in order to separate the strands. The consequences of conserved linking for DNA and for macroscopic cords are well appreciated. [1][2][3][4][5][6][7] A large imposed linking number leads such a chain to twist upon itself. This is the phenomenon of supercoiling. Supercoiling is a way of effecting global change in the properties of a chain by changing local mechanical structure in a small region. As such, it represents a powerful mechanism for controlling the chain, a mechanism that may be important for biological processes. Such far-reaching consequences are to be expected for any chain with conserved linking. This paper aims to explore the essential consequences of conserved linking by considering a primitive realization. Our realization is a lattice model that embodies conserved linking with a single strand and without the worm-like rigidity of DNA. The model resembles a hydrocarbon chain with a certain local restriction on the rotation of its backbone bonds. This chain proves to have properties quite different from those of DNA. It has strong excluded-volume swelling and thus shows simultaneous effects of swelling and supercoiling. As expected for polymers that conserve linking number, the chain contracts upon twisting. However, this contraction does not scale in the expected way with chain length. Another anomalous feature appears in the partitioning of the imposed linking number into "twist" and "writhe". The linking number of any chain may be decomposed into twist, a locally defined quantity, and writhe, a quantity that depends only on the backbone configuration. Our model shows a strong anti-correlation of twist with writhe, which is not generally anticipated in DNA. [8][9][10][11][12] The presence of this correlation in our model raises the possibility of such correlations in DNA. The statistics of twist and writhe in self-avoiding walks have been much explored in the literature. [13][14][15][16][17][18] The influence of linking number on DNA has been studied extensively via continuum models.
[19][20][21][22][23] Recently a powerful correspondence has been worked out between these continuum models and rigid-body dynamics. 24 Our work is rather in the alternative spirit of the work of Orlandini, Whittington and their collaborators. 13 We begin by describing the model and our Monte Carlo simulation embodying it. We report on the performance of the simulation and describe the crosschecks we used to test its validity. Next we recall the definitions of twist and writhe and report on their statistical behavior in our chain. Twist, writhe and linking number all have mean-squared averages that grow linearly with chain length, as expected. In addition our chain shows twist-writhe correlations even when the linking number is allowed to vary freely. These correlations correctly predict the response to an imposed linking number, via the fluctuation-dissipation theorem. Next we report how the size and shape of the chain respond to imposed changes in linking number, showing unexpected scaling with chain length. Then in the discussion section we explore the implications of our results. We show how a twist-writhe correlation naturally arises in our model from a mechanically induced coupling of twist with local torsion.

II. Description of Model and Simulation

We model the polymer chain as a self-avoiding walk on a lattice, in which we allow nearest-neighbor, next-nearest-neighbor and further local steps as described below. We include the additional restriction that no two adjacent segments may be collinear. This restriction is realistic for hydrocarbon polymers, and guarantees that any segment and its successor define a unique plane, which in turn has a non-degenerate normal vector. The relative orientation of two adjacent normal vectors establishes whether a sequence of three segments is in the so-called cis configuration (Figure 2). All configurations cost zero energy, except for the cis configuration, which is forbidden. Likewise, Monte Carlo moves which traverse a cis configuration are forbidden. If the ends of such a chain are joined to form a ring, the ring has a linking number that is conserved by the Monte Carlo dynamics. To see this, we define a partner strand by connecting the tips of the normal vectors for a given ring chain configuration. One may readily verify that a local move can change the linking of the backbone with this partner strand only by rotating through the forbidden cis configuration, so that the allowed moves conserve the linking number. For each chain configuration, the partner strand is assigned by taking the cross product of the two adjacent bond vectors at each vertex, and placing the partner node a small, fixed distance along this perpendicular vector. For each successive node, the cross product is reversed. Thus if τ_i is the vector pointing from the (i−1)th node to the ith node, the displacement vector u_i between the ith node and its counterpart on the partner strand is defined to be in the direction (−1)^i τ_i × τ_{i+1} (a code sketch of this construction is given below). Then the partner nodes are connected to form the partner strand, as shown in Figure 2a. For the staircase or "trans" configuration, the partner strand forms a staircase parallel to the chain. The chain is evolved by randomly choosing a node along the chain, and then selecting a random nearest-neighbor site for that node. The node is moved to the selected site if this move is allowed. The allowed bond vectors are shown in Figure 3. A lookup table streamlines the testing for forbidden moves.
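A minimal sketch of the partner-strand construction just defined, assuming the ring chain is stored as an (N, 3) array of node positions and taking an arbitrary small offset eps (the model's non-collinearity rule guarantees the cross products never vanish; N must be even for the sign alternation to close consistently around the ring):

```python
import numpy as np

def partner_strand(nodes: np.ndarray, eps: float = 0.25) -> np.ndarray:
    """Build the partner strand of a ring chain.

    nodes: (N, 3) array of node positions around the ring.
    The bond vector tau_i points from node i-1 to node i; the partner of
    node i sits a distance eps along the unit vector in the direction
    (-1)**i * (tau_i x tau_{i+1}), as defined in the text.  Adjacent
    bonds are never collinear in this model, so the cross product is
    always nonzero.
    """
    n = len(nodes)
    tau = nodes - np.roll(nodes, 1, axis=0)       # tau[i] = r_i - r_{i-1}
    partners = np.empty_like(nodes, dtype=float)
    for i in range(n):
        normal = np.cross(tau[i], tau[(i + 1) % n])
        u = (-1) ** i * normal / np.linalg.norm(normal)
        partners[i] = nodes[i] + eps * u
    return partners
```

For the staircase ("trans") configuration this reproduces a partner staircase running parallel to the chain, as in Figure 2a.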
Local intersections, collinearity of adjacent segments, invalid bond vectors, and rotations through the cis configuration are all properties of at most three adjacent segments and the candidate move of one of their nodes. The lookup table is constructed by checking every possible move of each node in every possible three-segment sequence. During simulation, moves are checked by simply looking up the entry appropriate for that move and sequence in the table. The only feature of the chain that the simulation must calculate as the chain evolves is non-local intersection of nodes. This is accelerated by use of a memory image of the lattice, so that only one check is required for a given candidate move. An additional benefit of the lookup table method is that tables with different restrictions can be employed. These tables, along with a toggle on the global intersection test, allow simple changes between a variety of simulation rules. In addition to the table describing our link-conserving polymer, we employed tables with no twist constraints, using both random walk statistics and self-avoiding walk statistics. We measured autocorrelation functions for the radius of gyration to determine the relaxation time for link-conserving and non-conserving chains. A link-conserving chain with 64 bonds requires roughly 2 × 10⁶ attempted moves to attain a statistically independent configuration. Without the link-conserving constraint, the number of attempts decreases by about 25%. A typical self-avoiding walk ring chain which conserves link is shown in Figure 4, with its partner strand. Our Monte Carlo procedure obeys detailed balance just as the original Carmesin-Kremer procedure does. Our procedure is the same as this one except for the exclusion of certain moves, namely moves through a cis configuration. Since the exclusion is the same in either direction, it does not disturb the detailed balance. Despite this detailed balance our simulation does not explore all self-avoiding configurations with a given linking number. For example it does not explore knotted configurations of the backbone. We did extensive tests of both random walks and self-avoiding walks to ensure that the simulation behaved correctly. We simulated both open and ring chains. Figure 5 shows the radius of gyration and the root-mean-squared end-to-end length as a function of chain length for an open random walk chain. Figure 6 shows the corresponding quantities for a self-avoiding walk. Similar results were found for ring chains. Figures 7 and 8 show the structure factor versus wave-vector q for a variety of chain lengths, for random walk and self-avoiding walk chains respectively. The inverse length q is normalized so that the data collapse onto one curve. In all cases, the scaling was as expected, and consistent with the results of other simulations. 27 Again, the correct scaling was also found for ring chains.

III. Link, Twist, and Writhe Statistics

As with DNA, we can describe our chain and associated partner strand using the linking number Lk and its decomposition into twist Tw and writhe Wr using White's theorem. 31 To define these quantities, we adopt a continuum representation, with the chain described by a closed curve r(s) with unit tangent t̂(s). Tw is given by

$$ Tw = \frac{1}{2\pi}\oint ds\,\hat{t}(s)\cdot\left[\hat{u}(s)\times\frac{d\hat{u}(s)}{ds}\right], \qquad (3) $$

where û is the unit vector perpendicular to t̂ which points from the chain to the partner strand. Twist is the integrated rotation of the projection of the vector from the backbone to the partner strand in the plane perpendicular to the backbone at each point, divided by 2π.
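Before turning to writhe, here is one common discretization of the twist integral (3) for the lattice chain, given as a sketch that takes the backbone nodes together with unit u-vectors toward the partner strand (e.g. as produced by the partner-strand sketch above):

```python
import numpy as np

def discrete_twist(nodes: np.ndarray, u: np.ndarray) -> float:
    """Sum the signed rotation of u about the local tangent, divided by 2*pi.

    nodes: (N, 3) ring-node positions; u: (N, 3) unit vectors from each
    node toward its partner.  At each bond we project the u-vectors at
    its two ends into the plane perpendicular to the bond and accumulate
    the signed angle between the projections.
    """
    n = len(nodes)
    total = 0.0
    for i in range(n):
        t = nodes[(i + 1) % n] - nodes[i]
        t = t / np.linalg.norm(t)
        # Project the two u-vectors into the plane perpendicular to t.
        a = u[i] - np.dot(u[i], t) * t
        b = u[(i + 1) % n] - np.dot(u[(i + 1) % n], t) * t
        # Signed angle from a to b about t.
        total += np.arctan2(np.dot(np.cross(a, b), t), np.dot(a, b))
    return total / (2.0 * np.pi)
```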
Writhe is given by

$$ Wr = \frac{1}{4\pi}\oint\oint ds\,ds'\,\frac{[\hat{t}(s)\times\hat{t}(s')]\cdot[\mathbf{r}(s)-\mathbf{r}(s')]}{|\mathbf{r}(s)-\mathbf{r}(s')|^{3}}, \qquad (4) $$

where now the integrals are over the same curve. Writhe can be described as the sum of signed self-intersections of the projection of the chain averaged over all possible views. 32 In our simulations we calculated the linking number as the sum of the signed intersections of the projection of the two strands onto a plane, as described above. We chose a projection direction incommensurate with the lattice, so that all intersections occur at non-zero angles. The twist was calculated using (3). We calculated the writhe for many configurations as a consistency check using (4). Deviations in twist were calculated for these chains as well. The mean-squared twist for all the chains is shown in Figure 11 as a function of N; we find that it grows linearly with N. We find that ⟨Wr²⟩₀ = 0.0231 N, as shown in Figure 11. We note that our scaling for the RMS twist, writhe, and linking number is consistent with that of the ribbon model of Orlandini and Whittington. [13][14][15][16][17][18] Finally, we calculated the twist-writhe correlation ⟨Tw Wr⟩₀. The observed twist-writhe correlation (Figure 11) is a substantial fraction of the mean-squared twist and writhe themselves.

IV. Dependence of Gyration Radius on Chain Length and Imposed Linking Number

The radius of gyration as a function of linking number for a chain of length N = 128 is shown in Figure 12. Linking numbers greater than 10 were reached by manually winding the chain to the desired Lk. For all chains studied, from N = 32 to 256, the gyration radius decreases parabolically with linking number, R_g = R_g0 (1 − a (Lk)²). The radius of gyration at Lk = 0 scales robustly as N^0.6, as in Figure 6. The coefficient a as a function of N is plotted in Figure 13. The scaling here is between N^−1.25 and N^−1.5. Even linking numbers significantly greater than those encountered in equilibrium did not alter the size of the chain significantly. It is typically claimed 1 that increasing the linking number by about one unit per persistence length significantly alters the equilibrium chain configurations and thus should alter R_g by an appreciable factor. This criterion corresponds to a ∼ N^−2. This scaling is not inconsistent with the data. Seeing no gross distortions, we decided to investigate the effect of the imposed linking number on the aspect ratio of the chain. The moment of inertia matrix was calculated and diagonalized. The principal moments were then sorted and averaged. Typical ratios were approximately 2.57:2.07:1. Changing the linking number over a range of order N^0.5 resulted in no significant change in this ratio. 13 The coupling of twist and writhe accounts for these observations to some extent. Because of the coupling, most of the imposed link goes into twist. Since writhe describes contortion of the chain, a necessary condition for significant distortion beyond that present in thermal fluctuations is that the imposed writhe exceed its typical thermal magnitude √⟨Wr²⟩₀. However, as longer chains are examined, this highly writhed regime can be explored.

V. Discussion

The most striking result of our simple model is the strong correlation between twist and writhe. Writhe and twist are typically assumed to be independent degrees of freedom, correlated only when a linking number constraint is imposed. This statistical independence is often exploited for the sake of efficiency in computation by simulating only the backbone of the chain, giving information about the non-local quantity writhe. Twist is then introduced analytically using White's theorem and the assumption of independence with writhe. Clearly this approach would not be applicable to our lattice chains.
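Since writhe depends only on the backbone, it can be estimated directly from the node positions. Below is a crude midpoint discretization of the double integral (4), given as a sketch; exact segment-pair algorithms (e.g. that of Klenin and Langowski) are preferable in production:

```python
import numpy as np

def discrete_writhe(nodes: np.ndarray) -> float:
    """Approximate Wr by a double sum over non-adjacent segment pairs.

    Each segment contributes its midpoint r and its full segment vector
    dr (the arc-length element is absorbed into dr), giving the Gauss
    summand (dr_i x dr_j) . (r_i - r_j) / |r_i - r_j|**3, summed over
    ordered pairs i != j and divided by 4*pi.
    """
    n = len(nodes)
    dr = np.roll(nodes, -1, axis=0) - nodes          # segment vectors
    mid = nodes + 0.5 * dr                           # segment midpoints
    wr = 0.0
    for i in range(n):
        for j in range(n):
            if j in (i, (i + 1) % n, (i - 1) % n):   # skip self/adjacent
                continue
            rij = mid[i] - mid[j]
            wr += np.dot(np.cross(dr[i], dr[j]), rij) / np.linalg.norm(rij) ** 3
    return wr / (4.0 * np.pi)
```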
Our twist-writhe correlation can be understood naturally by considering 3-bond segments of our chain. This is the minimum length segment for which both twist and writhe are defined. Considering a planar, "staircase" segment, it is apparent that both twist and writhe equal zero. Introducing twist by rotating an end segment necessarily makes the configuration non-planar, inducing writhe. Twists of the end segment create helices of the opposite sense, and so introduce writhe of the opposite sign (Figure 14). The correlation is an inevitable consequence of the identification of the binormal of our backbone curve with the twist vector from the backbone to the partner strand. We suggest that energetic constraints in DNA could result in a similar correlation between the binormal and twist vector and result in a twist-writhe correlation.

VI. Conclusion

This work demonstrates two novel aspects of constrained linking number in a ring polymer. First, we showed that this constraint is meaningful in the context of single-backbone chains such as polyethylene: any chain in which cis conformations are dynamically forbidden carries a conserved linking number. Second, we demonstrated a substantial anti-correlation between the twist and writhe of such chains. We see no general reason that would rule out such correlations in important link-conserving polymers like DNA, even though current models of DNA lack these correlations. We are investigating the minimal additions to the current models that would allow such correlations to be described. We are also investigating the available measurements of DNA chains to determine how much twist-writhe correlation might be present in practice. Our simulations of twisted ring chains show surprising conformational properties. These are the first simulations showing full excluded-volume swelling and conserved linking simultaneously, to our knowledge. Our rings appear to shrink in response to twisting more than existing theories have anticipated. This may be due to an interaction between writhe and excluded volume yet to be understood.

Figure 5. Log-log plot of radius of gyration R_g (lower set) and root-mean-squared end-to-end distance R_0 (upper set) versus number of segments for a random walk chain. Straight lines indicate power-law behavior with exponent ν = 0.5. The offset between the two lines indicates a ratio R_0/R_g ≈ 2.4, in agreement with other simulations 27 and the exact 28 asymptotic value of 6^0.5.

Figure 6. Log-log plot of radius of gyration R_g (lower set) and root-mean-squared end-to-end distance R_0 (upper set) versus number of segments for a self-avoiding walk chain. Straight lines indicate power-law behavior with exponent ν = 0.59. The offset between the two lines indicates a ratio R_0/R_g ≈ 2.5, in agreement with other simulations 27 and theory. 29
2000-02-09T06:28:49.000Z
1999-09-25T00:00:00.000
{ "year": 2000, "sha1": "e7a161e9dfd5800689f9c961966083e2669daa4d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9909367", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e7a161e9dfd5800689f9c961966083e2669daa4d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Biology", "Chemistry" ] }
5772228
pes2o/s2orc
v3-fos-license
Evidence that Gag facilitates HIV-1 envelope association both in GPI-enriched plasma membrane and detergent resistant membranes and facilitates envelope incorporation onto virions in primary CD4+ T cells

HIV-1 particle assembly, mediated by the viral Gag protein, occurs predominantly at the plasma membrane. While colocalization of the HIV-1 envelope with lipid-rich microenvironments has been shown in T cells, the significance of viral proteins modulating envelope association with plasma-membrane microdomains enriched in glycosylphosphatidylinositol-anchored proteins in primary CD4+ T cells, the natural targets of HIV-1, is poorly understood. Here we show that in primary CD4+ T cells, which are the natural targets of HIV-1 in vivo, Gag modulates HIV-1 envelope association with GM1 ganglioside- and CD59-rich cellular compartments as well as with detergent resistant membranes. Our data strengthen the evidence that the Gag-Env interaction is important for envelope association with lipid rafts containing GPI-anchored proteins, enabling efficient assembly onto mature virions and productive infection of primary CD4+ T cells.

… glycoprotein, while the abundant incorporation of raft lipid components such as ganglioside GM1 and the glycosylphosphatidylinositol (GPI)-anchored proteins Thy-1 and CD59 strongly suggests that HIV-1 specifically buds from rafts [5,11]. A stable interaction between intracellular Pr55Gag and the gp41 cytoplasmic domain of envelope [12] was shown to be important for envelope association with detergent resistant membranes, incorporation onto virions, and infectivity [13,14]. The precise sequence by which envelope utilizes the cellular machinery in migrating towards the site of viral assembly is not clearly understood. Glycoproteins of several enveloped viruses have been found to contain lipid moieties [15,16], which has generated the notion that lipid rafts are important as docking sites for the assembly of enveloped viruses [17-22]. Association of the HIV-1 envelope with the polarized lipid raft markers GM1 and CD59 was shown to influence transmission between T cells [10]. Gag has been shown to play an important role in envelope assembly onto virions, notably through the interaction of its p17 matrix domain with the gp41 cytoplasmic domain of envelope [14,23-25]. While HIV Gag intrinsically associates with detergent resistant membranes (DRMs) [5,26], transport of the influenza virus M1 protein to DRMs depends on coexpression of the HA and NA glycoproteins [27]. Likewise, the association of the Sendai virus M protein with DRMs is dependent on expression of the F or HN protein [28], while the Rous sarcoma virus M protein requires expression with the F protein for DRM association [29]. In contrast, the measles virus M protein has been shown to associate with DRMs intrinsically, independently of other viral proteins [30]. Motifs in the gp41 cytoplasmic domain that regulate association of the HIV-1 envelope protein with DRMs [8], and the Gag dependence of this phenomenon in a T cell line [13], were previously reported. However, it was not known whether this phenomenon is cell type-dependent and whether it differs between cell lines and cells that are natural targets in vivo. Moreover, whether DRM association of envelope corroborates its ability to traffic to classical lipid rafts was also not known.
In the present study we investigated the role of Gag in the intracellular transport of the HIV-1 envelope into compartments defined by well-characterized GPI-anchored markers such as CD59 and GM1 (monosialotetrahexosylganglioside), and its relevance to envelope assembly onto budding virions in primary CD4+ T cells. Specifically, we examined whether a point mutation (L30E) in the matrix domain of Gag, known for disrupting Env incorporation [23,31], affects envelope trafficking to the CD59+ compartment in primary CD4+ T cells, and whether this phenomenon has any association with cell-free infectivity in primary CD4+ T cells. We first examined whether the L30E point mutation in the matrix domain of Gag, previously reported to abrogate envelope incorporation, infectivity [23] and DRM association [13] in cell lines, affects infectivity and modulates envelope association with the CD59-enriched compartment in primary CD4+ T cells, which are predominantly the natural targets in vivo during the entire course of HIV-1 infection. The CD59 marker was selected as it is linked with phosphatidylinositol and segregates into rafts. Primary CD4+ T cells were purified by negative selection from whole blood using the RosetteSep® Kit (Stem Cell Technologies Inc.) following the manufacturer's protocol. Briefly, CD4+ T Cell Enrichment Cocktail was added at a final concentration of 50 μl/ml of whole blood and incubated at room temperature for 20 min. The mixture was diluted with RPMI/2% Fetal Bovine Serum (GIBCO, Inc.), layered onto Ficoll-Hypaque (Sigma, Inc.) and centrifuged for 20 min at 1200 × g at room temperature. The enriched CD4+ T cells at the interface of the density medium and plasma yielded approximately 98% purity; the cells were subsequently stimulated with phytohemagglutinin (0.5 μg/ml) and interleukin-2 (10-20 U/ml) before infection with virus stocks. Primary CD4+ T cells were infected with equal infectivity titers of VSV-G pseudotyped pNL4.3 WT and pNL4.3 L30E viruses, and cell lysates made from CD4+ T cells infected with wild type and L30E expressing equal amounts of p24 and gp41 were incubated with anti-gp41 monoclonal antibodies to immunoprecipitate Env. Briefly, CD4+ T cells were lysed with 1% Triton X-100 in PBS (pH 7.4) and incubated with a 1:1 mixture of the gp41 monoclonal antibodies 2F5 and 4E10 (1:1000 dilution) for 4 hours at 4°C, followed by an additional 1 hour of incubation with Protein A/G beads (Pierce Inc.). Lysates were further washed with cold PBS and resolved by 12% SDS-PAGE under denaturing conditions.

Figure 1. A. L30E Gag confers defective association of Env with CD59-enriched membranes in primary CD4+ T cells. Primary CD4+ T cells infected with VSV-G pseudotyped pNL4.3 and pNL4.3 (L30E) were lysed, and intracellular p55, gp41 and CD59 contents were measured in total cell lysates (upper panels). Cell lysates expressing comparable p55 and gp41 were immunoprecipitated with the gp41 antibodies 2F5 and 4E10 as described in the Materials and Methods, resolved by 12.5% SDS-PAGE and electrophoretically transferred onto a PVDF membrane. The level of CD59 co-immunoprecipitated with envelope was detected by Western blotting using monoclonal anti-CD59 antibody. B. Defective incorporation of HIV-1 envelope proteins onto virus particles due to the L30E Gag substitution. Equal amounts of cell-free virus particles were resolved by SDS-PAGE and Western blotting was done with monoclonal antibodies to gp41 and p24 as described in the text. Note that with the L30E gag mutation, envelope incorporation onto virions is severely abrogated.
Equal amounts of immunoprecipitated material were subjected to 12% SDS-PAGE under denaturing conditions [32], transferred onto PVDF membranes, and subsequently Western blotted with anti-human CD59 antibody (at 1:1000 dilution) to assess the ability of Env to associate with the CD59-enriched compartment. As shown in Figure 1A, due to the L30E mutation in Gag, Env failed to recruit CD59, in contrast to pNL4.3 wild type. Our data indicate that the mutation in the Gag MA region (L30E) restricts the interaction between Gag and Env, resulting in down-modulation of envelope trafficking to GPI-anchored protein-rich membranes such as the CD59-positive compartment in primary CD4+ T cells. Moreover, in order to assess the effect of the L30E mutation in p17 Gag on HIV-1 envelope incorporation onto virions in primary CD4+ T cells, cell-free virus pellets were obtained by centrifugation as described previously [13]. Equal amounts of virus particles (p24) were resolved by SDS-PAGE under denaturing conditions followed by Western blotting using monoclonal antibodies to p24 (183-H12-5C) and gp41 (1:1 mixture of 2F5 and 4E10). As shown in Figure 1B, the L30E substitution in p17 Gag was found to drastically affect envelope incorporation onto virus particles in CD4+ T cells, as expected. We next assessed whether there is any correlation between disruption of envelope association with the CD59+ compartment in primary CD4+ T cells and its association with the DRM of the same cell type. The DRM assay was carried out essentially as described previously [8]. Briefly, primary CD4+ T cells infected with VSV-G pseudotyped pNL4.3 WT and pNL4.3 (L30E) were lysed with cold 0.5% Triton X-100 and fractionated through sucrose density gradient centrifugation in an ultracentrifuge (Beckman Coulter Inc.) at 100,000 × g for 8-12 hours at 4°C. Ten equal fractions were collected and examined for the presence of gp160, CD59 and GM1 ganglioside by Western blot using the human anti-gp41 monoclonal antibodies 2F5 [31] and 4E10 [33], mouse anti-human CD59 (BD Biosciences Inc.), and cholera toxin conjugated to horseradish peroxidase (HRP) (Sigma, Inc.), respectively. As shown in Figure 2, the L30E mutation in Gag restricted envelope association with the DRM fractions of CD4+ T cells, in contrast to the wild type. Our results were further substantiated by the presence of both CD59 and GM1 in DRM fractions under the same experimental conditions. In summary, we show a modulatory role of Gag in envelope association with lipid-enriched microdomains, which play an important role in transmission, by modulating envelope assembly onto virus particles in primary CD4+ T cells. Despite their close relationship, DRMs isolated from cells may not necessarily represent the pre-existing rafts of living cells [34,35]. Hence, GPI-anchored proteins, because of their raftophilicity, are often regarded as the best choice for studying raft association and targeting [36].

Figure 2. Envelope association with DRM in primary CD4+ T cells. A. CD4+ T cells infected with VSV-G pseudotyped pNL4.3 and pNL4.3 (L30E) were treated with cold Triton X-100 and fractionated in sucrose density gradients as described in the text. Gradient fractions were subsequently probed for envelope with the gp41 antibodies 2F5 and 4E10, with anti-human CD59 for CD59, and with CTxB-HRP for GM1, by SDS-PAGE followed by Western blotting. B. Density (g/cm³) of fractions showing DRM and DSM (detergent soluble membrane) fractions [39].
In the present study, in addition to investigating the role of Gag in envelope association with DRMs, we examined GPI-anchored proteins because of their high affinity for lipid rafts [35,37]. The lipid-binding B subunit of cholera toxin (CTxB), which recognizes GM1 at the cell surface in the plasma membrane, was used to study envelope transport into lipid rafts. Like other GPI-anchored proteins, these markers are also known to associate with DRMs [38]. Our data showed that abrogation of the Gag-Env interaction down-modulated envelope transport into the CD59-positive compartment in primary CD4+ T cells and also prevented envelope association with DRM fractions. While the viral precursors Gag and Gag-Pol are synthesized by polysomes in the cytoplasm, the oligomeric envelope protein is synthesized in the endoplasmic reticulum, post-translationally modified in the Golgi apparatus, and traverses the secretory pathway toward assembly onto budding virions. We envisage that Gag, by acting as a cargo transport intermediate, carries the envelope protein through the trans-Golgi route, sorting it into lipid-rich domains of the plasma membrane and enabling its assembly onto budding virions in primary CD4+ T cells.
2014-10-01T00:00:00.000Z
2010-01-08T00:00:00.000
{ "year": 2010, "sha1": "c41cd165529d51d5418e2628ceda43970f8ca399", "oa_license": "CCBY", "oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/1743-422X-7-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c41cd165529d51d5418e2628ceda43970f8ca399", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine", "Biology" ] }
118643071
pes2o/s2orc
v3-fos-license
Interaction between catalytic micro motors

Starting from a microscopic model for a spherically symmetric active Janus particle, we study the interactions between two such active motors. The ambient fluid mediates a long-range hydrodynamic interaction between the two motors. This interaction has both direct and indirect hydrodynamic contributions. The direct contribution is due to the propagation of the fluid flow that originates from a moving motor and affects the motion of the other motor. The indirect contribution emerges from the redistribution of the ionic concentrations in the presence of both motors: the electric force exerted on the fluid by this ionic solution modifies the flow pattern and subsequently changes the motion of both motors. By formulating a perturbation method for very far separated motors, we derive analytic results for the translational and rotational dynamics of the motors. We show that the overall interaction, at leading order, modifies the translational and rotational speeds of the motors, which scale as ${\cal O}\left([1/D]^3\right)$ and ${\cal O}\left([1/D]^4\right)$ with their separation $D$, respectively. Our findings open up the way for studying the collective dynamics of synthetic micro motors.

I. INTRODUCTION

Designing synthetic micro-propelling systems with the ability to navigate along predefined and controllable trajectories is the aim of many researchers in both chemistry and physics. 1 Delivery of drugs in living systems and the construction of manipulation tools for lab-on-chip experiments are among the main applications of such systems. Hydrodynamic swimmers, 2-4 single-DNA-molecule propellers, 5 light-mediated motion of Janus particles in binary mixtures and colloidal systems 6,7 and phoretic propulsion of spheroidal particles 8 are the most recently proposed designs for micro machines. Apart from their potential applications as listed above, the physics of directed motion at the micrometer scale is also a challenging issue in physics. 9,10 This is mainly due to the inertia-less condition that constrains the physics at this scale. At the macroscopic scale of daily life, inertia provides a mechanism for movement, but at the microscopic scale, life is dominated by dissipation. As a consequence, a backward ejection of a high-speed jet of molecules is not able to propel a micron-sized boat. To drive a micrometer boat, we need to go beyond our macroscopic intuition of motion and use nontrivial mechanisms and swimming strategies. 2,11,12 Janus particles with surface chemical activity are potential candidates for cargo-delivery machines at the micrometer scale. Originating with a very interesting experiment by Paxton et al., 13,14 a great deal of researchers' attention has been attracted by the idea of generating directed motion by surface reactions. 15-18 As a recent example, Janus particles made from spherical Pt-insulator composites have been studied extensively. 19,20 Drug delivery 21-26 and the ability of catalytic Janus particles to enter cells 27 have been examined experimentally. Another interesting application of catalytic micromotors is water purification, which has been successfully tested. 28,29 It has also been shown that Janus particles can be used to make a chemical sensor. 30 Understanding and predicting the physical behavior of a single motor, or of many such motors, constitutes the core of much recent research. 31
The physics of a single Janus particle propeller has been investigated theoretically 32 and numerically. 33-35 Recently, the motion of Janus self-propellers under different conditions has been considered, including the dynamics of a particle confined by a planar wall and motion in a shear flow. It has been shown that, depending on the initial state of a Janus particle, a rigid and electrically neutral wall can either attract or repel a nearby Janus motor. 36,37 The dynamical response of a Janus motor moving in an ambient shear flow depends crucially on the strength of the thermal fluctuations of the ions. 38 For most of the expected applications of micro machines, it is reasonable to use a collection of them to achieve maximum efficiency. For this task, one needs an understanding of the physics of the mutual interaction between two or many Janus motors. Such theoretical knowledge will allow researchers to predict the collective behavior of a suspension of many Janus motors. So far, and to our knowledge, all theoretical works have been limited to the physics of individual motors. In this article we address the problem of the interaction between two self-propelled Janus motors. A number of interesting phenomena have been observed in systems of two or many hydrodynamic swimmers, ranging from coherent motion of two coupled swimmers 39-41 to pattern formation and reduction of the effective viscosity in suspensions. 42-46 Inspired by these hydrodynamic systems, we expect to see rich physical behavior in the case of Janus particles, which are electro-hydrodynamic: in addition to hydrodynamic effects, the presence of long-range electrostatic forces should also be considered. The rest of this article is organized as follows. In Section II we introduce the system and write the basic governing equations. Sections III and IV are devoted to developing the approximations that we will use. Analytic results for a single motor are collected in Section V, and the problem of interacting particles is presented in Section VI. Finally, a discussion of our results is presented in Section VII.

II. GOVERNING EQUATIONS

We start by analyzing the physics of a single motor that exploits surface chemical reactions to propel itself. A schematic view of the model system that we are interested in is shown in Figure 1. A charged colloidal particle with radius a and electrostatic surface potential ψ_s is immersed in an electrolyte solution with electric permittivity ε_r and hydrodynamic viscosity η. The solution consists of two ionic species, cations and anions, with valences Z_± (throughout this paper, + refers to cations and − refers to anions). We consider the simple case of a symmetric, single-valence electrolyte with Z_+ = −Z_− = 1. The driving force of this motor emerges from an asymmetric surface chemical activity. We assume that the surface properties of the motor allow it to absorb and emit chemical species in an asymmetric way. As shown in the figure, the northern hemisphere of the Janus particle can emit ionic particles, both cations and anions, while the southern hemisphere can absorb ions at the same rate as that of the emitting part. Such a simple model captures the physics of most experimentally realized Janus motors. We use dimensionless units to introduce the dynamical equations.
In addition to simplifying the notation, the dimensionless form of the equations will help us introduce our approximation scheme for solving them. We use the radius of the motor, a, the thermal potential $\psi_0 = k_BT/e$, and the equilibrium concentration of ions at infinity, $n_\infty$, to non-dimensionalize all lengths, electric potentials, and concentrations, respectively. A velocity scale $v_0 = (k_BT/e)^2(\varepsilon_r/\eta a)$ and a characteristic pressure $p_0 = v_0\eta/a$ are used to non-dimensionalize velocities and pressures. Denoting the ionic fluxes of cations and anions by $\mathbf{j}_\pm(\mathbf{x})$, we consider a prescribed surface activity given by the following condition on the fluxes:

$$\hat{n}\cdot\mathbf{j}_\pm\big|_{\mathrm{surface}} = j^s_\pm = \tilde{Q}\cos\theta, \qquad (1)$$

where $j^s_\pm$ denotes the current densities prescribed on the surface of the spherical Janus particle. In the above equation, $\tilde{Q}$ is a constant ionic rate, θ is the polar angle measured from a fixed z-axis co-moving with the motor, and $\hat{n}$ represents the unit vector normal to the surface. For the above model of surface activity, the total number of each ionic species in the solution is fixed. In addition to the catalytic realization of our model, it can also be considered as a motor that works on the basis of an osmotic pressure difference. One can consider a sphere constructed from a semi-permeable membrane; an internal active compartment inside the motor provides an angle-dependent osmotic pressure difference between the inside and outside of the motor. Due to this pressure difference, the above surface ionic flow can appear in the system. As a result of this asymmetric surface property, the motor achieves a constant steady-state propulsion velocity. We would like to calculate this propulsion velocity as a function of the physical properties of the motor and the characteristics of the electrolyte. Simultaneous solution of the hydrodynamic equations and the electrostatic equations for the ionic concentrations will reveal the propulsion velocity. These two sets of equations are coupled via the hydrodynamic body force that appears in the hydrodynamic equations. As discussed before, the hydrodynamics of a micron-scale system should be described by governing equations in the very-small-inertia regime. Neglecting inertial effects, the Stokes equation governs the dynamics of the fluid in the fully dissipative limit: 47

$$\nabla^2\mathbf{u}(\mathbf{r}) - \nabla P(\mathbf{r}) = \big(n_+(\mathbf{r}) - n_-(\mathbf{r})\big)\,\nabla\psi(\mathbf{r}), \qquad (2)$$

where u(r) and P(r) stand for the velocity and pressure fields of the fluid. Assuming that the fluid is incompressible, the velocity field satisfies the continuity equation $\nabla\cdot\mathbf{u}(\mathbf{r}) = 0$. The right-hand side of the Stokes equation represents the electric body force acting on the fluid that comes from the ions, where $n_\pm(\mathbf{r})$ and $\psi(\mathbf{r})$ are the ionic concentrations and the electric potential of the ions. The electrostatic potential satisfies the Poisson-Boltzmann equation:

$$\delta^2\,\nabla^2\psi(\mathbf{r}) = -\tfrac{1}{2}\big(n_+(\mathbf{r}) - n_-(\mathbf{r})\big). \qquad (3)$$

Continuity equations for the ionic currents are further equations that should be satisfied. At steady state, the continuity equations for the ions read:

$$\nabla\cdot\mathbf{j}_\pm(\mathbf{r}) = 0. \qquad (4)$$

Thermal fluctuations of the ions, drift due to electric forces, and convection due to the flow of the fluid are the different sources of the ionic currents. Collecting all these terms, we can write the following phenomenological relations for the ionic currents:

$$\mathbf{j}_\pm(\mathbf{r}) = -\nabla n_\pm(\mathbf{r}) \mp n_\pm(\mathbf{r})\,\nabla\psi(\mathbf{r}) + \mathrm{Pe}\; n_\pm(\mathbf{r})\,\mathbf{u}(\mathbf{r}), \qquad (5)$$

where the phenomenological contribution from the fluctuations is expressed in terms of the concentration gradient.
Two important dimensionless numbers, δ and Pe, appear in the governing equations; they are given by

$$\delta = \frac{1}{\kappa a} = \frac{1}{a}\sqrt{\frac{\varepsilon_r k_B T}{2 e^2 n_\infty}}, \qquad \mathrm{Pe} = \frac{v_0\, a}{D}.$$

The Debye screening length, $\kappa^{-1}$, measures the equilibrium thickness of the ionic cloud around a colloid immersed in an ionic solution; this is essentially the length beyond which the electric effects of the colloid are screened. The Péclet number Pe measures how effective convection is in comparison with thermal diffusion. For very small Péclet number, the current due to thermal fluctuations dominates over the current from convection. In our description of the system, and for mathematical simplicity, the diffusion constants of both ions are assumed to be equal, and we denote both by D. In general, the ionic diffusion constants depend on the size of the ions, which results in different values for cations and anions. As we are not interested in phenomena that could result from such an asymmetry between cations and anions, we restrict ourselves to the symmetric case with equal diffusion constants. In addition to the boundary condition given by Equation 1, there are other boundary conditions that should be considered. On the surface of the motor, and in a co-moving frame, the boundary conditions for the fluid velocity and the ionic potential read

$$\mathbf{u}\big|_{r=1} = 0, \qquad \psi\big|_{r=1} = \psi_s.$$

Very far from the motor, at r → ∞, the boundary conditions read

$$\mathbf{u} \to -\mathbf{U}, \qquad n_\pm \to 1, \qquad \psi \to 0,$$

where U is the propulsion velocity of the motor, which needs to be determined by solving the above equations. As the motor propulsion is not due to any external force, the total force acting on the motor vanishes. Collecting both the hydrodynamic and electrostatic forces acting on the spherical motor, the force-free condition can be written as

$$\oint \big(\boldsymbol{\sigma}^{H} + \boldsymbol{\sigma}^{M}\big)\cdot\hat{n}\; dS = 0, \qquad \boldsymbol{\sigma}^{H} = -P\,\mathbf{I} + \nabla\mathbf{u} + (\nabla\mathbf{u})^{T},$$

where $\boldsymbol{\sigma}^{M}$ is the Maxwell stress of the ionic solution, I refers to the 3 × 3 unit matrix, and the superscript T refers to the transpose of a square matrix. For a typical experiment of interest, a micron-sized particle with a ∼ 1 µm moves in an electrolyte solution with η ∼ 10⁻³ Pa·s, D ≈ 10⁻⁹ m²/s and n∞ = 10²³ m⁻³ (data are given for a 0.001 molar solution of KCl at room temperature). 48 In this case, we will have δ ∼ 10⁻³ and Pe ∼ 10⁻¹. For a typical system with these values of the physical parameters, we can proceed by applying the condition δ ≪ 1 to the dynamical equations and obtain approximate analytic results. In the limit of strong electrostatic screening and small convection, we expect great simplifications of the dynamical equations.

FIG. 1. A spherical micron-sized particle exploits surface reactions to propel itself. The surface activity of the particle allows both production and absorption of ionic molecules. With respect to a co-moving reference frame, an azimuthally symmetric but polar pattern of surface activity is able to produce a net propulsion.

III. THIN DEBYE LAYER, κa ≫ 1

In the limit of a thin Debye layer, δ → 0, it can be concluded from Equation 3 that

$$n_+(\mathbf{r}) = n_-(\mathbf{r}) \equiv n(\mathbf{r}).$$

As a result of the singularity of the δ → 0 limit, the above electro-neutrality condition can be applied only outside the Debye layer, the thin screening layer adjacent to the surface of the Janus particle. Under this condition, and to study the physical properties of the system, we use the macroscale description developed by Yariv and coworkers. 49,50 In such a description, by decomposing the space into the Debye layer and the region outside of it (the bulk region), one aims to extract effective macroscale properties of the electro-neutral bulk region. Such effective fields will provide approximations to the real bulk properties of the system.
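The orders of magnitude quoted above can be checked with a few lines of code. The sketch below is our own back-of-the-envelope script, not part of the paper; it assumes room temperature T = 298 K and ε_r = 80ε₀ for water, neither of which is stated explicitly at this point in the text, and the precise value of δ depends on the convention adopted for the screening length. The point of the check is simply that both numbers come out small compared to unity, the regime assumed in what follows.

```python
import math

# Physical constants (SI)
kB, e, eps0 = 1.381e-23, 1.602e-19, 8.854e-12

# Parameter values quoted in the text
T, a, eta, D, n_inf = 298.0, 1e-6, 1e-3, 1e-9, 1e23
eps = 80 * eps0                      # assumed: water at room temperature

lamD = math.sqrt(eps * kB * T / (2 * e**2 * n_inf))   # Debye length kappa^-1
v0 = (kB * T / e)**2 * eps / (eta * a)                # velocity scale
delta, Pe = lamD / a, v0 * a / D

print(f"lambda_D = {lamD:.1e} m, delta = {delta:.1e}, Pe = {Pe:.1e}")
# Both delta and Pe come out small compared to unity: the thin-Debye-layer,
# weak-convection regime used in the analysis.
```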
The physics inside the Debye layer is governed by the equilibrium Boltzmann distribution of the ionic concentrations. Effective physical properties in the bulk can be obtained by applying appropriate effective boundary conditions for the macroscale fields on the outer surface of the Debye layer (instead of applying the boundary conditions on the particle surface). Denoting the effective bulk fields by capital letters, we need to solve the following governing equations for the hydrodynamics in the bulk:

$$\nabla^2\mathbf{V} - \nabla P = 0, \qquad \nabla\cdot\mathbf{V} = 0,$$

and the ionic properties are given by the solutions of

$$\nabla^2 N = 0, \qquad \nabla\cdot\big(N\,\nabla\Psi\big) = 0,$$

where N and Ψ denote the effective bulk ionic concentration and electric potential. The effective fields obviously satisfy the same boundary conditions as the real microscopic fields at infinity. However, the boundary conditions on the particle surface change to boundary conditions given on the Debye layer. Combining the boundary condition given in Equation 1 with the definition of the current density given by Equation 5, we arrive at the following boundary conditions on the surface (r = n̂):

$$\hat{n}\cdot\nabla N\big|_{S} = -\tilde{Q}\cos\theta, \qquad \hat{n}\cdot\nabla\Psi\big|_{S} = 0.$$

Here ζ is the electric potential drop between the particle surface and the outer surface of the Debye layer, and ψ_s is the surface potential of the particle; in the full-screening limit, ζ = ψ_s. A significant and most important part of the boundary conditions on the outer surface of the Debye layer is a slip-velocity condition on the effective velocity field, known as the Dukhin-Derjaguin slip velocity: 51

$$\mathbf{V}\big|_{S} = \zeta\,\nabla_S\Psi + 4\ln\cosh(\zeta/4)\,\nabla_S \ln N,$$

where ∂/∂n = n̂·∇, ∇_S = (I − n̂n̂)·∇ is the surface gradient, and S denotes the outer surface of the Debye layer. In the passive case, where the surface of the particle is not active, Q̃ = 0 and the above equations have the trivial equilibrium solution

$$N = 1, \qquad \Psi = 0, \qquad \mathbf{V} = 0.$$

Obviously, for a passive particle the propulsion velocity vanishes, U = 0. We expect a non-zero self-propulsion velocity for a particle with surface activity.

IV. SMALL SURFACE ACTIVITY

Although the thin-Debye-layer approximation makes the equations much simpler, they are still highly coupled and it is not possible to present analytic solutions. This difficulty can be overcome by considering the case where the surface activity of the motor is weak. For a very small value of Q̃, we can develop a systematic expansion in powers of Q̃. Expanding all variables in terms of Q̃, we have

$$N = 1 + \tilde{Q}N' + \mathcal{O}(\tilde{Q}^2), \qquad \Psi = \tilde{Q}\Psi' + \mathcal{O}(\tilde{Q}^2),$$

and

$$\mathbf{V} = \tilde{Q}\mathbf{V}' + \mathcal{O}(\tilde{Q}^2), \qquad \mathbf{U} = \tilde{Q}\mathbf{U}' + \mathcal{O}(\tilde{Q}^2).$$

Up to first order in the small quantity Q̃, the dynamical equations read

$$\nabla^2 N' = 0, \qquad \nabla^2\Psi' = 0, \qquad \nabla^2\mathbf{V}' - \nabla P' = 0, \qquad \nabla\cdot\mathbf{V}' = 0.$$

These equations should be solved subject to the following boundary conditions at the outer surface of the Debye layer, r = n̂:

$$\frac{\partial N'}{\partial n} = -\cos\theta, \qquad \frac{\partial \Psi'}{\partial n} = 0, \qquad \mathbf{V}'\big|_S = \zeta_0\,\nabla_S\Psi' + 4\ln\cosh(\zeta_0/4)\,\nabla_S N'.$$

The boundary conditions at r → ∞ are given by

$$N' \to 0, \qquad \Psi' \to 0, \qquad \mathbf{V}' \to -\mathbf{U}'.$$

One should note that the force-free condition at first order in Q̃ reads

$$\oint \boldsymbol{\sigma}'^{\,H}\cdot\hat{n}\; dS = 0.$$

As a result of the above calculations, one can see that, in the limit in which we work, the electrostatic effects make no contribution to the total force. In the following sections, we first derive the propulsion velocity and the velocity field due to a single motor; the problem of interacting motors is then addressed in detail.

V. SINGLE JANUS PARTICLE

Here we calculate the properties of a single motor in the limits described above. As N′ and Ψ′ simply satisfy the Laplace equation, their azimuthally symmetric solutions can be written as expansions in terms of Legendre polynomials:

$$N' = \sum_{l} \frac{a_l}{r^{l+1}}\, P_l(\cos\theta), \qquad \Psi' = \sum_{l} \frac{b_l}{r^{l+1}}\, P_l(\cos\theta).$$

Applying the boundary conditions, and to leading order in Q̃, the following unique solutions can be derived:

$$N' = \frac{\cos\theta}{2r^2}, \qquad \Psi' = 0.$$

Having the ionic concentration in hand, we can proceed to calculate the hydrodynamic variables as well. As a result of symmetry considerations, we assume that the self-propelled velocity of the particle points along the ẑ direction.
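The harmonic solution just quoted can be verified symbolically. The following check is ours, not part of the paper, and takes as given the flux condition ∂N′/∂r|_{r=1} = −cos θ reconstructed above:

```python
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)
N1 = sp.cos(theta) / (2 * r**2)   # candidate first-order concentration field

# Axisymmetric Laplacian in spherical coordinates (no phi dependence).
lap = (sp.diff(r**2 * sp.diff(N1, r), r) / r**2
       + sp.diff(sp.sin(theta) * sp.diff(N1, theta), theta)
         / (r**2 * sp.sin(theta)))

print(sp.simplify(lap))                          # 0 -> harmonic
print(sp.simplify(sp.diff(N1, r).subs(r, 1)))    # -cos(theta) -> surface flux
print(sp.limit(N1, r, sp.oo))                    # 0 -> decays at infinity
```

Uniqueness then follows from the usual exterior-Laplace argument: the flux condition fixes the l = 1 coefficient and kills all others.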
So we can put U′ = U′ẑ and look for the value of U′. Using the above result for the concentration profile, the slip velocity on the particle surface can be written as

$$\mathbf{V}'_S = -2\ln\cosh(\zeta_0/4)\,\sin\theta\;\hat{\theta}. \qquad (22)$$

In addition to the above condition, the force-free condition should also be considered. In order to evaluate the particle velocity U′, we proceed by applying the well-known reciprocal theorem of low-Reynolds-number hydrodynamics. 52 The Lorentz reciprocal theorem relates the solutions of two distinct Stokes-flow problems which share the same geometry but have different boundary conditions. According to this theorem, the velocity fields V_I and V_II, and the corresponding stresses σ_I and σ_II, of the two problems are related by a surface integral over the domain boundaries:

$$\oint_S \mathbf{V}_I\cdot\boldsymbol{\sigma}_{II}\cdot\hat{n}\; dS = \oint_S \mathbf{V}_{II}\cdot\boldsymbol{\sigma}_{I}\cdot\hat{n}\; dS. \qquad (23)$$

The integral is over the surfaces that define the boundary. The velocity profiles V_I and V_II are subjected to different boundary conditions on the surface, and both are assumed to vanish at infinity. Here, to use the reciprocal theorem for extracting the swimming velocity of the Janus particle, we choose problems I and II as follows. For case I, we consider V_I as the velocity field of a translating sphere with an arbitrary velocity u_I ẑ. This translating sphere is subjected to the no-slip boundary condition on its surface: on the surface of the sphere we have V_I = u_I ẑ. As a very well known result, for this translating sphere the hydrodynamic force has the simple form F_I = −6π u_I ẑ. For case II, we choose the velocity profile of our main problem, the propelling Janus particle with slip velocity. We consider the problem of this propelling Janus particle in the laboratory reference frame and put V_II = V′ + U′. The slip condition on the particle surface is given by

$$\mathbf{V}_{II}\big|_S = \mathbf{U}' + \mathbf{V}'_S,$$

where V′_S is the slip velocity from Equation 22. One should note that the hydrodynamic fields of both cases, I and II, vanish at infinity. Streamlines of the above two cases are plotted in Figure 2. After defining problems I and II, we can easily see that the left-hand side of Equation 23 reads

$$\oint_S \mathbf{V}_I\cdot\boldsymbol{\sigma}_{II}\cdot\hat{n}\; dS = u_I\,\hat{z}\cdot\mathbf{F}_{II} = 0.$$

The last result comes from the force-free condition of a self-propelling Janus particle. Then from the right-hand side of Equation 23 we have

$$\oint_S \big(\mathbf{U}' + \mathbf{V}'_S\big)\cdot\boldsymbol{\sigma}_{I}\cdot\hat{n}\; dS = 0,$$

where we have used the fact that the force exerted on the particle in the first problem is given by F_I = −6π u_I ẑ. Substituting Equation 22 into the above equation, we will have

$$-6\pi\,\mathbf{U}'\cdot\hat{z}\,u_I = -2\ln\cosh(\zeta_0/4)\oint_S \sin\theta\;\hat{\theta}\cdot\boldsymbol{\sigma}_I\cdot\hat{n}\; dS. \qquad (27)$$

Noting that for a spherical particle n̂·σ_I = −(3/2) u_I ẑ, the propulsion velocity can be obtained as

$$\mathbf{U}' = -\frac{1}{4\pi}\oint_S \mathbf{V}'_S\; dS = -\frac{4}{3}\ln\cosh(\zeta_0/4)\;\hat{z}. \qquad (28)$$

In Figure 3, we have plotted the ionic density profile and the streamlines of the resulting fluid flow around the self-propelled Janus particle. The asymmetric distribution of the ions around the Janus particle emerges from the asymmetric surface activity prescribed on the surface of the motor. Such an asymmetry, when combined with the slip-velocity condition of the macroscale description, provides the essential physical element for producing a finite self-propulsion. After restoring physical dimensions, the speed of the Janus particle is given by

$$U = \frac{4}{3}\,\frac{\varepsilon_r}{\eta}\Big(\frac{k_BT}{e}\Big)^{2}\frac{\tilde{Q}}{D\, n_\infty}\,\ln\cosh\!\Big(\frac{e\zeta_0}{4k_BT}\Big).$$

The Janus particle moves in a direction that is preferred by the asymmetry of the surface reactions. As one can see from the above result, both the electric properties of the Janus particle, given by its surface potential ζ₀, and the strength of the thermal fluctuations, k_BT, have a dominant influence on the functionality of a single motor.
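The surface-average step leading to Equation 28 can also be checked symbolically. The identity U = −(1/4π)∮V_S dS for a force-free sphere with surface slip is the standard Stone-Samuel result; the sketch below is our own check, not the paper's, and evaluates it for a slip profile of the generic form V_S = A sin θ θ̂:

```python
import sympy as sp

theta, phi, A = sp.symbols("theta phi A", real=True)

# Slip velocity V_S = A sin(theta) theta_hat on the unit sphere;
# its z-component uses theta_hat . z_hat = -sin(theta).
Vs_z = A * sp.sin(theta) * (-sp.sin(theta))

# Surface-average identity: U_z = -(1/4pi) * integral of Vs_z over the sphere.
Uz = -sp.Rational(1, 4) / sp.pi * sp.integrate(
    sp.integrate(Vs_z * sp.sin(theta), (theta, 0, sp.pi)), (phi, 0, 2 * sp.pi))
print(sp.simplify(Uz))   # 2*A/3
```

With A = −2 ln cosh(ζ₀/4), the printed result 2A/3 reproduces the −(4/3) ln cosh(ζ₀/4) coefficient quoted in Equation 28.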
In the case of small Péclet number and for high temperature, k_BT ≫ eζ₀, the case of interest for a typical system, the swimming speed can be approximated as

$$U \approx \frac{\pi}{4}\,\frac{\varepsilon_r\, \zeta_0^2\, \tilde{Q}\, a_{\mathrm{ion}}}{k_B T\; n_\infty},$$

where we have used the relations D = k_BT/ξ_ion and ξ_ion = 6πη a_ion, with the size of the ionic molecules given by a_ion. As one can see, the thermal fluctuations have a negative influence on the functionality of the Janus motor. For a typical case with ε_r = 80ε₀, Q̃ = 10⁷ µm⁻² s⁻¹, a_ion = 1 nm and ζ₀ = 0.01 V, we arrive at a speed U ∼ 1 µm s⁻¹. Such a speed is entirely reasonable for a functional motor at the micrometer scale.

FIG. 2. Two different hydrodynamic problems are used in the reciprocal theorem to obtain the swimming velocity of a Janus particle. a) Problem I: the velocity field of a translating sphere with velocity u_I ẑ. b) Problem II: the velocity field of a self-propelling Janus particle with the slip velocity given by Equation 22, in the laboratory reference frame.

After calculating the propulsion velocity, we can investigate the velocity field due to the self-propulsion of this single motor. The above approach works only for evaluating the particle speed; in order to obtain the velocity field we should solve the Stokes equation with the proper boundary conditions. A direct solution of the hydrodynamic equations, presented in Appendix A, reveals that the velocity field of a self-propelled Janus particle in the laboratory frame reads

$$\mathbf{V}'(\mathbf{r}) = \frac{1}{2r^3}\big(3\,\hat{r}\hat{r} - \mathbf{I}\big)\cdot\mathbf{U}'. \qquad (30)$$

One should note that the above velocity field resembles the velocity field due to a dipole of sink and source of potential flow. As a result of the force-free condition, we expected from the beginning that, in a multipole expansion of the velocity field, the source dipole would have the dominant effect.

FIG. 3. Ionic density profile and fluid-velocity streamlines of an electrokinetic self-propelled Janus particle. The asymmetric distribution of ions and the slip velocity on the surface are the essential elements that cause the Janus particle to move.

VI. TWO INTERACTING JANUS PARTICLES

As shown in Figure 4, let us consider two spherically symmetric Janus particles with radii a₁ and a₂. These two particles are separated by a center-to-center vector denoted by D. The position of a general point in space with respect to each motor is given by r₁ and r₂, respectively. The intrinsic propulsion velocities of the Janus particles point along the directions t̂₁ and t̂₂. In reference frames locally attached to each sphere, the polar angles are measured with respect to t̂₁,₂ and are denoted by θ₁,₂ and φ₁,₂. The surface activities of the Janus particles are given by

$$\hat{n}_i\cdot\mathbf{j}_\pm\big|_{S_i} = \tilde{Q}_i\cos\theta_i, \qquad i = 1, 2,$$

where n̂₁ and n̂₂ represent the unit vectors normal to the spheres and the ionic production rates at the surfaces of the motors are denoted by Q̃₁ and Q̃₂. As a result of the calculation given in the previous section, the intrinsic propulsion speeds of the motors, which are the speeds of the isolated motors, are given by Equation 28. We want to calculate the influence of a Janus particle on the speed of a nearby Janus particle. In the limit of a very small Debye layer, the case of interest here, the electric effects of the particles are screened and the direct electrostatic interaction between the particles can be neglected. In this case, the electrohydrodynamic forces should be considered. Neglecting the direct electrostatic interaction between the particles, two types of effects can mediate the electrohydrodynamic forces.
Both the direct hydrodynamic interaction between moving particles immersed in the fluid medium and the change of the flow pattern due to the rearrangement of the ionic species, which are subject to instantaneous boundary conditions on both particles, eventually lead to a coupling between the Janus particles. For simplicity, we call the former the direct and the latter the indirect contribution. The direct (indirect) interaction is due to the instantaneous appearance of the particle positions in the boundary conditions of the hydrodynamic (electrostatic) equations. It is very important to note that both kinds of interaction are hydrodynamic in nature: it is the fluid medium that mediates both types. As an approximation, we assume that these two contributions are additive. The validity of this approximation is guaranteed for very far separated particles, and we will clarify it in more detail in the following sections. Denoting the overall translational and rotational velocities of each particle by U_i and Ω_i, we can write them as

$$\mathbf{U}_i = \mathbf{U}_i^0 + \mathbf{U}_i^{ind} + \mathbf{U}_i^{dir}, \qquad \boldsymbol{\Omega}_i = \boldsymbol{\Omega}_i^0 + \boldsymbol{\Omega}_i^{ind} + \boldsymbol{\Omega}_i^{dir}, \qquad (32)$$

where the intrinsic translational and rotational propulsion velocities of the i'th Janus particle are given by

$$\mathbf{U}_i^0 = U_i^0\,\hat{t}_i, \qquad \boldsymbol{\Omega}_i^0 = 0.$$

From here on, we use the radius of the first sphere, a₁, to non-dimensionalize lengths. This means that our results will depend on a new dimensionless number, e = a₂/a₁. We aim to calculate the direct and indirect contributions. As described before, the coupling between the two particles arises from the boundary conditions, which involve the positions of both particles. The ionic concentration field satisfies the Poisson equation subject to boundary conditions on the outer parts of the Debye layers of both Janus particles. In the limit of a thin Debye layer, δ → 0, the Poisson equation simplifies to ∇²N = 0, where N(r₁, r₂) denotes the ionic concentration outside the Debye layers (the effective bulk property). The ionic concentration satisfies the following boundary conditions:

$$\hat{n}_i\cdot\nabla N\big|_{S_i} = -\tilde{Q}_i\cos\theta_i, \qquad i = 1, 2. \qquad (33)$$

In addition to the above boundary conditions, the velocity field should satisfy the Dukhin-Derjaguin slip condition on the outer surface of the Debye layer of each particle. The hydrodynamic boundary conditions read: 51

$$\mathbf{V}\big|_{S_i} = \mathbf{U}_i + \boldsymbol{\Omega}_i\times\mathbf{r}_i + \zeta_i\,\nabla_{S_i}\Psi + 4\ln\cosh(\zeta_i/4)\,\nabla_{S_i}\ln N, \qquad (34)$$

where the tangential gradient operator is denoted by ∇_{S_i}. The simultaneous application of the two boundary conditions, Equation 33 and Equation 34, is the main difficulty that makes a full analytic solution impossible. In the limit of very far separated motors, D ≫ 1, e ∼ 1, we can proceed with a perturbation method. We first assume that the motors are hydrodynamically uncoupled and that the coupling is only due to the boundary condition on the ionic concentration; this gives us the indirect contribution. Then, to obtain an approximation for the direct hydrodynamic contribution, we assume that the motors are decoupled with respect to the ionic boundary conditions and investigate the hydrodynamic boundary conditions separately. This scheme provides a systematic way to expand the interaction in powers of 1/D. In the following, we first calculate the indirect contribution; the direct hydrodynamic contribution is then calculated as well.

A. Indirect contribution

To obtain the indirect contribution, we should take into account the boundary condition on the concentration field given by Equation 33. For very far separated particles (D ≫ 1), we denote the concentration profiles of the isolated particles by N₁⁰ and N₂⁰.
In this case we can write the real concentration field, which obeys the full boundary conditions, as

$$N = N_1^0 + N_2^0 + \Delta N, \qquad (35)$$

where the deviation from the isolated-particle solutions is denoted by ΔN. As calculated in the previous sections, the concentration profiles of the isolated particles are given by

$$N_i^0 = \tilde{Q}_i\, a_i^3\,\frac{\cos\theta_i}{2 r_i^2}, \qquad a_1 = 1,\ a_2 = e.$$

Writing the deviation from the single-particle profiles as

$$\Delta N = \sum_{\alpha\ge 1} \big(N_1^\alpha + N_2^\alpha\big),$$

we can easily see that the new fields satisfy the Laplace equation, ∇²N_i^α = 0, with the boundary conditions given by

$$\hat{n}_1\cdot\nabla N_1^{\alpha}\big|_{S_1} = -\,\hat{n}_1\cdot\nabla N_2^{\alpha-1}\big|_{S_1}, \qquad \hat{n}_2\cdot\nabla N_2^{\alpha}\big|_{S_2} = -\,\hat{n}_2\cdot\nabla N_1^{\alpha-1}\big|_{S_2}. \qquad (38)$$

Such a hierarchical description of the effects of the motors allows us to develop a systematic expansion in powers of 1/D. To leading order, we can proceed by considering the zeroth-order term. With the zeroth-order concentration profile in hand, we can use Equation 34 and evaluate the slip velocities on the surface of each motor. Performing these calculations, we arrive at the following relations for the slip velocities. On the surface of the first Janus particle, in the laboratory reference frame, the velocity reads

$$\mathbf{V}^{ind}_{S_1} = -2\ln\cosh(\zeta_1/4)\,\sin\theta_1\,\hat{\theta}_1 + 4\ln\cosh(\zeta_1/4)\,\nabla_{S_1} N_2^0\big|_{S_1}, \qquad (39)$$

and on the surface of the second Janus particle, the slip velocity reads

$$\mathbf{V}^{ind}_{S_2} = -2\ln\cosh(\zeta_2/4)\,\sin\theta_2\,\hat{\theta}_2 + 4\ln\cosh(\zeta_2/4)\,\nabla_{S_2} N_1^0\big|_{S_2}.$$

As one can see, the slip velocity on each particle has two contributions. For the first particle, for example, these are the term proportional to U₁⁰ and the terms proportional to U₂⁰. The first term reflects the intrinsic asymmetry of the particle, while the other parts are due to the activity of the second Janus particle. As a result of the above surface slip velocities, the velocity field in the medium deviates from its value for isolated Janus particles. To obtain the full changes in the velocity field, we would need to apply the full hydrodynamic boundary conditions on both particles. Here, we neglect such complexities and simply assume that the velocity profile is still that of isolated Janus particles, but with the modified slip velocities given by the above relations. This is the core of our direct-indirect separation of the effects, and what we obtain with this assumption contains the indirect contribution. In the next section we will come back to this point and take into account the effects neglected here. We write the fluid velocity field in the laboratory frame as

$$\mathbf{V} = \mathbf{V}_1 + \mathbf{V}_2,$$

where the partial flows due to each Janus particle can be written as

$$\mathbf{V}_i = \frac{1}{2r_i^3}\big(3\,\hat{r}_i\hat{r}_i - \mathbf{I}\big)\cdot\big(\mathbf{U}_i^0 + \mathbf{U}_i^{ind}\big) + \dots,$$

where the velocity contributions that the particles acquire from the indirect interaction are denoted by U_i^{ind} and Ω_i^{ind}. The above relations are essentially the velocity profiles of isolated Janus particles given in Equation 30, but with modified swimming velocities. The two important conditions of zero total force and zero total torque are the essential constraints that we apply to the equations in order to obtain U_i^{ind} and Ω_i^{ind}. As in the case of a single Janus particle, we can use the Lorentz reciprocal theorem and extract the required results. The details of these calculations are collected in Appendix B. Assuming for compactness that the two motors carry equal surface potentials, so that the phoretic response of each particle can be expressed through the intrinsic speeds U_i⁰, the final results for the first particle read

$$\mathbf{U}_1^{ind} = \frac{3}{2}\,\frac{e^3\, U_2^0}{D^3}\Big[\hat{t}_2 - 3(\hat{t}_2\cdot\hat{D})\hat{D}\Big],$$

with an analogous expression, carrying the appropriate size factors, for Ω₁^{ind}; and for the second particle we will have

$$\mathbf{U}_2^{ind} = \frac{3}{2}\,\frac{U_1^0}{D^3}\Big[\hat{t}_1 - 3(\hat{t}_1\cdot\hat{D})\hat{D}\Big], \qquad \boldsymbol{\Omega}_2^{ind} = \frac{9}{2}\,\frac{e}{D^4}\,U_1^0\,\big(\hat{t}_1\times\hat{D}\big),$$

where D̂ points from the first motor to the second. The above results present the zeroth-order indirect contributions with respect to the perturbation expansion introduced in Equation 35. As one can see, the result for the translational velocity decays like (1/D)³. To see the effects of the next-order terms, we need to solve the Laplace equation for N_i¹ with the proper boundary conditions given by Equation 38. As the boundary condition decays like (1/D)², and the governing equation is linear, we expect a similar decay for N_i¹.
Now, to calculate the effect of such first-order terms on the velocities, we should repeat the same procedure as described for the zeroth-order term. Continuing the calculation, we eventually arrive at a velocity correction for the Janus particles that decays like (1/D)³ × (1/D)². In the next part we show that the direct hydrodynamic contribution gives corrections that are more important than these. We will therefore ignore the higher-order corrections due to the concentration profile and keep only the zeroth-order contribution.

B. Direct hydrodynamic contribution

In the previous section, we neglected the complexities associated with the simultaneous application of the slip-velocity condition on both particles. We simply assumed that the velocity profile corresponds to isolated Janus particles but with modified slip velocities. Here, we go beyond this simplification and obtain the corrections associated with such complexities. To obtain the overall velocity field V of the complete problem, which takes into account both the direct and the indirect interactions, a Stokes equation with the following boundary conditions on the surfaces of the Janus particles should be solved:

$$\mathbf{V}\big|_{S_i} = \mathbf{U}_i^0 + \mathbf{U}_i^{ind} + \mathbf{U}_i^{dir} + \big(\boldsymbol{\Omega}_i^{ind} + \boldsymbol{\Omega}_i^{dir}\big)\times\mathbf{r}_i + \mathbf{V}^{ind}_{S_i}, \qquad (45)$$

where we have assumed that, as a result of the hydrodynamic interaction between the particles, each Janus particle acquires additional changes in its velocities, given by U_i^{dir} and Ω_i^{dir}. Again, a proper application of the Lorentz reciprocal theorem will help us extract the required velocities. The details of the calculations are presented in Appendix C; here we write the final results. For the first Janus particle, and up to order $\mathcal{O}(1/D^6)$, we will have

$$\mathbf{U}_1^{dir} = \mathbf{u}^\infty(0) + \dots = \frac{1}{2}\,\frac{e^3\, U_2^0}{D^3}\Big[3(\hat{t}_2\cdot\hat{D})\hat{D} - \hat{t}_2\Big] + \dots;$$

the first term, which is proportional to U₂⁰, is the velocity field produced by the second Janus particle, evaluated at the position of the first Janus particle. This is the result we expected from the Faxén theorem for a colloidal particle immersed in an external velocity field. Calculations show that the dominant part of the rotational velocity induced by the direct interaction behaves like (1/D)⁹, and we neglect it here. A similar expression can be obtained for the second Janus particle:

$$\mathbf{U}_2^{dir} = \frac{1}{2}\,\frac{U_1^0}{D^3}\Big[3(\hat{t}_1\cdot\hat{D})\hat{D} - \hat{t}_1\Big] + \dots.$$

Very interestingly, the dominant parts of both the direct and the indirect contributions behave similarly for D ≫ 1. Therefore, the dominant part of the total translational and rotational velocity of the first Janus particle is given by

$$\mathbf{U}_1 = U_1^0\,\hat{t}_1 + \frac{e^3\, U_2^0}{D^3}\Big[\hat{t}_2 - 3(\hat{t}_2\cdot\hat{D})\hat{D}\Big], \qquad \boldsymbol{\Omega}_1 = \boldsymbol{\Omega}_1^{ind},$$

and, as a result of symmetry, the corresponding velocities of the second Janus particle can be written as

$$\mathbf{U}_2 = U_2^0\,\hat{t}_2 + \frac{U_1^0}{D^3}\Big[\hat{t}_1 - 3(\hat{t}_1\cdot\hat{D})\hat{D}\Big], \qquad \boldsymbol{\Omega}_2 = \frac{9}{2}\,\frac{e}{D^4}\,U_1^0\,\big(\hat{t}_1\times\hat{D}\big).$$

In the next part, we show how such interactions modify the trajectories of the Janus particles.

VII. RESULTS AND DISCUSSION

We have shown that, as a result of the coupling between hydrodynamic and electrostatic effects, a long-range interaction between the particles is mediated. For a very thin Debye layer, κa ≫ 1, and a ≪ D, the electrostatic effects of the Janus particles are screened and we have neglected the direct electric interaction between them. In this case all the interactions between the particles have a hydrodynamic origin, which is long-ranged. Such a long-range interaction affects the translational velocity of each motor, and it also introduces a rotational velocity for the motors. For very far separated Janus particles, the leading order of the translational speed scales as $\mathcal{O}\left([1/D]^3\right)$ and the angular velocity scales as $\mathcal{O}\left([1/D]^4\right)$. The interaction modifies the trajectories of the self-propelling Janus particles. To give a qualitative feeling for the interaction, some typical examples of the trajectories are presented in Figure 5.
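Trajectories of the kind shown in Figure 5 can be mimicked with a short integration. The sketch below is our own illustration, not the authors' code: it integrates the leading-order equations of motion quoted above for two equal motors (e = 1, equal intrinsic speeds) in a plane, using the reconstructed O(1/D³) translational and O(1/D⁴) rotational couplings, whose prefactors should be read as assumptions of this sketch.

```python
import numpy as np

U0, dt, steps = 1.0, 0.01, 20000
cross2 = lambda a, b: a[0] * b[1] - a[1] * b[0]   # z-component of a 2D cross product

def step(x1, t1, x2, t2):
    d = x2 - x1
    D = np.linalg.norm(d)
    Dh = d / D
    # O(1/D^3) translational coupling: advection by the partner's source dipole.
    dip = lambda ts: (U0 / D**3) * (ts - 3 * np.dot(ts, Dh) * Dh)
    v1 = U0 * t1 + dip(t2)
    v2 = U0 * t2 + dip(t1)
    # O(1/D^4) rotation rates; D-hat is reversed when seen from motor 1.
    w1 = 4.5 * U0 / D**4 * cross2(t2, -Dh)
    w2 = 4.5 * U0 / D**4 * cross2(t1, Dh)
    def rot(t, w):
        c, s = np.cos(w * dt), np.sin(w * dt)
        return np.array([c * t[0] - s * t[1], s * t[0] + c * t[1]])
    return x1 + v1 * dt, rot(t1, w1), x2 + v2 * dt, rot(t2, w2)

# Two motors starting side by side with parallel orientations:
x1, t1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
x2, t2 = np.array([0.0, 5.0]), np.array([1.0, 0.0])
for _ in range(steps):
    x1, t1, x2, t2 = step(x1, t1, x2, t2)
print(np.linalg.norm(x2 - x1))   # grows above 5: the parallel pair drifts apart
```

Running this for parallel, anti-parallel, and oblique initial orientations reproduces the qualitative behavior described below: parallel pairs repel, while anti-parallel pairs mainly change their relative speed.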
As one can see from the figures, the overall effect of the interaction depends on the initial states of the two Janus particles. In all trajectories, the dashed line corresponds to the trajectory of an isolated Janus particle. For the cases where the particles move in the same direction (the first two trajectories shown in the figure), the interaction has a repulsive signature. But for a case where the particles move in anti-parallel directions (the third trajectory shown in Figure 5), the interaction tends to decrease the relative speed of the two particles. We have used the dominant part of the interactions, but note that, using the method we have developed in this paper, we are able to systematically consider all orders of the perturbation series. Along these lines, we are currently working on the dynamics of a collection of active Janus particles. It is a known fact that hydrodynamic interaction near a rough wall introduces nontrivial effects; 53 in this context we are also analyzing the motion of a single Janus motor near a wall with surface roughness.

FIG. 5. Trajectories of two interacting Janus particles, shown for different initial states of the motors. In all panels the intrinsic speeds of the motors are assumed to be equal. As one can see, for the case of two parallel motors the interaction tends to repel the Janus particles.

Appendix A: Velocity field due to a single active Janus particle

In this appendix, we present the details of the calculation of the velocity field produced by a single moving Janus motor. For this task, we should solve the Stokes equation with boundary conditions given as follows:

$$\mathbf{V}'\big|_{r=1} = \mathbf{V}'_S, \qquad \mathbf{V}' \to -\mathbf{U}' \quad (r\to\infty),$$

in the co-moving frame. To proceed, we eliminate the pressure field from the Stokes equation by evaluating the curl of this equation. This gives us

$$\nabla\times\nabla^2\mathbf{V}' = 0.$$

In terms of the stream function H(r, θ), the velocity field can be written as

$$V'_r = \frac{1}{r^2\sin\theta}\frac{\partial H}{\partial\theta}, \qquad V'_\theta = -\frac{1}{r\sin\theta}\frac{\partial H}{\partial r},$$

where the incompressibility condition ∇·V′ = 0 is built in. According to the boundary condition at infinity, one can suggest a solution for H of the form H(r, θ) = f(r) sin²θ. Using this ansatz, the curl of the velocity field is azimuthal, with magnitude proportional to $\big(f'' - \tfrac{2}{r^2}f\big)\sin\theta/r$. Writing

$$A(r) = f'' - \frac{2f}{r^2},$$

we have the following equation for A(r):

$$A'' - \frac{2A}{r^2} = 0,$$

and its solution is

$$A(r) = B'_1\, r^2 + \frac{B'_2}{r},$$

where B′₁, B′₂ are integration constants. The function f then satisfies the differential equation

$$f'' - \frac{2f}{r^2} = B'_1\, r^2 + \frac{B'_2}{r},$$

which has a general solution of the form

$$f(r) = B_1\, r^4 + B_2\, r + C_1\, r^2 + \frac{C_2}{r}.$$

Collecting all the above results, we can write the fluid velocity field, where B₁, B₂, C₁ and C₂ are constants that can be easily determined by applying the boundary conditions. Finally, the fluid velocity field due to a moving Janus particle in the co-moving reference frame reads

$$\mathbf{V}'(\mathbf{r}) = \frac{1}{2r^3}\big(3\,\hat{r}\hat{r} - \mathbf{I}\big)\cdot\mathbf{U}' - \mathbf{U}'. \qquad (31)$$

Appendix B: Indirect contribution to the interaction

In this appendix we present the calculations that give the indirect contribution to the interaction between two Janus particles. We consider only the contribution to the velocity of the first motor; symmetry arguments will then give the velocity change of the second motor as well. As in the case of a single Janus particle, we use the Lorentz reciprocal theorem to extract the indirect contribution. To use the reciprocal theorem, we need to define a pair of hydrodynamic problems which share a common geometry but have different boundary conditions. Let us start by defining problem II. Case II corresponds to our main problem, defined in the section on the indirect contribution.
We put V_II = V^{ind}, subjected to the following boundary conditions:

$$\mathbf{V}_{II}\big|_{S_1} = \mathbf{U}_1^0 + \mathbf{U}_1^{ind} + \boldsymbol{\Omega}_1^{ind}\times\mathbf{r}_1 + \mathbf{V}^{ind}_{S_1},$$

where we would like to find U₁^{ind} and Ω₁^{ind}; note that the calculations should be done in the laboratory frame. Here V^{ind}_{S1} is the slip velocity given by Equation 39, and U₁⁰ is the first-particle velocity from Equation 32. As we are looking for 6 unknown variables (the components of U₁^{ind} and Ω₁^{ind}), we can consider 6 different choices for problem I. To evaluate the 3 components of U₁^{ind}, we choose V_I as the velocity field of a translating particle with velocity u_I along the three major Cartesian directions, and to evaluate the 3 components of Ω₁^{ind}, we choose the velocity field of a rotating particle with angular velocity ω_I along the three major directions. On the surface of the first particle, we have V_I|_{S1} = u_I + ω_I × r₁, where j = 1, 2, 3 denote the cases of translation along the directions x̂, ŷ and ẑ, respectively, and j = 4, 5, 6 denote the corresponding cases of rotation about x̂, ŷ and ẑ. Note that for all of the above 6 cases, on the surface of the second particle we have V_I|_{S2} = 0. Substituting these choices into the left-hand side of Equation 23, we obtain

$$\oint_{S_1} \mathbf{V}_I\cdot\boldsymbol{\sigma}_{II}\cdot\hat{n}\; dS = \oint_{S_1}\big(\mathbf{u}_I + \boldsymbol{\omega}_I\times\mathbf{r}_1\big)\cdot\boldsymbol{\sigma}_{II}\cdot\hat{n}\; dS = \mathbf{u}_I\cdot\oint_{S_1}\boldsymbol{\sigma}_{II}\cdot\hat{n}\; dS + \boldsymbol{\omega}_I\cdot\oint_{S_1}\mathbf{r}_1\times\boldsymbol{\sigma}_{II}\cdot\hat{n}\; dS,$$

where F_II and L_II are the force and torque exerted on particle 1, both of which are equal to zero. Then from the right-hand side of Equation 23 we have

$$\big(\mathbf{U}_1^0 + \mathbf{U}_1^{ind}\big)\cdot\oint_{S_1}\boldsymbol{\sigma}_I\cdot\hat{n}\; dS + \boldsymbol{\Omega}_1^{ind}\cdot\oint_{S_1}\mathbf{r}_1\times\boldsymbol{\sigma}_I\cdot\hat{n}\; dS + \oint_{S_1}\mathbf{V}^{ind}_{S_1}\cdot\boldsymbol{\sigma}_I\cdot\hat{n}\; dS = 0,$$

where F_I = ∮_{S1} σ_I·n̂ dS and L_I = ∮_{S1} r₁ × σ_I·n̂ dS are the force and torque exerted on the particle in problem I. Now, for the first three problems (j = 1, 2, 3), for which

$$\boldsymbol{\omega}_I = 0, \qquad \mathbf{F}_I = -6\pi\,\mathbf{u}_I, \qquad \mathbf{L}_I = 0, \qquad \boldsymbol{\sigma}_I\cdot\hat{n} = -\tfrac{3}{2}\,\mathbf{u}_I,$$

we have

$$-6\pi\big(\mathbf{U}_1^0 + \mathbf{U}_1^{ind}\big)\cdot\mathbf{u}_I + \oint_{S_1}\mathbf{V}^{ind}_{S_1}\cdot\boldsymbol{\sigma}_I\cdot\hat{n}\; dS = 0,$$

and

$$\oint_{S_1}\mathbf{V}^{ind}_{S_1}\cdot\boldsymbol{\sigma}_I\cdot\hat{n}\; dS = -\frac{3}{2}\oint_{S_1}\mathbf{V}^{ind}_{S_1}\; dS\cdot\mathbf{u}_I.$$

Thus the desired velocity due to the indirect interaction is obtained from

$$\mathbf{U}_1^0 + \mathbf{U}_1^{ind} = -\frac{1}{4\pi}\oint_{S_1}\mathbf{V}^{ind}_{S_1}\; dS,$$

whose intrinsic part reproduces U₁⁰ and whose coupling part gives U₁^{ind}. With similar calculations for the other three problems (j = 4, 5, 6), we can find Ω₁^{ind} from

$$\boldsymbol{\Omega}_1^{ind} = -\frac{3}{8\pi}\oint_{S_1}\hat{n}\times\mathbf{V}^{ind}_{S_1}\; dS.$$

Similarly, for the second particle we have

$$\mathbf{U}_2^0 + \mathbf{U}_2^{ind} = -\frac{1}{4\pi e^2}\oint_{S_2}\mathbf{V}^{ind}_{S_2}\; dS, \qquad \boldsymbol{\Omega}_2^{ind} = \frac{9}{2}\,\frac{e}{D^4}\,U_1^0\,\big(\hat{t}_1\times\hat{D}\big).$$

Appendix C: Direct contribution to the interaction

We have denoted the overall velocity field of the full problem of interacting Janus particles by V; it is constrained by the boundary conditions given in Equation 45. Here we apply the reciprocal theorem to extract the direct hydrodynamic interaction between the Janus particles. In the absence of direct hydrodynamic interaction, we denote the velocity field produced by the second Janus particle by u∞(r). The first Janus particle floats in this field and is subjected to the proper boundary conditions. To use the reciprocal theorem, we consider case II as V_II = V − u∞(r), subjected to the following conditions:

$$\mathbf{V}_{II}\big|_{S_1} \approx \mathbf{U}_1^0 + \mathbf{U}_1^{ind} + \mathbf{U}_1^{dir} + \big(\boldsymbol{\Omega}_1^{ind} + \boldsymbol{\Omega}_1^{dir}\big)\times\mathbf{r}_1 + \mathbf{V}^{ind}_{S_1} - \mathbf{u}^\infty(0),$$

where u∞(0) is the flow field due to the second particle evaluated at the center of the first particle. U₁^{ind} and Ω₁^{ind} are the first Janus particle's velocities due to the indirect hydrodynamic interaction, and V^{ind}_{S1} is the slip condition given by Equation 39. The velocities U₁^{dir} and Ω₁^{dir} are the unknowns that we want to find. So we have six unknown components and we should apply the Lorentz theorem six times, as we did in Appendix B. We consider the problem of case I as a spherical particle which moves with constant translational and rotational velocities given by u_I and ω_I.
This sphere is immersed in an external velocity field given by u∞(r) and is subjected to the no-slip boundary condition. Therefore, in the laboratory reference frame, on the surfaces of the Janus particles we have V_I|_{S1} = u_I + ω_I × r₁ − u∞(0) and V_I|_{S2} = 0. To evaluate the different components of the unknown velocities, we choose six different cases for u_I and ω_I, as in Appendix B. Now we consider the three problems j = 1, 2, 3 (the problems defined in Appendix B). The right-hand-side terms are calculated as

$$\oint_{S_1}\big(\mathbf{U}_1^0 + \mathbf{U}_1^{ind} + \mathbf{U}_1^{dir} - \mathbf{u}^\infty(0)\big)\cdot\boldsymbol{\sigma}_I\cdot\hat{n}\; dS = -6\pi\big(\mathbf{U}_1^0 + \mathbf{U}_1^{ind} + \mathbf{U}_1^{dir} - \mathbf{u}^\infty(0)\big)\cdot\mathbf{u}_I,$$

and

$$\oint_{S_1}\mathbf{V}^{ind}_{S_1}\cdot\boldsymbol{\sigma}_I\cdot\hat{n}\; dS = -\frac{9}{4}\big(\mathbf{U}_1^0 + \mathbf{U}_1^{ind}\big)\cdot\oint_{S_1}\big(\hat{r}\hat{r} - \mathbf{I}\big)\, d\cos\theta\, d\varphi\cdot\mathbf{u}_I = 6\pi\big(\mathbf{U}_1^0 + \mathbf{U}_1^{ind}\big)\cdot\mathbf{u}_I.$$

Now, collecting the above results, we have

$$-6\pi\big(\mathbf{U}_1^{dir} - \mathbf{u}^\infty(0)\big)\cdot\mathbf{u}_I + (\text{higher-order slip integrals}) = 0.$$

The final result for U₁^{dir} can be written as

$$\mathbf{U}_1^{dir} = \mathbf{u}^\infty(0) + \dots,$$

in agreement with the expression quoted in the main text. Now, for evaluating Ω^{dir}, we consider the three problems given as j = 4, 5, 6 in Appendix B. For these cases we have

$$\mathbf{u}_I = 0, \qquad \mathbf{F}_I = 0, \qquad \mathbf{L}_I = -8\pi\,\boldsymbol{\omega}_I, \qquad \boldsymbol{\sigma}_I\cdot\hat{n} = -3\,\boldsymbol{\omega}_I\times\hat{r}_1.$$

So the right-hand-side terms of the reciprocal integral are
2015-10-27T13:01:15.000Z
2015-10-27T00:00:00.000
{ "year": 2015, "sha1": "5115430cdc6dd2713ae6a4321650f13a43588684", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1510.07891", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8d832412fb5404e217c801adef208b5f7f72a427", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Physics" ] }
248146681
pes2o/s2orc
v3-fos-license
Prevalence of Infection and Changing Pattern of Organisms Causing Infections in Childhood Nephrotic Syndrome

Background: Infection remains an important complication in children with nephrotic syndrome. It results in significant morbidity and may also be responsible for a poor response to steroid therapy, or may induce relapse in a child who has already attained remission. Objectives: This study was conducted to find out the pattern of infection and the types of organisms causing infections in nephrotic syndrome. Methods: This cross-sectional study was conducted in the Paediatric Nephrology Department of Dhaka Shishu (Children) Hospital from January 2010 to November 2010. One hundred fifteen (115) cases of nephrotic syndrome, aged between 1 and 13 years, were enrolled according to the inclusion criteria. Along with routine investigations, urine, blood and throat swab culture and sensitivity and the MT test were done. Risk factors for infection were also determined. Statistical analysis was done with SPSS version 12. The level of significance was taken as <0.05. Results: The prevalence of infection in nephrotic syndrome was 54.78%. Infections were more common in childhood nephrotic syndrome below 6 years of age. The infections encountered in nephrotic syndrome were UTI 51 (44.34%), septicemia 4 (3.47%), pneumonia 5 (4.34%), peritonitis 1 (0.87%), cellulitis 1 (0.87%) and tuberculosis 1 (0.87%). Statistically significant risk factors associated with infection were generalized edema, steroid dependence, steroid resistance, persistent proteinuria and a high spot urine protein-creatinine ratio. In UTI, E. coli was the commonest organism, 27 (52.9%), followed by Morganella and Pseudomonas, 5 (9.8%) each. Conclusion: The prevalence of infection in nephrotic syndrome is very high, and E. coli was the commonest organism found in this study. Generalized edema, persistent proteinuria, hypoalbuminemia, steroid dependence and steroid resistance are important risk factors for infection.

Introduction

About one third of all nephrotic syndrome admissions were due to an infection. In one study the rate of infection was 38%. 1 A recent study by Moorani et al 2 points to URTI, cellulitis, diarrhoea, UTI and peritonitis as the most frequent infections. In the various published results, the type of infection is variable. 3 A study conducted by Senguttuvan et al 4 found UTI to be the commonest infection, followed by peritonitis, acute RTI and tuberculosis. Regarding the bacteriology of UTI, 76.7% of urine cultures were positive and 23.3% were culture-negative. Regarding causative organisms, E. coli was the commonest organism (36.6%), followed by Klebsiella (27.5%). This varies from the study by Gulati et al., where E. coli was responsible for 60% of cases and non-E. coli organisms accounted for 39% of the culture isolates in UTI. 5 In children with nephrotic syndrome, serious infection may be acute and fulminant and may manifest with vague or nonspecific features, which may delay an early diagnosis, 6 because fever and physical findings may be minimal in the presence of corticosteroid therapy. A high index of suspicion, prompt evaluation and early initiation of antibiotic therapy are critical. 7 Occult infections may manifest as steroid non-response or as relapse in a child who has already attained remission. Therefore, it is essential to know the current prevalence of infection in children with nephrotic syndrome and the organisms prevalent in our setting, in order to decide on appropriate antibiotics and the duration of treatment, and to observe the response to treatment and how quickly patients enter remission.
Knowledge of the etiological profile of infection in nephrotic children will help to raise the awareness of treating physicians so that avoidable infectious processes can be minimized. It is important to know the infectious agents and their sensitivity patterns among nephrotic patients, who are already immunocompromised and also need treatment with immunosuppressive agents. Different studies conducted in both developed and developing countries show different prevalence rates for different types of infections, and changing organisms are also found, along with a changing pattern of antibiotic sensitivity. There is overcrowding, malnutrition, and an increased prevalence of infections in Bangladesh, and patients with nephrotic syndrome are immunocompromised. This study was therefore designed to find out the types of infections and their changing pattern of sensitivity to antibiotics, in order to reduce morbidity and mortality.

Materials and Methods
This cross-sectional study was conducted in the Paediatric Nephrology Department of Dhaka Shishu (Children) Hospital from January 2010 to November 2010. Prior to the commencement of this study, the research was approved by the Institutional Ethical Review Committee. All nephrotic syndrome patients admitted to the hospital during the study period were included. Critically ill patients with respiratory distress and children with ARF/CRF/urogenital anomalies were excluded. Risk factors for infection were also determined. Thorough history taking and an elaborate clinical examination were performed and recorded on an appropriate questionnaire. Routine investigations such as urine microscopy, urine culture, spot urine protein creatinine ratio, lipid profile, complete blood count (CBC) with examination of the blood film, platelet count, ESR, serum total protein (STP), serum albumin, serum electrolytes, blood urea, serum creatinine, and ultrasonography of the KUB region were done in all patients. Renal biopsy was done in cases with indications such as persistent hematuria, persistent hypertension, hypocomplementemia, impaired renal function, frequently relapsing nephrotic syndrome (FRNS) with steroid toxicity, FRNS with steroid dependence, or steroid non-response. HBsAg and anti-HCV were tested in all patients by ELISA. These children were screened for other infections by one or more of the following investigations as needed: peritoneal fluid and cerebrospinal fluid examination (Gram stain, cytology, biochemistry, and culture) and chest X-ray. Along with the routine investigations, urine culture and sensitivity, blood culture and sensitivity, throat swab culture and sensitivity, and MT test were done. Urine specimens were collected under the direct supervision of a staff nurse or doctor after proper cleaning of the genital area with soap and water. In boys, the glans was washed after retracting the foreskin, and in girls, urine was collected after washing the vulva, with the legs and labia held apart. Clean-catch, freshly voided, midstream urine was collected in a sterile container, transferred to the laboratory, and subjected to microscopy as early as possible. Urine and blood cultures were done on blood agar and MacConkey agar media. A positive urine culture was defined as a midstream clean-voided specimen with isolation of ≥10⁵ CFU/ml of a single organism. When the colony count was <10⁴ organisms/ml, or when there was mixed growth, the culture was repeated.
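To make the culture-interpretation rule above concrete, here is a minimal sketch in R (the function name and the example colony counts are hypothetical; the study applied the rule manually in the laboratory):

```r
# Interpret a midstream urine culture using the thresholds above:
# mixed growth or a count below 10^4 CFU/ml -> repeat the culture;
# a single organism at >= 10^5 CFU/ml -> positive;
# single-organism counts between 10^4 and 10^5 CFU/ml are not defined
# by the study's rule and are flagged as equivocal here.
interpret_culture <- function(cfu_per_ml, n_organisms) {
  if (n_organisms > 1 || cfu_per_ml < 1e4) {
    "repeat culture"
  } else if (cfu_per_ml >= 1e5) {
    "positive"
  } else {
    "equivocal"
  }
}

interpret_culture(2e5, 1)  # "positive"
interpret_culture(5e3, 1)  # "repeat culture"
```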
All investigations were done in the Microbiology, Pathology, and Biochemistry Departments of Dhaka Shishu (Children) Hospital. Radiological investigations, i.e., chest X-ray and ultrasonography (USG) of the kidney, ureter, and bladder (KUB) region, were done at the Department of Radiology, Dhaka Shishu (Children) Hospital. Data entry and analysis were done using SPSS version 12. In addition to descriptive statistics such as frequency tabulation, mean, and standard deviation, statistical tests such as the 't' test were applied as appropriate to determine statistically significant associations. The level of significance was taken as <0.05.

Results
A total of 115 children with nephrotic syndrome were included in this study, and among them 63 (54.78%) presented with infection (Figure 1). Infection in nephrotic syndrome was most common (63.49%) in children aged between 2 and 6 years (Table I). Statistically significant risk factors associated with infection were generalized edema, steroid dependence, steroid resistance, persistent proteinuria, and a high spot urine protein creatinine ratio (Table II).

Discussion
Gulati et al. 5 found the mean age to be 6.2 years. In this study, the increased susceptibility to UTI in the lower age group of nephrotic children may be due to significant hypoalbuminemia; this factor may have a pathophysiological role in predisposing younger children to infection. Moreover, nephrotic syndrome in childhood is common between two and six years of age, and such children were found to be more immunodeficient during active disease and more prone to bacterial infection. 7 The majority of our study population were from rural areas (73.19%). Our study subjects were mostly from poor and average social classes (52.17%). There was a definite reason for this: patients from different parts of the country were referred to Dhaka Shishu (Children) Hospital, and the treatment cost was minimal. Most of the mothers were illiterate or had only primary education, so they had very poor knowledge regarding hygiene, sanitation, and infection control. In this study we did not find measles or varicella infection, but tuberculosis cases were detected. This reflects the effect of the successful immunization programme against those diseases.

Conclusion
The prevalence of infection in nephrotic syndrome is very high, and E. coli is the commonest organism found in this study. Early intervention in relapse is important, as generalized edema, persistent proteinuria, hypoalbuminemia, steroid dependence, and steroid resistance are important risk factors for infection.
Effect of endurance training on retinol-binding protein 4 gene expression and its protein level in adipose tissue and the liver in diabetic rats induced by a high-fat diet and streptozotocin

Abstract
Aims/Introduction: The present study was designed to investigate from which tissues a decrease in retinol-binding protein 4 (RBP4) expression could contribute to the improvement of serum RBP4 and insulin resistance (IR) after endurance training. Materials and Methods: Male 7-week-old Wistar rats were randomly assigned to four groups: control (C), trained (T), diabetic control (DC), and trained diabetic (TD). At 8 weeks of age, diabetes was induced by a high-fat diet and intraperitoneal injection of low-dose streptozotocin (STZ; 35 mg/kg). Rats in the T and TD groups carried out a 7-week exercise program on a motorized treadmill (initially 15–20 m/min for 20 min/day for the first 5 days), whereas the C and DC groups remained sedentary in their cages. Tissue gene expression and protein levels of RBP4 were assessed using real-time polymerase chain reaction and western blot, respectively, while serum RBP4 was measured using an enzyme-linked immunosorbent assay kit. Results: Exercise significantly improved IR and reduced the serum concentration of RBP4 in the TD group. This reduction of serum RBP4 was accompanied by decreased RBP4 protein expression in visceral fat tissue. In contrast, exercise had no significant effect on RBP4 expression in the liver and subcutaneous fat tissue in the TD group. Exercise also significantly decreased RBP4 gene expression in visceral fat tissue and muscle, whereas the effect of exercise on liver RBP4 messenger ribonucleic acid expression was not significant. Conclusions: The present study showed that the mechanism for the RBP4-reducing effect of endurance training could involve decreased RBP4 messenger ribonucleic acid expression and its protein level in adipose tissue in STZ-induced diabetic rats.

INTRODUCTION
The beneficial effect of exercise training on reduced insulin resistance in streptozotocin (STZ)-induced diabetes 1 could be partly mediated by changing the secretion of adipokines. One of these adipokines is retinol-binding protein 4 (RBP4), which is suggested to be involved in the development of insulin resistance [2][3][4][5][6][7]. Based on the evidence, increased serum RBP4 concentration reduces insulin-dependent glucose uptake by muscle tissue through reducing phosphoinositide-3-kinase (PI-3-kinase) activity and the subsequent phosphorylation of insulin receptor substrate-1 (IRS-1), which are necessary components of the insulin signaling pathway 2. In contrast, in the liver, RBP4 increases the expression of the enzyme phosphoenolpyruvate carboxykinase (PEPCK), which eventually leads to increased hepatic glucose output and serves to raise blood glucose 3. In addition, a negative effect of RBP4 on the secretory function of β-cells has been suggested 6,8. Some studies have considered the impact of exercise on insulin resistance and serum RBP4. The results of these studies showed that the insulin-sensitizing effect of exercise was accompanied by reduced RBP4 concentration. A significant decrease of serum RBP4 and an improvement of insulin sensitivity were observed in a wide range of subjects with insulin resistance 3. Similar findings have also been observed in obese children after exercise 9. However, some studies reported that normalization of insulin sensitivity was not mediated by changes in serum RBP4 in an exercised animal model 10.
Although serum RBP4 seems to be decreased by exercise in most cases, some studies could not find any association between RBP4 and insulin resistance 10,11. Therefore, further investigation of the effect of exercise on insulin resistance and serum RBP4 is still necessary. In addition, the effects of exercise on RBP4 expression in organs such as the liver and adipose tissue have not been reported, and it is not clear from which tissue a decrease in RBP4 expression could contribute to the improvement of serum RBP4 and insulin resistance after endurance training. Thus, in the present study, we investigated whether endurance exercise could change circulating RBP4 in STZ-induced diabetic rats, and whether these changes would be accompanied by changes in RBP4 messenger gene expression and its protein level in the liver, adipose tissue, and muscles. Finally, we investigated in which tissue the degree of change in RBP4 expression was highest.

Animals
Male 5-week-old Wistar rats were purchased from the Pasteur Institute (Tehran, Iran), and the animals were maintained in an air-conditioned room (temperature 22 ± 3°C) with a 12-h light–dark cycle. All rats were fed a chow diet and water ad libitum during the 2-week acclimation period, while their bodyweights were monitored daily. At the age of 7 weeks, the rats were randomly assigned to four groups: control (C), trained (T), diabetic control (DC), and trained diabetic (TD). The protocol of this experimental study was approved by the Ethical Committee on Animal Care at the Endocrinology and Metabolism Research Center of Tehran University of Medical Sciences.

Diabetes Induction Method
At the beginning of the rats' eighth week of age, all diabetic rats were fed a high-fat diet (Table 1) for 2 weeks, and then diabetes was induced by intraperitoneal injection of low-dose STZ (35 mg/kg) dissolved in citrate buffer (0.01 mol/L, pH 4.5). Blood samples were obtained from the orbital sinus and analyzed for non-fasting glucose. Blood was collected through a heparinized tube, and plasma separation was carried out by centrifugation at 3,000 g for 10 min at 4°C. Plasma glucose concentration was determined by the glucose oxidase method using an autoanalyzer 12 (Hitachi, Hitachi, Japan). Animals with a non-fasting plasma glucose level of ≥300 mg/dL were considered diabetic, then housed in a separate cage and fed a high-fat diet (HFD) for 7 weeks. The rats in the C and T groups were fed a conventional pellet diet.

Training Intervention
Exercise training was started at 10 weeks of age. Endurance training was carried out every day for 7 weeks. Initially, the T and TD groups were familiarized with a motor-driven treadmill running at low speeds (15–20 m/min) for 20 min/day for the first 5 days. Thereafter, the duration was increased gradually over the 7-week period, until the animals were running for 35 min/day at 30 m/min for the last 2 weeks. Electrical shock was not used to force the rats to run. The C and DC rats remained sedentary in their cages for the duration of the 7-week training program.

Collection of Blood and Tissue Samples
At the end of the experimental period, the animals were fasted overnight but were allowed to take water. The next morning, the animals were anesthetized (ketamine [90 mg/kg] and xylazine [10 mg/kg]), and the soleus and extensor digitorum longus (EDL) muscles, visceral and subcutaneous fat, and liver were excised rapidly, frozen in liquid nitrogen, and stored at -80°C for further analysis.
Blood samples were withdrawn by heart puncture, and plasma or serum was separated as aforementioned and stored at -80°C. These samples were used for the measurement of fasting blood glucose, fasting plasma insulin, fasting serum RBP4 concentration, and plasma lipid profile.

Insulin Resistance Confirmation
Type 2 diabetic rats were selected according to the following insulin resistance criteria: (i) a fasting plasma insulin level above 60 pmol/L; and (ii) homeostasis model assessment of insulin resistance (HOMA-IR, commonly calculated as fasting glucose [mmol/L] × fasting insulin [µU/mL]/22.5) >2.5 13.

Biochemical Measurement
Plasma glucose was determined by the glucose oxidase method 12. Serum RBP4 was quantitatively measured by enzyme-linked immunosorbent assay (cat: RB0642EK; AdipoGen Inc., Seoul, Korea). The sensitivity of the assay was 60 pg/mL, and the intra-assay and interassay coefficients were lower than 10%. Plasma insulin was measured by a mouse/rat insulin enzyme-linked immunosorbent assay kit (cat: EZRMI-13K; Millipore, Billerica, MA, USA). The sensitivity of the assay was 0.1 ng/mL, and the intra-assay and interassay coefficients were lower than 10%. Total cholesterol and triglycerides were determined by standard enzymatic procedures using an autoanalyzer (Hitachi).

Western Blotting
Approximately 150–200 mg of each tissue was powdered with a cold mortar and pestle in liquid nitrogen. Muscle and liver were homogenized in 1 mL of phosphate-buffered saline (PBS) and centrifuged at 12,000 g for 15 min at 4°C, and the supernatant was removed. Fat samples were lysed in Tris buffer (20 mmol/L Tris; 1% NP-40; 137 mmol/L NaCl; 1 mmol/L CaCl2; 1 mmol/L MgCl2; 10% (v/v) glycerol; 1 mmol/L dithiothreitol; 1 mmol/L phenylmethanesulphonyl fluoride; 2 mmol/L Na3VO4). Total protein was determined by the Bradford method using bovine serum albumin (BSA) as a standard. A total of 20 µg of total protein from each sample was loaded and separated by 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis and transferred by electroblotting onto polyvinylidene difluoride (PVDF) membranes (Amersham Pharmacia Biotech, Uppsala, Sweden). Membranes were incubated for 1 h at room temperature on an orbital shaker in blocking solution (150 mmol/L NaCl, 0.1% Tween 20, and 50 mmol/L Tris, pH 7.5 [TTBS], with 5% skimmed milk) and then incubated in primary antibody overnight at 4°C in Tris-buffered saline. Membranes were washed once for 15 min and then twice for 5 min in TTBS, and then incubated for 90 min at room temperature with secondary antibody in Tris-buffered saline. Membranes were washed as aforementioned, and protein expression was then detected by enhanced chemiluminescence according to the manufacturer's instructions. Autoradiographic film was exposed to the membranes and developed. Molecular weight standards were used to identify appropriate antibody binding. Band densities were determined with image densitometer software. Rat RBP4 protein was used as a positive control and to fix an arbitrary unit to allow comparison between experiments (1 equals the RBP4 signal generated by 5 ng of RBP4 protein).
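The arbitrary-unit convention just described can be sketched as follows (a minimal illustration in R; the densitometry readings are hypothetical, as the study does not report raw band densities):

```r
# Express band densities in arbitrary units, where 1 unit equals the
# signal generated by the 5 ng RBP4 positive control run on the blot.
std_density  <- 1250                              # hypothetical density of the 5 ng standard
band_density <- c(C = 900, DC = 3400, TD = 2100)  # hypothetical sample band densities
expression_au <- band_density / std_density       # arbitrary units per sample
round(expression_au, 2)                           # e.g., DC = 2.72 arbitrary units
```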
Real Time Polymerase Chain Reaction
Tissue was powdered with a cold mortar and pestle, and total ribonucleic acid (RNA) was isolated using Isol RNA-Lysis reagent. Approximately 50 mg of powdered tissue was added to 1 mL of ice-cold Isol and homogenized. Homogenates were centrifuged at 12,000 g for 10 min at 4°C to remove the pellet. Chloroform (200 µL) was added to the supernatant fraction and shaken vigorously for 15 s. The organic and aqueous phases were separated by centrifugation at 12,000 g for 15 min. The aqueous phase was removed, 600 µL of isopropanol was added, and RNA was isolated according to the manufacturer's instructions. RNA concentration and purity were estimated by OD 260/280. Complementary deoxyribonucleic acid (cDNA) synthesis was carried out with 1 µg of RNA in a total reaction volume of 20 µL using random hexamer oligonucleotides. Reverse transcriptase reactions were carried out according to the manufacturer's instructions. Quantitative real-time polymerase chain reaction (PCR) was carried out using a 7300 Real Time PCR System (Applied Biosystems, Step One, Weiterstadt, Germany). The PCR reaction was carried out using SYBR Green II, and ROX was used as a reference dye. The concentrations of each primer and of cDNA were 100 pM and 100 ng, respectively. The thermocycling conditions were: 10 min at 95°C, followed by 40 cycles at 95°C for 15 s and 60°C for 60 s. Table 2 shows the primer sequences used for real-time PCR. Gene expression was expressed relative to the expression of the 18S housekeeping gene. To avoid detection of non-specific PCR products, the purity of each amplified product was confirmed using melting curve analysis. Data quantification was carried out using the 2^–ΔΔCT method 14. Primer amplification efficiencies were determined using serial cDNA dilutions and were found to be approximately equal.
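As an aside, the 2^–ΔΔCT quantification cited above (reference 14) can be illustrated with a minimal R sketch; the cycle-threshold (CT) values below are hypothetical, and 18S is used as the housekeeping gene as in the study:

```r
# 2^-ddCT method: relative target-gene expression normalized to 18S,
# expressed as the fold change of a treated sample over the control group.
ct_rbp4_ctrl <- c(24.1, 24.3, 23.9)   # hypothetical RBP4 CTs, control group
ct_18s_ctrl  <- c(12.0, 12.1, 11.9)   # hypothetical 18S CTs, control group
ct_rbp4_trt  <- 22.5                  # hypothetical RBP4 CT, one treated sample
ct_18s_trt   <- 12.0                  # hypothetical 18S CT, same sample

d_ct_ctrl <- mean(ct_rbp4_ctrl - ct_18s_ctrl)  # mean dCT of the calibrator group
d_ct_trt  <- ct_rbp4_trt - ct_18s_trt          # dCT of the treated sample
dd_ct     <- d_ct_trt - d_ct_ctrl              # ddCT
2^(-dd_ct)  # fold change; > 1 means higher expression than the control mean
```

The method assumes approximately equal primer amplification efficiencies, which is why the serial-dilution check described above matters.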
Statistical Analysis
All data are shown as means ± standard deviation. Differences were considered significant if P ≤ 0.05. Comparisons of variables between the study groups were carried out using a one-way analysis of variance (ANOVA) test. When a significant effect was found, post-hoc Scheffé analysis was carried out to determine which mean differences were significant. All analyses were carried out using SPSS version 16 (SPSS, Chicago, IL, USA).

RESULTS
Induction of diabetes caused a significant increase in bodyweight, fasting glucose, insulin, and the homeostasis model assessment index. Diabetes also resulted in increased levels of triglyceride and cholesterol (Figure 1, Table 3). Exercise caused a significant decrease in bodyweight in the TD group. Fasting insulin and glucose were significantly lower in the TD group compared with the DC group; however, they were still significantly higher compared with the C group. Lipid profile values also decreased significantly in the TD group, although they were still much higher than those in the C and T groups (Table 3).

Change of Serum Concentration of RBP4 and RBP4 Protein Expression
Diabetes induced a significant increase in the serum level of RBP4 (almost a threefold increase in the DC group compared with the C group). RBP4 expression in the liver, adipose tissues, and muscle was significantly higher in the DC group than in the other groups. Among these tissues, the liver showed the highest increase in RBP4 expression (almost a 3.5-fold increase in DC compared with the C group; Table 4). Exercise led to a reduction in the serum concentration of RBP4 in the T group; however, a significantly greater reduction occurred in the TD group (Figure 2). Exercise also significantly decreased RBP4 protein expression in visceral fat tissue in the TD group compared with the diabetic group, although these reductions did not reach the levels of the C group. In contrast, exercise had no significant effect on RBP4 protein expression in the liver, muscle, and subcutaneous fat tissue in the TD group (Table 4).

RBP4 Gene Expression
Diabetes induced RBP4 gene expression in adipose tissue, the liver, and muscle. The highest increases were observed in visceral fat and the liver, respectively, whereas muscle showed the lowest increase (Figure 3). These findings were consistent with RBP4 protein expression in the liver, fat tissues, and muscle. Exercise significantly decreased RBP4 gene expression in fat tissues, with the greatest change observed in visceral fat. Exercise also induced a significant change in muscle RBP4 expression. However, the effect of exercise on liver RBP4 mRNA expression was not significant (Figure 3).

DISCUSSION
In the present study, we tried to answer the question of a reduction in which of the three tissues' RBP4 expression (liver, adipose tissue, or muscle) would be accompanied by a decrease in serum RBP4 concentration in response to endurance training in type 2 diabetic rats. In addition, simultaneous measurement of RBP4 expression in the three aforementioned tissues was carried out to investigate which tissue is the predominant source of serum RBP4 in the insulin-resistant state. It is notable that this is the first report showing the differential expression pattern of RBP4 between adipose tissue, liver, and muscle in response to exercise training. In the present study, the rat model of type 2 diabetes was developed by combining an HFD, which produced insulin resistance, with a low dose of STZ injection, which caused the initial β-cell dysfunction. This kind of rat model closely simulates the metabolic disturbances that occur in type 2 diabetes 15. Thus, this model could be suitable for evaluating the effect of endurance exercise (which acts through modifying insulin resistance and serum RBP4) on RBP4 mRNA and its protein expression in different tissues. In addition, the fat we used to feed the rats in this experiment was tallow, which consists mainly of saturated (43%), monounsaturated (50%), and polyunsaturated fatty acids (5%) 16. The present findings showed that induction of diabetes caused a significant increase in bodyweight. In contrast, endurance training normalized bodyweight and improved the lipid profile. We believe that improvement of mitochondrial function and increased energy expenditure in different tissues is one of the mechanisms 17,18 by which the 7-week endurance exercise caused these changes in trained diabetic rats. We found that induction of diabetes significantly enhanced insulin resistance and serum RBP4. These increases were accompanied by increased RBP4 expression at both the transcriptional and translational levels in the liver, adipose tissues, and muscle. The increased serum level of RBP4 in diabetic rats might be a consequence of its increased mRNA and protein expression in both the liver and adipose tissues, with the highest expression level in the liver, followed by the adipose tissues. Thus, in the present study, the increased circulating RBP4 (10.21 in DC vs 3.52 in C, P < 0.001) could be mediated by increased RBP4 mRNA expression and its protein level mainly in the liver (Table 4). This finding is supported by a study reporting that RBP4 is predominantly expressed in the liver in rodents 19. However, our findings, which showed that adipose tissue is not the main organ producing RBP4 in an insulin-resistant state, are inconsistent with those showing that visceral fat is the major source of RBP4 production in the insulin-resistant state and type 2 diabetes 2,9,[20][21][22]. One possible explanation could be related to the rat model we developed in the present study.
Although this diabetic model closely resembled the metabolic characteristics of type 2 diabetes, the question remains as to whether the RBP4 tissue expression pattern in the insulin-resistant state resulting from induction of diabetes is still completely similar to that in real type 2 diabetes. Endurance exercise significantly improved insulin resistance, and this improvement was accompanied by dramatically reduced circulating RBP4 in trained diabetic rats compared with diabetic rats (10.22 vs 6.49, P < 0.001). These results support previous findings reporting interventions as the cause of reductions in serum RBP4 2,3,9,23, and suggest that a decrease in serum RBP4 could be one of the mechanisms underlying the improvement of insulin resistance with exercise 24. However, apart from RBP4, exercise also changes the serum levels of other adipokines, such as adiponectin and leptin, which influence insulin resistance 24. Measuring these adipokines alongside RBP4 would have given us more information in this regard; the findings presented here should therefore be considered with this limitation. The most important and novel finding of the present study was that the 7-week exercise program resulted in a significant reduction in RBP4 expression in the visceral adipose tissue of HFD-STZ-induced diabetic rats. This reduction was accompanied by a parallel decrease in RBP4 protein expression in this tissue and in serum RBP4 concentration. In contrast, endurance training was unable to change the RBP4 mRNA level and its protein expression in the liver of diabetic rats. Thus, the effect of exercise on RBP4 protein expression was specific to visceral fat tissue and did not occur in the liver. These findings suggest that decreased adipose tissue RBP4 expression after endurance training might cause a reduction in serum RBP4, which in turn might decrease insulin resistance through different reported pathways 24. It could be speculated that one mechanism by which endurance exercise reduced RBP4 expression in visceral fat tissue might be related to reduced adipocyte size and fat mass, which were not measured in the present study. Consistent with our suggestion, previous studies showed that weight loss and exercise led to reduced adipocyte size, which in turn is an important determinant of the secretion of several adipokines [25][26][27]. As the liver is the major tissue producing and secreting RBP4, it was expected that endurance training would decrease liver RBP4 expression; however, this effect was not observed in the present study. One possible explanation is that the time and duration of the exercise sessions might not have been sufficient to induce significant changes in liver RBP4 protein expression. Based on the literature, the training effect is tissue specific, and any adaptive response to exercise training depends on the degree of involvement of the tissue during exercise. As the liver is an inactive tissue during exercise, its involvement is limited to long-duration workouts, taking over 90 min of energy production through gluconeogenesis. Because our exercise sessions took 40 min, the liver might not have been influenced. In contrast, visceral tissue is very active during this kind of endurance exercise, in which the major substrate for energy supply is free fatty acid, with the two primary sources being visceral and subcutaneous fat tissues 28. Furthermore, recent reports have shown that elevated serum RBP4 is closely associated with fatty liver 29,30.
The increased plasma RBP4 level might be a consequence of fat accumulation in the liver resulting from the induction of diabetes and the continuation of the HFD in diabetic rats, and this period of exercise may not be enough to reverse this metabolic disturbance. Our suggestion is supported by Vieira et al. 31, who showed that 12 weeks of exercise could completely eliminate hepatic steatosis regardless of continuation of the HFD. Thus, it is possible that changes in liver RBP4 expression might emerge over a longer exercise period. In contrast to diabetic rats, training did not change the serum RBP4 concentration in trained non-diabetic rats. In other words, we showed that RBP4 levels decreased after exercise only in the rats with a higher baseline level of RBP4. This finding suggests that the influence of exercise on serum RBP4 concentration depends on its initial level. It is conceivable that the basal RBP4 level in trained diabetic rats, but not in trained animals, might be high enough to be changed by 7 weeks of exercise training. This is also supported by other studies showing that the effect of exercise seems to be more evident in those with high initial RBP4 2,32. In line with this evidence, in a previous study we also showed that the baseline level of RBP4 was the only predictor of the after-exercise plasma RBP4 concentration 33. Although our 7-week endurance training significantly improved insulin resistance and reduced serum RBP4, it did not fully reverse them. In addition, RBP4 mRNA and its protein level in the adipose tissue of trained diabetic rats did not return to the level of control rats. We suggest that the protective effect of exercise on adipose tissue RBP4 expression and serum RBP4 might be mediated through a reduction in visceral fat or a reduction in adipose tissue inflammation. It has been reported that serum RBP4 is significantly correlated with visceral fat mass and visceral adipose RBP4 expression 34. In addition, a decrease in serum RBP4 level achieved by exercise training predicts the improvement in insulin sensitivity 35. Thus, the degree of reduction of body fat mass and adiposity, which was not measured in the present study, might be important in this regard. We assume that a second mechanism by which exercise reduces adipose tissue RBP4 expression and serum RBP4 might involve improvement of the inflammation induced by diabetes and the HFD. This assumption is supported by evidence showing a close connection between RBP4 expression and its protein level, and inflammatory markers 36-41. In addition, recent studies have confirmed that the duration of the exercise intervention is a determinant factor in reducing the expression of inflammatory markers induced by a long-term HFD 10,42. We suggest that under the condition of an HFD, the impact of endurance training on RBP4 expression was moderate. Thus, a longer exercise program might be necessary to completely reverse the effect of continued consumption of the HFD and the induction of diabetes on RBP4 mRNA expression and its protein level in the visceral fat tissue of the diabetic rats in this experiment. The training intervention also resulted in a significant decrease in RBP4 mRNA levels in subcutaneous fat tissue and muscle. However, the degree of change in RBP4 mRNA expression was much greater in visceral fat than in other tissues in the trained diabetic group. In this regard, the reduction in RBP4 mRNA in subcutaneous fat tissue and muscle was not reflected at the RBP4 protein level in these tissues.
In other words, exercise induced muscle and subcutaneous tissues to decrease RBP4 mRNA expression without a decrease in its protein level. Several unknown regulatory mechanisms affecting the post-transcriptional processing of RBP4 might be the reason 43. Furthermore, according to previous reports, gene expression levels do not necessarily correspond to protein expression 44. Finally, the current study showed that 7-week endurance training was able to reduce insulin resistance and serum RBP4, confirming its potential effect on RBP4 mRNA and protein expression in visceral fat tissue. In conclusion, we found that the RBP4-reducing effect of endurance training is predominantly mediated by a reduction of RBP4 mRNA expression and its protein level in adipose tissue in diabetic rats induced by an HFD and streptozotocin. In fact, the beneficial effect of 7-week endurance training on insulin resistance and serum RBP4 might not be reflected by changes in RBP4 mRNA expression and its protein level in the liver.
Student-Centered Education: A Meta-Analysis of Its Effects on Non-Academic Achievements

Non-academic achievement refers to the positive learning quality, personality, and social adaptability students develop during the learning process, which is essential for growth and social development. Can popular student-centered education assume the responsibility of cultivating learners with excellent learning qualities and extraordinary social competencies, as expected by educators globally? Using a meta-analysis, we reviewed and summarized 65 effect sizes from 31 quantitative research papers on the impact of student-centered education on students' non-academic achievements, published from January 2010 to April 2021. The results showed that student-centered education had a positive impact on students' non-academic achievements. The impact was greatest at the secondary and higher education levels and over a 3-month experimental period; however, no significant differences were found across curricula, teaching models, teaching strategies, or learning forms. Our results confirm that student-centered education should be widely adopted and accepted in the long term, at the secondary and higher education levels.

Introduction
The concept of "midwifery" from the ancient Greek philosopher Socrates (Plato & Rouse, 2002) and the perspective of "teaching students according to their aptitude" from Confucius (Confucius, 2000) have been inspirational for teachers in advancing the role of students in education. Student-centered education is a teaching method in which students take a dominant position in the teaching process, influence the content, form, and progress of teaching, and learn independently under the guidance of teachers (Prince, 1992). This teaching method enables students to perform both independent and cooperative study, and it was advanced by the school of progressivism, as pioneered by Dewey (1916). This theory was also supported by humanistic psychologists such as Jean Piaget (a Swiss child psychologist), Lev Vygotsky (a Soviet psychologist), and Carl Rogers (an American psychologist). Piaget (1970) believed that teaching must follow the general stages of children's psychological and cognitive development and that children's level of cognitive maturity should be determined first, before their cognitive development. However, Vygotsky (1979) focused on the relationship between teaching and development and emphasized that teaching should be based on students' development. Later, Rogers (1969) redefined the role of teachers as learning facilitators for developing a "whole person" who would integrate emotion and cognition, and believed the center of teaching must be shifted from teachers to students. The focus should be on cultivating students' positive and active learning qualities, and developing healthy personalities that integrate value, attitude, and emotion. Therefore, a certain consensus has emerged regarding the shifting of the teaching center from the teacher to the student.

The counter to student-centered education is teacher-centered education. The latter is believed by many to have originated with Johann Friedrich Herbart.
Herbart (1806) did not neglect students' subjectivity, although he was mindful of the teacher's role. For example, he believed that textbooks should include content that arouses students' interest and motivation (Herbart, 1806). In the late 19th century, the Herbart School embraced the theory of teacher-centered teaching. Since then, the traditional education theory consisting of teacher-centered, teaching-material-centered, and classroom-centered instruction has dominated. In addition, Buber (2013) considers "I-Thou" a basic relationship. Based on this philosophical theory, the subject theory of pedagogy was called "double subjects," meaning that both teachers and students were the subjects, as reflected by "relation ontology." Opponents of student-centered education misunderstood student-centered as individualist and teacher-centered as socialist, which put the two ideologies into sharp opposition. They believed that promoting student-centered education tended to breed individualistic tendencies among students, which led to a weakening of their self-control, disruption of classroom discipline, and a decrease in teaching efficiency (Dewey, 1899). Both the teacher-centered and double-subjects principles ignore the fact that learning is constructed by students themselves. Information-based societies require lifelong learning, and good learning qualities and the distinctive characteristics of the learner are the impetus for the sustainable development of human life.

Student-centered education focuses more on the process of teaching and learning than on the outcome. It emphasizes the development of students' non-academic achievements, rather than the enhancement of their academic achievements. Students' non-academic achievements mainly include personal competencies and social competencies. The former comprise curiosity, interest, initiative, persistence, attention, creativity, motivation, and learning attitude, among others. The latter include cohesiveness, self-efficacy, interpersonal skills, team spirit, and adaptability. As a dependent variable, non-academic achievement is more difficult to detect quantitatively and operationalize; therefore, it has produced fewer relevant research outcomes compared with academic achievement. Most researchers have suggested that student-centered education can significantly improve students' non-academic achievements, based on existing research results. These include studies by Lencioni (2013), K. Öztürk and Akkaş (2013), and Ç. Öztürk and Korkmaz (2020), who conducted teaching experiments in different subjects using college, primary school, and middle school students, respectively. The results show that student-centered education can significantly enhance students' learning motivation. Ç. Öztürk and Korkmaz's (2020) experiments also referred to love, effectiveness, interest, trust, attitude, cooperation, and other non-academic achievements; each experimental group's improvement was significant. However, additional experimental research shows different results. X. L. Ma et al. (2016) conducted a flipped classroom experiment in a college computer course and obtained experimental data on autonomous learning ability and cooperation ability using the Learning and Study Skills Inventory. They found that when the course content was more logical and systematic, the concentration of students in the experimental group was significantly lower than that in the control group. In addition, after K.
Öztürk and Akkaş (2013) adopted the cooperative learning teaching strategy, the students' learning motivation decreased. Therefore, whether the student-centered teaching model has a positive impact on the improvement of learners' qualities needs to be further confirmed.

Furthermore, quantitative research based on small sample sizes and qualitative research based on subjective opinions cannot answer the question of the overall effect of student-centered education on students' non-academic achievements. Quantitative research can show the teaching effect accurately and objectively; however, choices of experimental object, teaching model, and teaching strategy are often restricted to a single factor, given the limitations on the experimenter's energy and resources. In contrast, even though general qualitative analysis has been based on many published experimental conclusions, it is too subjective to form a definite conclusion. For example, Ding (2005) summarized several researchers' arguments by way of an overview: "The results were mixed, and even if there was a boost, the effect might not seem obvious." For general qualitative analysis, this would be a vague but plausible conclusion.

Overall, there has been no definitive conclusion regarding the effect of student-centered education on students' non-academic achievements. This study used a meta-analysis of published data to analyze and evaluate the overall effect of student-centered education on non-academic achievements, and the effect of different moderators. Strong evidence regarding the need for and ways to implement student-centered education, and suggestions for researchers and teachers, are provided in this article. Three questions, which have perplexed educational researchers, educational policymakers, and frontline teachers, were posed in this study.

A meta-analysis was performed using data in the literature that met the inclusion criteria, and we calculated the effect sizes, drew funnel plots, examined funnel plot asymmetry, and performed a heterogeneity test, influence analysis, meta-regression, and subgroup analysis in the meta package (version 4.18-2) using R 4.1.0.

Search Strategy
A variety of resources (CNKI, CQVIP, Wanfang Data, and EBSCO in the e-library of Zhejiang Normal University (http://lib.zjnu.edu.cn); Google Scholar (https://scholar.google.com)) were used to find quantitative papers and dissertations on the effects of student-centered education on students' achievements, published from January 2010 to April 2021. When choosing keywords, we considered that student-centered education is usually implemented through cooperative learning, a flipped classroom, autonomous learning, or experiential learning. Based on these conditions, the keywords included: (subjectivity); (teaching); (empirical research); (cooperative learning); (group study); (mutual study); (flipped classroom); (autonomous learning); and (experiential learning). First, the keywords were used to index literature in the corresponding databases, and quantitative research papers consistent with the subject were downloaded. Further, when a document could not be downloaded in full from these databases, academic help-seeking methods were used in supplementary retrieval. Finally, the method of literature backtracking was used for secondary supplementary retrieval.
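The analysis pipeline described above (SMD effect sizes under a random-effects model, a funnel plot, and Egger's regression, all in the meta package) might be set up roughly as in the following sketch; the data frame stands in for the authors' coding sheet, and all numbers are made up purely for illustration:

```r
library(meta)  # the study used meta 4.18-2 under R 4.1.0

# Hypothetical coding sheet: one row per effect size.
dat <- data.frame(
  study  = c("A", "B", "C"),
  n.e    = c(40, 55, 32), mean.e = c(4.1, 4.3, 3.9), sd.e = c(0.8, 0.9, 1.0),
  n.c    = c(38, 52, 30), mean.c = c(3.4, 3.7, 3.3), sd.c = c(0.9, 0.8, 1.1),
  year   = c(2012, 2016, 2020),
  design = c("comparison", "comparison", "pre-post")
)

# SMD (Hedges' g) under a random-effects model; Q and I^2 for the
# heterogeneity test are reported by summary().
m <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c,
              data = dat, studlab = study, sm = "SMD",
              comb.fixed = FALSE, comb.random = TRUE)
summary(m)

funnel(m)  # funnel plot for visual inspection of publication bias
# Egger's regression test; k.min is lowered only because this toy
# example has 3 studies (the default requires 10).
metabias(m, method.bias = "linreg", k.min = 3)
```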
Criteria for the Selection of Studies
After the papers were retrieved, they were re-screened according to the following inclusion criteria: (1) the content of the paper must conform to the subject of student-centered education's impact on students' non-academic achievements; (2) the paper must adopt a quantitative research method; (3) the paper must have been published between January 2010 and April 2021; (4) the teaching experiment in the paper must have been conducted in school education; and (5) data such as the mean values of a dependent variable measured in an experiment, sample sizes, and standard deviations or standard errors must have been provided. According to the nine keywords, 8,579 papers were obtained preliminarily. After screening for duplicates, 1,360 papers remained. Subsequently, after excluding full-text literature without relevant outcomes, lacking experimental data, or with inadequate sample sizes, 134 full-text papers were retained. Finally, after strict re-screening according to the inclusion criteria, 31 papers containing valid data for meta-analysis were used in this study (Figure 1). Some of the included papers contain multiple effects, and a total of 65 effect sizes were available for this study (Supplemental Appendix 1).

Data Management and Coding
The coding in this study was generated by combining the precoding of two researchers, and inconsistent precoding was addressed through deliberation and verification by the two researchers. Documents that met the inclusion criteria were encoded in terms of documents, dependent variables, and moderating variables, all in the form of characters. Papers were numbered; the dependent variable was students' non-academic achievements, and the moderating variables included methodological features (year of publication and study design) and substantive features (classification of non-academic achievements, curriculum, education level, experimental period, teaching model, teaching strategy, and autonomous learning form). The basic information of each study included author, year, sample size of the experimental group, sample size of the control group, means and standard deviations from the pre-test/post-test of the experimental group, means and standard deviations from the pre-test/post-test of the control group, t-values, and p-values. Study design was coded as comparison group design or one-group pre-post design; classifications of non-academic achievements were coded as personal competencies or social competencies; education level was coded as primary, secondary, or higher; the experimental period was coded as fewer than 3 months, 3 to 6 months, or more than 6 months; teaching model was divided into offline (physical classrooms) and online-offline (mixed virtual and physical classrooms); and teaching strategy and autonomous learning form were classified based on Pei's (2000) report. The teaching strategies included subject participation (autonomous learning, flipped classroom, role-playing, visiting, and drama performance) and cooperative learning (group cooperation and collective cooperation), and the autonomous learning forms were coded as inquiry learning (an autonomous learning process that engages students to acquire knowledge through exploration and high-level questioning) and receptive learning (an autonomous learning process in which students acquire knowledge by reading and listening). The codes are as follows: Study design: 1 = comparison group design; 2 = one-group pre-post design. Classification of non-academic achievements: 1 =
personal competencies; 2 = social competencies.

Selection of Effect Value
The standardized mean difference (SMD) was used as the effect size to evaluate the effects of student-centered education, and those of the various moderating variables, on students' non-academic achievements. Following Cohen (1992), the effect size based on the SMD was interpreted such that an SMD of approximately 0.2, 0.5, or 0.8 is accepted as small, medium, or large, respectively (cited in Strelan et al., 2020). Differences were considered statistically significant at p < .05.

Overall Effect Size, Heterogeneity Test, and Influence Analysis
It was hypothesized that the 65 effect sizes differed in many aspects (e.g., curriculum, education level, and experimental period). The Q-test and I²-statistics confirmed the hypothesis and showed that there are significant differences between these studies (Q = 22,791.00, df = 64, p < .0001), and that the heterogeneity is due to a real difference in the effect values (I² = 99.7%); only 0.3% was due to error. This difference might be due to factors such as publication time, experimental cycle, and the uneven basic level of the items tested in the selected samples. Therefore, a random-effects model was used to evaluate the effect of student-centered education on achievements (Hedges & Vevea, 1998).

Based on the random-effects model, the overall effect size of student-centered education on students' non-academic achievements and related data were obtained, as shown in Table 1. The overall effect size was 0.7158, which is a large and significant effect (p < .001). This indicates that student-centered education could significantly improve students' non-academic achievement levels.

An influence analysis was used to test the stability of the overall effect size. When the included effect sizes were omitted one by one, the significance of the overall effect size (mean SMD ranged from 0.683 to 0.7321) did not disappear (all p < .001), indicating that there was no extreme effect size among the included studies.

To explore the reasons for variations in effect sizes, the following sections introduce key methodological and substantive features of the studies as moderators.

Methodological Features of the Studies
To determine the reasons for the heterogeneity in this collection of effect sizes, two key methodological features were examined: year of publication and study design.

Year of Publication. A meta-regression was used to analyze the effect of the year of publication on the 65 collected effect sizes from January 2010 to April 2021. The findings indicate that the year of publication was not a regulatory factor for the overall effect size. There was no significant difference in the impact of student-centered education on learners' non-academic achievements across publication years (r = −.0132, intercept = 27.3136, Q = 0.5618, df = 1, p > .05).

Study Design. In terms of effect sizes, there were 57 effect sizes in 31 papers from the comparison group design; the remaining eight effect sizes (e.g., AbuSeileek, 2012; Cheng, 2016; Lencioni, 2013; Liu, 2013; L. J. Ma, 2013; Xie & Zhou, 2015) were from a one-group pre-post design. As indicated in Table 2a, the effect size of the comparison group design (SMD = 0.7113) was similar to that of the one-group pre-post design (SMD = 0.7425), and they were not significantly heterogeneous (Q = 0.01, df = 1, p > .05).
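Continuing the hypothetical object `m` from the earlier sketch, the meta-regression on publication year and a subgroup comparison such as the study-design contrast above might look like this (again a sketch, not the authors' actual script; `year` and `design` are columns of the illustrative coding sheet):

```r
# Meta-regression of effect size on publication year (continuous moderator).
metareg(m, ~ year)

# Subgroup analysis by study design; meta 4.x uses the `byvar` argument
# (renamed `subgroup` in later releases). The output includes within-group
# pooled SMDs and a between-group Q test, as reported in Table 2a.
m_design <- update(m, byvar = design)
summary(m_design)
```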
Publication Bias
Publication bias can have a great impact on the results of a meta-analysis, making the conclusions biased or even wrong. We used funnel plots and Egger's regression to evaluate publication bias in this study. As shown in Figure 2, the plot showed an inverted funnel-shaped symmetry, with scattered points evenly distributed on both sides of the mean effect value, indicating small publication bias. The lower distribution of points at the bottom of the funnel plot was due to studies with small sample sizes and high dispersion that were not retrieved or published. Egger's regression showed that there was no publication bias in this study (t = 1.8311, df = 63, p > .05).

Classification of Non-Academic Achievements. The 65 effect sizes were classified into two non-academic achievement categories: personal competencies (SMD = 0.6160) and social competencies (SMD = 0.8456). The findings indicated that the effect sizes did not differ between the non-academic achievement categories (Q = 2.25, df = 1, p > .05; Table 2b).

Curriculum. The SMD values for the curricula of the natural sciences and the humanities and social sciences were essentially the same, 0.7900 and 0.6469, respectively, and there was no significant difference between them (Q = 1.08, df = 1, p > .05; Table 2c).

Education Level. Table 2d summarizes the results based on education level. The SMD values for secondary education, higher education, and primary education were 0.9776, 0.7169, and 0.2488, respectively. There was a significant difference between primary education and secondary and higher education in terms of students' non-academic achievements (Q = 18.14, df = 2, p < .001; Table 2d). In addition, the effect at the primary education level differed significantly from that at the secondary education level (Q = 13.81, df = 1, p < .001) and the higher education level (Q = 11.75, df = 1, p < .001), whereas there was no significant difference between secondary and higher education (Q = 1.94, df = 1, p > .05).

Experimental Period. Table 2e shows that the maximum SMD was for experimental periods of more than 6 months (SMD = 1.0819), while the SMDs for 3 to 6 months and fewer than 3 months were 0.6292 and 0.7221, respectively. The influence on students' non-academic achievements differed significantly between periods of more than 6 months and periods of 6 months or less (Q = 8.36, df = 2, p < .05).

Teaching Model. The SMDs of the offline and online-offline teaching models were essentially the same, 0.7048 and 0.7340, respectively. The result of the heterogeneity test was not significant (Q = 0.04, df = 1, p > .05; Table 2f).

Teaching Strategy. The SMDs of cooperative learning and subject participation were 0.7627 and 0.6896, respectively. The influence of different teaching strategies on students' non-academic achievements was not significant (Q = 0.18, df = 1, p > .05; Table 2g).

Autonomous Learning Form. The SMDs of inquiry learning and receptive learning were 0.8072 and 0.5939, respectively, and there was no significant difference between them (Q = 0.18, df = 1, p > .05; Table 2h).

Discussion
The results of our meta-analysis show that student-centered education has a large impact on students' non-academic achievements, meaning that it could significantly improve students' non-academic achievement. This result was consistent with the findings of the included studies (e.g., Cui & Fang, 2018; Jdaitawi, 2021; Tao, 2013; Xu, 2012; L. J.
Zhang, 2010); however, these intervention experiments have some disadvantages, such as small sample sizes, single impact factors, and one-sided results, because of the limitations of cost and the experimenter's energy. The meta-analysis method, based on a large sample, overcame these shortcomings and produced more credible and referable results. The effect of student-centered education on academic achievements (SMD = 0.5446) was also positive (Li et al., 2021); however, its effect size was lower than that on non-academic achievements (SMD = 0.7158). We observed that student-centered education had positive significance for the cultivation of personal and social competencies. Students could improve their non-academic achievement through the experience of being the teaching subject, allowing them to adapt to the challenges of study and life. Additionally, differences in experimental design and in the classification of achievements did not alter the positive impacts of student-centered education, indicating that the combination of these data in this study was appropriate and did not bias the final results.

The positive effect of student-centered education on non-academic achievements at the secondary and higher education stages was significantly greater than that at the primary education stage. Student-centered education requires a high degree of autonomy, learning ability, team spirit, and participation. The intellectual and psychological level of students in secondary and higher education has developed to the stage of formal operations (Piaget, 1970), which allows for stronger learning ability and autonomy compared to students in primary education; moreover, they are better able to adapt to and implement student-centered education. Lu (2019) conducted student-centered education in social courses for 13 weeks and found that students in higher education improved significantly in interest and initiative, communication and interpersonal relationships, self-efficacy, logic, and theoretical thinking. However, Bursa and Kose's (2020) 10-week student-centered education program in social science courses in primary schools did not significantly improve students' non-academic achievements. Through a comparison of the data, we found that the experimental period at the secondary and higher education levels generally lasted 3 months or more, while at the primary education level the study period was generally shorter than 3 months. The length of the experimental period might be limited by the degree of acceptance among different students and other practical difficulties, but it might also indirectly affect the impact of this regulatory variable (education level).
The longer the experimental period, the greater the influence on students' non-academic achievements. Most of the effects for periods of more than 6 months were significantly positive (e.g., AbuSeileek, 2012; Liu, 2013; Lu, 2019), differing from those with periods of fewer than 6 months (e.g., Bursa & Kose, 2020; Hava, 2021). An intercomparison of the three groups showed that there was no significant difference between the effects of fewer than 3 months and those of 3 to 6 months, but there was a significant difference between the effects of 3 to 6 months and those of more than 6 months. The results indicated that student-centered education had the maximum positive effect on students' non-academic achievements when the experimental period was longer than 6 months. They also showed that student-centered education needs to be implemented consistently over a long time and that the effect on students' non-academic achievements increased with the length of the experimental period (the longest experimental period included in this study was 18 months). The reasons may include the following two aspects. On the one hand, there is the factor of student adaptability. It is difficult for students to adapt to changes in teaching methods, from traditional teacher-centered to student-centered, in a short time. When students' subjectivity has not been brought into full play, the results of the experiment are affected. The effect of student-centered education can be displayed once students have adapted to the new teaching model. On the other hand, there is the consideration of teaching method suitability. Only when the teaching mode is suitable for students and teachers can its results be seen. If we force teachers to use the methods of student-centered education, the results are unlikely to be better (and may be even worse) than when they use traditional methods, since they do not trust student-centered methods and may therefore unconsciously sabotage positive effects. Conversely, if students' non-academic achievements showed a downward trend because of boredom and confusion with the progress of the experiment, student-centered education would not be suitable for daily teaching. One practical limitation of this finding is that the courses students take in high school and college generally last less than 6 months.

There were no significant differences in the degree of positive effects of student-centered education on students' non-academic achievements by subject, teaching model, teaching strategy, or learning form; however, all had medium or large positive effects on students' non-academic achievements. Student-centered education could promote non-academic achievement in all subjects. Although the effect of the teaching model on non-academic achievements was not significant, the SMD value of mixed online and offline learning was larger than that of offline learning. Thus, it is necessary to make rational use of the Internet to cultivate students' learning autonomy.
Subject participation mainly included flipped classrooms, autonomous learning, and experiential learning. Cooperative learning included group study and mutual study. However, there was no significant difference between these forms of participation in improving students' non-academic accomplishments. Compared with the subject-participation strategy, the traditional teaching strategy generally focuses on lecture and demonstration. Although the advantage of subject-participation teaching lies in fully drawing out students' initiative, the type of teaching strategy should be chosen based on the teaching content and the actual teaching conditions.

Inquiry learning involves helping students find problems independently and solve them cooperatively, which can stimulate students' internal motivation, such as the desire to explore and a sense of competence. Although there was no significant difference between inquiry and receptive learning in this study, the SMD value for inquiry learning was larger than that for receptive learning. This pattern is more pronounced in individual studies. For example, a teaching experiment by J. Wang (2021) showed that inquiry learning could significantly improve learners' non-academic achievements, while some teaching experiments using receptive learning had no significant effect on non-academic achievements (e.g., Altas & Mede, 2021; Shi & Ji, 2010). By contrast, the student-centered teaching model had different effects on academic achievement under different teaching models, teaching strategies, and learning forms (Li et al., 2021). This indicates that the development of students' non-academic achievements may not depend on the teaching models and strategies adopted. If the teaching process adheres to the concept of the student-centered model and selects appropriate teaching media, teaching strategies, and learning forms, students' non-academic achievements will improve.

Conclusions and Contributions

Based on our analysis, student-centered education had a positive effect on students' non-academic achievements, and this effect did not diverge across experimental designs and achievement classifications. Moreover, positive effects on students' non-academic achievements appeared across different curricula, education phases, experimental periods, teaching models, teaching strategies, and learning forms. Therefore, this study's findings support student-centered education as an effective teaching tool for the development of students' competencies. Based on these findings, we make the following recommendations. First, student-centered education should be widely adopted in secondary and higher education. Student-centered education highlights students' subjectivity, which is in line with the inner needs of secondary and higher education students in their pursuit of individuality and social interaction. Students build self-cognition while exercising their subjectivity, strengthening their positive values. Once a positive trend is established, it promotes both learning and teaching. When implementing student-centered teaching in primary education, teachers must provide more help and arrange activities suited to students' mental development to ensure their participation and learning confidence.
Second, student-centered education should be normalized and encouraged. It is in line with the human instinct to explore the unknown. Finding and solving problems from the standpoint of the subject, using scientific research methods, is the premise of exploring the unknown. Teaching that follows the logic of scientific research helps students develop skepticism and a scientific spirit, which best reflects people's subjective initiative. Additionally, children and adolescents have a strong curiosity about the unknown, which is the intrinsic motivation for students' subjective learning (Dewey, 1910). For these two reasons, student-centered education has become an educational idea that corresponds to students' nature of seeking knowledge independently. Encouraging students' subjectivity and individual personalities will improve their innovation (and therefore society's innovation), a powerful lever for national competitiveness.

Third, diversified teaching strategies and learning forms should be adopted to implement student-centered education. Given that all the teaching strategies and learning forms of student-centered education showed positive effects, with no significant differences among them in their effect on students' non-academic achievements, the key is to adopt strategies reasonably according to teaching needs and to guide students in selecting appropriate learning methods, without excessive concern about these moderating factors. Three points should be considered when choosing teaching strategies and learning methods: (1) suitability for the teaching content, (2) a match with students' level of mental development, and (3) integration with the teaching model.

Fourth, education administrations should provide more training opportunities for teachers. The key to implementing student-centered education lies with teachers. Based on the principle of "congruence" described by Rogers (1975), teachers need to be authentic and genuine during the teaching process. Whether teachers have a firm grasp of student-centered education, awareness of their role as leaders and helpers, and solid professional knowledge of student-centered teaching will directly affect its process and results. Teachers should not be forced to use student-centered teaching strategies until they fully appreciate the value of student-centered education. In addition, the teaching strategies, teaching aids, and organizational forms of student-centered education are constantly developing. Relevant training should be arranged at regular intervals to enrich teachers' pedagogical knowledge and strengthen their teaching confidence, enabling them to adhere to student-centered strategies and thereby implement student-centered education firmly.

Limitations

This study used meta-analysis to obtain the overall effect of student-centered education and the influence of substantive characteristics on students' non-academic achievements. The results are comprehensive and of reference value. However, three deficiencies remain. First, regional bias was unavoidable. Several important studies may have been omitted because of limitations in the search strategy and database access; for example, few papers from Europe and the U.S. were retrieved, which may introduce some bias.
Second, because technological and teaching support resources differ across countries (Sahonero & Calderon, 2018), implementing student-centered educational strategies may produce different teaching effects. Third, the data in several studies are highly dispersed (e.g., Capodieci et al., 2019; Hava, 2021), possibly because of relatively poor control of extraneous variables or insufficient experimental duration during the intervention. These limitations should be addressed in future research to reach firmer conclusions about student-centered education.

Figure 1. Literature search process with number of records considered.

Table 1. Overall Effect Size, Test of Mean, and Heterogeneity Test.

Table 2. Effect Sizes of Student-Centered Education in Study Design, Classification of Non-Academic Achievements, Curriculum, Education Level, Experimental Period, Teaching
Evidence for Disk Truncation at Low Accretion States of the Black Hole Binary MAXI J1820+070 Observed by NuSTAR and XMM-Newton

We present results from NuSTAR and XMM-Newton observations of the new black hole X-ray binary MAXI J1820+070 at low accretion rates (below 1% of the Eddington luminosity). We detect a narrow Fe K$\alpha$ emission line, in contrast to the broad and asymmetric Fe K$\alpha$ line profiles commonly present in black hole binaries at high accretion rates. The narrow line, with weak relativistic broadening, indicates that the Fe K$\alpha$ line is produced at a large disk radius. Fitting with disk reflection models assuming standard disk emissivity finds a large disk truncation radius (a few tens to a few hundreds of gravitational radii, depending on the disk inclination). In addition, we detect a quasi-periodic oscillation (QPO) varying in frequency between $11.6\pm0.2$~mHz and $2.8\pm0.1$~mHz. The very low QPO frequencies suggest a large size for the optically-thin Comptonization region according to the Lense-Thirring precession model, supporting that the accretion disk recedes from the ISCO and is replaced by advection-dominated accretion flow at low accretion rates. We also discuss the possibility of an alternative accretion geometry in which the narrow Fe K$\alpha$ line is produced by a lamppost corona with a large height illuminating the disk.

INTRODUCTION

Low-mass black hole X-ray binaries contain stellar-remnant black holes accreting from donor stars with a mass of < 1 M⊙ that transfer mass by Roche lobe overflow. Most known Galactic black hole X-ray binaries are low-mass X-ray binaries (LMXBs) and are discovered as transients. These systems exhibit recurrent outbursts on year to decade timescales, during which their flux increases by several orders of magnitude in the optical and X-ray bands (e.g., Corral-Santana et al. 2016). During a typical outburst, lasting a few months, a black hole binary transitions between different X-ray spectral states and displays distinct X-ray spectral and timing properties (see Remillard & McClintock 2006, for a review).

MAXI J1820+070 (=ASASSN-18ey) is a new transient black hole X-ray binary discovered in 2018. The outburst was first reported in the optical by the All-Sky Automated Survey for SuperNovae on UT 2018 March 06.58 (ASAS-SN; Tucker et al. 2018), and subsequently in the X-ray band by MAXI a week later (Kawamuro et al. 2018). The source reached a peak X-ray luminosity (2-20 keV) of about 2 Crab, becoming one of the brightest X-ray novae ever discovered. Its outburst was well covered by multi-wavelength observations from the radio to the gamma-ray band (e.g., Bright et al. 2018; Shidatsu et al. 2018; Paice et al. 2019; Hoang et al. 2019). The distance of MAXI J1820+070 is estimated as 5.1 ± 2.7 kpc or 4.4 ± 2.4 kpc (Atri et al. 2020), and 3.46 (+2.18/−1.03) kpc, based on different distance priors applied to the Gaia DR2 parallax (Gaia Collaboration et al. 2018). From radio parallax, the distance is measured to be 2.96 ± 0.03 kpc (Atri et al. 2020). MAXI J1820+070 was recently dynamically confirmed as a black hole binary (BHB) accreting from a K3-5 type donor star in a 0.68 day orbit, and the mass estimate of the central black hole is 7-8 M⊙ (Torres et al. 2019). Due to its high X-ray flux, MAXI J1820+070 is an ideal new target for studying the properties of the inner accretion flow around black holes via X-ray spectral and timing analyses.
X-ray timing analysis of NICER observations provides clues about the evolution of the coronal geometry (Kara et al. 2019). NuSTAR observations of MAXI J1820+070 during the bright phases of the hard state display relativistic disk reflection features, including a broad Fe Kα line whose profile is nearly invariant over multiple observations at different fluxes. Detailed modeling of the reflection spectra reveals that the inner edge of the accretion disk remains stable at about 5 gravitational radii, which indicates that the central black hole is likely to have a low spin (Bharali et al. 2019; Buisson et al. 2019). In addition, a low-frequency QPO varying in frequency is detected in MAXI J1820+070 in both the optical and X-ray bands (e.g., Yu et al. 2018; Buisson et al. 2019).

Black hole accretion is generally studied in two regimes: cold accretion flows of optically-thick material at high mass accretion rates, and optically-thin hot accretion flows at low accretion rates. The latter is thought to contain an advection-dominated accretion flow (ADAF and its variants; see Yuan & Narayan 2014 for a review). The ADAF model explains the hard X-ray emission from black holes accreting at low accretion rates, e.g., BHBs in quiescence and low/hard states, low-luminosity AGNs (LLAGNs), and also Sgr A*. In BHBs, the accretion geometry is deduced to be an optically-thick accretion disk at large disk radii, which evaporates and is replaced by an ADAF close to the black hole (e.g., Esin et al. 1997). The transitional radius, or the truncation radius of the optically-thick accretion disk, is predicted to increase with decreasing accretion rate (e.g., Meyer et al. 2000; Taam et al. 2012). Observationally, the inner radius of the accretion disk in BHBs can be measured by modeling the strength and temperature of the thermal disk emission component (Zhang et al. 1997), or by modeling the profile of the Fe Kα emission line that originates from reflection of hard X-ray photons from the corona by the accretion disk (Fabian et al. 1989). The line profile becomes broad and asymmetric, with an extended red wing originating from the vicinity of the black hole due to the combined effects of Doppler shift, relativistic beaming, and gravitational redshift (see Miller 2007, for a review). Recent observations of several bright BHBs by NuSTAR reveal clearly broad Fe Kα lines, indicating that the accretion disk extends all the way down to the ISCO in the bright hard state (BHS) (e.g., Miller et al. 2015; Xu et al. 2018a,b; Buisson et al. 2019). In the low hard state (LHS), the detection of a narrow Fe Kα line, the evidence for disk truncation, is usually hindered by the faintness of the targets, the weakness of the reflection features, and the limited instrumental spectral resolution. So far, narrow Fe Kα emission has only been convincingly detected in the BHB candidate GX 339-4, from a long Suzaku observation taken in 2008 (Tomsick et al. 2009).

OBSERVATION AND DATA REDUCTION

We triggered NuSTAR (Harrison et al. 2013) and XMM-Newton (Jansen et al. 2001) observations of MAXI J1820+070 during the second rebrightening period after the end of its 2018 main outburst (Xu et al. 2019). During this rebrightening period, the source stayed in the LHS at a low accretion rate (below 1% of the Eddington limit), but reached an X-ray flux level high enough to enable the detection of the Fe Kα emission line. We show the times of the NuSTAR and XMM-Newton observations on the Swift/BAT (Barthelmy et al. 2005) and XRT (Burrows et al.
2005) monitoring light curves of MAXI J1820+070 in Figure 1. The Swift/BAT light curve was obtained from the Swift/BAT Hard X-ray Transient Monitor (Krimm et al. 2013). We reduced the Swift/XRT data using xrtpipeline v.0.13.5 with CALDB v20190910. We extracted source spectra from a circular region with a radius of 60″, and the background was extracted from an annulus with inner and outer radii of 160″ and 300″. The 2-10 keV X-ray flux measured by XRT was estimated by fitting the spectra in the 0.3-10 keV energy range with an absorbed power-law model.

NuSTAR

NuSTAR observed MAXI J1820+070 on August 26, 2019, starting from UT 07:16:09 (ObsID: 90501337002). We reduced the data using the NuSTARDAS pipeline v.1.8.0 and CALDB v20191008. The source spectra were extracted from a circular region with a radius of 180″ for each of the two focal plane modules (FPMA and FPMB). Corresponding background spectra were extracted using polygonal regions from source-free areas. We also extracted spectra from mode 06 data to maximize the available exposure time, following the method outlined in Walton et al. (2016). The resulting exposure times are 48.6 ks and 48.4 ks for FPMA and FPMB, respectively. We coadded the FPMA and FPMB spectra using the addspec tool in HEASOFT. The NuSTAR spectra were grouped to have a signal-to-noise ratio (S/N) of at least 10 per bin. We applied barycenter corrections to the event files, transferring the photon arrival times to the barycenter of the solar system using the JPL Planetary Ephemeris DE-200, and extracted the source light curve from the same region as the energy spectra.

XMM-Newton

The XMM-Newton observation of MAXI J1820+070 (ObsID: 0851181301) started on September 20, 2019, at UT 22:45:46. Data reduction was performed using the XMM-Newton Science Analysis System v17.0.0 following standard procedures. EPIC-MOS1, MOS2, and EPIC-pn operated in timing mode, but EPIC-MOS1 experienced a full scientific buffer during the whole observation. EPIC-pn (Strüder et al. 2001) is the prime instrument we use, due to its large effective area in the Fe K band. The net exposure time of EPIC-pn is 56 ks after filtering out periods of high background flaring activity. The data are free from pile-up effects at a mean count rate of ∼20 counts s⁻¹. We selected events with pattern ≤ 4 (singles and doubles) and quality flag = 0. The source spectrum and light curve were extracted from the columns 27 ≤ RAWX ≤ 47, and the corresponding background was extracted from 58 ≤ RAWX ≤ 60. We used rmfgen and arfgen to generate the redistribution matrix files and ancillary response files. We grouped the EPIC-pn spectrum to a minimum S/N of 10 for spectral modeling. The source light curve was barycenter corrected using the barycen tool and corrected for instrumental effects by epiclccorr.

SPECTRAL ANALYSIS

In this work, we perform all spectral modeling in XSPEC v12.10.1f (Arnaud 1996). We use the cross-sections from Verner et al. (1996) and abundances from Wilms et al. (2000) in the TBabs neutral absorption model. All uncertainties are reported at the 90% confidence level unless otherwise stated. We fit the NuSTAR spectra over 3.5-79 keV and the XMM-Newton EPIC-pn spectrum over 0.6-10 keV. We first model the NuSTAR (=EPOCH 1) and XMM-Newton (=EPOCH 2) spectra with an absorbed power-law model, TBabs * powerlaw, in XSPEC. The NuSTAR and XMM-Newton observations were taken in two epochs separated by 25 days; thus we fit the spectra individually with no linked parameters.
We freeze the absorption column density, N_H, at zero when modeling the NuSTAR spectra, as the small amount of absorption towards MAXI J1820+070 does not affect energies above 3 keV. The absorbed power-law model leaves systematic residuals, with reduced chi-squared values of χ²/ν (EPOCH 1) = 1557.1/941 = 1.65 and χ²/ν (EPOCH 2) = 1248.3/1127 = 1.11. Reflection features are evident in the spectral residuals (see Figure 2, panels (b) and (e)): an Fe Kα emission line is clearly detected by both NuSTAR and XMM-Newton, centered around 6.4-6.5 keV, and a Compton reflection hump is evident in the NuSTAR spectra, peaking around 30 keV. The prominent Compton reflection hump confirms that the Fe Kα line originates from reflection by cold, optically-thick gas (e.g., Lightman & White 1988). Accounting for the Fe Kα line with a Gaussian model improves the fit by Δχ² = 384 for the NuSTAR spectrum and Δχ² = 88 for XMM-Newton. The best-fit Gaussian model indicates that the Fe Kα line is narrow, with a line width of σ(EPOCH 1) = 0.29 +0.05/−0.04 keV measured by NuSTAR and σ(EPOCH 2) = 0.24 +0.09/−0.06 keV measured by XMM-Newton; the equivalent width (EW) is constrained to be 64 ± 7 eV and 98 +20/−21 eV, respectively (see Table 1 for details). The line width and EW are significantly smaller than those measured with XMM-Newton EPIC-pn during the BHS (σ(BHS) ≈ 0.9 keV and EW(BHS) ≈ 270 ± 30 eV; Kajava et al. 2019). The narrow and symmetric line profile we observe here indicates weak relativistic effects, which is direct evidence for the Fe Kα line being produced far from the central black hole.

In order to model the reflection features physically, and to obtain a constraint on the disk truncation radius, we fit the spectra with the relxill relativistic disk reflection model (relxill v1.3.3; Dauser et al. 2014; García et al. 2014). We fix the disk emissivity indices, q_in and q_out, at 3, the value expected for the outer part of a Shakura & Sunyaev disk (Laor 1991; Dauser et al. 2013). This is a reasonable assumption, considering that the reflection features are produced at a large distance from the black hole; the emissivity indices cannot be constrained if left free. We fix the black hole spin, a*, at the default value of 0.998. This parameter is irrelevant here, as the inner disk is truncated outside of the ISCO. The outer disk radius, R_out, is fixed at the maximum value of the model, 1000 r_g (r_g ≡ GM/c² is the gravitational radius). As the majority of the X-ray flux comes from the inner part of the accretion disk, the spectral modeling is not sensitive to R_out. The relxill model includes a coronal illuminating continuum in the shape of a power law with an exponential cutoff at high energies, parameterized by the power-law index, Γ, and the high-energy cutoff, E_cut. Other free model parameters are the inner disk radius, R_in, the disk inclination, i, the ionization parameter, ξ, the iron abundance, A_Fe, and the reflection fraction, R_ref. The model, TBabs * relxill in XSPEC notation, describes the data well, leaving no systematic structures in the residuals, with χ²/ν (EPOCH 1) = 1033.8/936 = 1.10 and χ²/ν (EPOCH 2) = 1135.4/1122 = 1.01 (see Figure 2(c) and (f)).

Table 1 (excerpt), with values listed for EPOCH 1 and EPOCH 2:
F(2-10 keV) (erg cm⁻² s⁻¹): 5.0 × 10⁻¹⁰ | 3.9 × 10⁻¹¹
L(0.1-100 keV) (erg s⁻¹): 2.6 × 10³⁶ | 1.9 × 10³⁵
L/L_Edd (%): 0.25 | 0.018
Note. Fixed parameters are marked with the superscript f. In this work, we adopt the distance estimate of 3 kpc and black hole mass estimate of 8 M⊙ when calculating the source luminosity and Eddington ratio of MAXI J1820+070.
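As an illustration of how the relxill fit described above can be scripted, the following sketch uses PyXspec, the Python interface to XSPEC. It is a hedged example rather than the paper's actual analysis script: the spectrum file name and the relxill installation path are hypothetical, relxill must be installed separately as a local model, and parameter names and indices should be verified against the installed model version.

from xspec import AllData, AllModels, Model, Fit, Xset

Xset.abund = "wilm"          # Wilms et al. (2000) abundances
Xset.xsect = "vern"          # Verner et al. (1996) cross-sections

AllModels.lmod("relxill", dirPath="/path/to/relxill")  # hypothetical install path
AllData("nustar_fpm.pha")                              # hypothetical spectrum file
AllData.ignore("**-3.5 79.0-**")                       # fit the 3.5-79 keV band

m = Model("TBabs*relxill")
m.TBabs.nH = 0.0                      # absorption negligible above 3 keV
m.TBabs.nH.frozen = True
m.relxill.Index1 = 3.0                # emissivity indices fixed at q = 3
m.relxill.Index1.frozen = True
m.relxill.Index2 = 3.0
m.relxill.Index2.frozen = True
m.relxill.a = 0.998                   # spin irrelevant for a truncated disk
m.relxill.a.frozen = True
m.relxill.Rout = 1000.0               # maximum outer radius of the model
m.relxill.Rout.frozen = True

Fit.statMethod = "chi"
Fit.perform()
Fit.error("2.706 7")   # 90% CI on, e.g., R_in (parameter index depends on version)
print(Fit.statistic, Fit.dof)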
We measure a low absorption column density, N_H = (1.03 ± 0.04) × 10²¹ cm⁻², with XMM-Newton, consistent with the value obtained early in the outburst (Shidatsu et al. 2018; Kajava et al. 2019). The spectral continuum is well described by a power law with a hard photon index of Γ ≈ 1.6-1.8, with no prominent high-energy cutoff required (see Table 1 for best-fit parameters). The observed 2-10 keV flux of MAXI J1820+070 decreased from 5.0 × 10⁻¹⁰ erg cm⁻² s⁻¹ to 3.9 × 10⁻¹¹ erg cm⁻² s⁻¹ between EPOCH 1 and EPOCH 2, corresponding to a change in Eddington ratio from 0.35% to 0.026%. Adding an extra thermal disk component modeled by diskbb does not improve the fit, implying that thermal emission from the accretion disk is either too weak to be detected, or the disk is sufficiently cool that the peak of the disk blackbody distribution moves below the XMM-Newton band.

Figure 3. Confidence contours of the inner disk radius, R_in, and the disk inclination, i; 1σ, 2σ, and 3σ correspond to Δχ² values of 1, 4, and 9, respectively.

The fitting results indicate that the disk is moderately ionized, with an ionization parameter of log(ξ) ≈ 3. We measure the truncation radius of the optically-thick accretion disk to be R_in (EPOCH 1) = 27 +10/−6 r_g and R_in (EPOCH 2) > 38 r_g. Based on the definition of the ionization parameter, we estimate that the density of the accretion disk, n, at the radius where the Fe K line is generated decreases from ∼10¹⁸ cm⁻³ to ∼10¹⁶ cm⁻³ between EPOCH 1 and EPOCH 2.
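To make the density estimate above concrete: taking the definition ξ = L_ion/(nR²) with the luminosities from Table 1, the following back-of-the-envelope calculation (ours, not taken from the paper) reproduces the quoted orders of magnitude. The reflection radius adopted for EPOCH 2 (∼100 r_g) is illustrative, since only a lower limit on R_in is measured in that epoch.

import numpy as np

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33    # cgs units
M = 8 * M_sun
r_g = G * M / c**2                            # gravitational radius, ~1.2e6 cm

def disk_density(L_ion, R_rg, log_xi):
    """n = L_ion / (xi * R^2), from the definition xi = L_ion / (n R^2)."""
    R = R_rg * r_g
    return L_ion / (10**log_xi * R**2)

# EPOCH 1: L ~ 2.6e36 erg/s, reflection radius ~27 r_g, log(xi) ~ 3
print(f"n(EPOCH 1) ~ {disk_density(2.6e36, 27, 3):.1e} cm^-3")   # ~1e18
# EPOCH 2: L ~ 1.9e35 erg/s, reflection radius ~100 r_g (illustrative)
print(f"n(EPOCH 2) ~ {disk_density(1.9e35, 100, 3):.1e} cm^-3")  # ~1e16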
The best fit prefers a low inclination angle for the accretion disk, and only upper limits are obtained: i (EPOCH 1) < 20° and i (EPOCH 2) < 39°. This is expected given the presence of a narrow Fe Kα line, as line broadening caused by relativistic effects becomes less apparent when the disk is viewed close to face-on (Laor 1991). R_in and i are degenerate, as they are both related to the width of the Fe Kα line. Without a more complicated line profile, like the broad and asymmetric lines detected during the BHS, the two parameters cannot be uniquely constrained. To investigate the correlation between R_in and i, we plot the Δχ² contours in Figure 3. As shown in panel (b), similar to the case of GX 339-4 discussed in Tomsick et al. (2009), the model tends to prefer a larger inner disk radius with increasing disk inclination. Therefore, we note that by letting both R_in and i vary freely, we are quoting a conservatively small value for the disk truncation radius. In addition, the best fit leads to a super-solar iron abundance, similar to that found in the BHS (Bharali et al. 2019; Buisson et al. 2019). It is currently uncertain whether the high iron abundance frequently found in spectral modeling with ionized disk reflection models represents the true elemental abundance in the accretion disk, or is an overestimate resulting from physical processes overlooked in the calculation of the reflection models. There is evidence that this issue might be mitigated by using reflection models assuming a high disk density (e.g., Tomsick et al. 2018; Jiang et al. 2019). We note that the iron abundance is known to be mostly related to the line strength rather than the line width; thus it is unlikely to have a significant effect on the estimate of the disk truncation radius here. The best-fit reflection fraction is R_ref ≈ 0.06, significantly lower than that measured in the BHS of BHBs, which often requires a reflection fraction greater than unity. The reflection fraction in the BHS is believed to be enhanced by strong light-bending effects near the black hole (e.g., Miniutti & Fabian 2004; Reis & Miller 2013; Xu et al. 2018a,b). The reflection fraction parameter in the relxill model is defined as the ratio of the coronal intensity illuminating the disk to that reaching the observer. The extremely low value we find here in the LHS of MAXI J1820+070 indicates that the solid angle subtended by the reflector is small, which is consistent with the scenario that the inner accretion disk is significantly truncated.

TIMING ANALYSIS

We produced power spectral densities (PSDs) from the NuSTAR (EPOCH 1) and XMM-Newton EPIC-pn (EPOCH 2) light curves, in the 3-79 keV and 0.6-10 keV energy bands, respectively. The NuSTAR light curves of FPMA and FPMB were added using the lcmath tool in XRONOS. We produced the PSDs from light curves with time bins of 0.5 s, averaged in intervals of 2¹³ bins. The PSDs were calculated in the rms normalization using powspec, with the white noise subtracted. The NuSTAR and XMM-Newton PSDs were geometrically rebinned by factors of 1.03 and 1.05, respectively, to reach nearly equally spaced frequency bins on a logarithmic scale. We fit the PSDs in XSPEC with a multi-Lorentzian model using a unity response file: several zero-centered broad Lorentzians for the band-limited noise continuum, one narrow Lorentzian for the QPO, and one for its possible sub-harmonic. As shown in Figure 4, we find a QPO in the NuSTAR and XMM-Newton PSDs at frequencies of ν (EPOCH 1) = 11.6 ± 0.2 mHz and ν (EPOCH 2) = 2.8 ± 0.1 mHz, detected at 5.3σ and 3.2σ via the F-test. The QPO has an rms variability of 13 ± 1% and 11 ± 3% in EPOCH 1 and EPOCH 2, respectively. The QPO is detected in the mHz range, lower than the typical frequency range of low-frequency QPOs found in BHBs (0.1-30 Hz). However, the shape of the noise continuum, and the fact that the QPO is located close to the low-frequency break, are consistent with the type-C QPOs commonly found in BHBs (Belloni & Motta 2016). The PSD is similar to that detected in MAXI J1820+070 during the BHS (Buisson et al. 2019), only with the QPO and the low-frequency break extending to even lower frequencies in the LHS.
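For readers unfamiliar with the rms normalization used here, the sketch below shows the essential steps on a simulated light curve: an rms-normalized periodogram with the Poisson noise level subtracted, followed by a Lorentzian fit to the QPO. Interval averaging and geometric rebinning (as done with powspec in the actual analysis) are omitted for brevity, and the injected sinusoid is only a stand-in; a real QPO peak is broader.

import numpy as np
from scipy.optimize import curve_fit

def rms_psd(rate, dt):
    """Fractional-rms-normalized periodogram of an evenly sampled count-rate
    light curve, with the Poisson (white-noise) level 2/mean subtracted."""
    n = len(rate)
    mean = rate.mean()
    freq = np.fft.rfftfreq(n, dt)[1:]                  # drop the zero frequency
    power = np.abs(np.fft.rfft(rate - mean)[1:])**2
    psd = 2.0 * dt / (n * mean**2) * power             # units of (rms/mean)^2 / Hz
    return freq, psd - 2.0 / mean

def lorentzian(f, norm, f0, fwhm):
    # Lorentzian centered at f0; `norm` is the integrated power.
    return norm * (fwhm / (2.0 * np.pi)) / ((f - f0)**2 + (fwhm / 2.0)**2)

# Stand-in light curve: Poisson counts at ~20 counts/s (dt = 0.5 s) with a
# weak 11.6 mHz modulation mimicking the EPOCH 1 QPO.
rng = np.random.default_rng(0)
dt, n = 0.5, 2**13
t = np.arange(n) * dt
counts = rng.poisson(10.0 * (1 + 0.2 * np.sin(2 * np.pi * 0.0116 * t)))
rate = counts.astype(float) / dt

freq, psd = rms_psd(rate, dt)
sel = (freq > 0.002) & (freq < 0.05)
popt, _ = curve_fit(lorentzian, freq[sel], psd[sel], p0=[0.02, 0.0116, 0.002])
print(f"QPO centroid ~ {popt[1] * 1e3:.1f} mHz")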
DISCUSSION AND CONCLUSION

We detect a narrow Fe Kα emission line with high S/N in NuSTAR and XMM-Newton observations of the black hole X-ray binary MAXI J1820+070 during the second rebrightening period after its 2018 main outburst. The X-ray spectral and timing properties indicate that the source was in the LHS at the time of the observations. Spectral modeling reveals a very low absorption column density which, combined with the moderate ionization state of the reflecting material, confirms that the Fe Kα line is produced by reflection from the accretion disk rather than by torus-like, Compton-thick obscuring material of the kind commonly found in AGNs (Hickox & Alexander 2018) and in the BHB V404 Cygni (Motta et al. 2017). The line is visibly narrow and lacks significant relativistic broadening, in contrast to the broad line profile observed during the BHS (see Figure 5(a) for a comparison of the line profiles), providing direct evidence for significant truncation of the inner accretion disk at low accretion rates in a BHB.

There are disparities in the literature regarding the disk inclination of MAXI J1820+070. X-ray dips were observed during early phases of the outburst (Kajava et al. 2019), and a sharp increase in the Hα emission line EW was reported and interpreted as a grazing eclipse of the accretion disk (Torres et al. 2019), suggesting a high inclination of i ≈ 60°-80° for the outer part of the accretion disk. We note that the inclination of this system is also unlikely to be very low (e.g., < 10°), because the radial velocity (RV) measurements are significant in amplitude (Torres et al. 2019). The measured jet inclination angle of 62 ± 3° also indicates that the system is viewed at high inclination (Atri et al. 2020). In contrast, modeling the relativistic reflection spectra during the BHS yields a low inclination of i ≈ 30° for the inner part of the accretion disk (Buisson et al. 2019; Bharali et al. 2019). If both inclination estimates are robust, this implies a strong disk warp of ∼30°-50°. Our spectral fitting of the narrow Fe Kα line in the LHS also prefers a very low inclination. However, as discussed above, spectral modeling is expected to be biased towards low inclinations in the presence of a narrow line profile. Without strong relativistic distortion effects, the inclination is poorly constrained, and only upper limits are obtained. Therefore, we also fitted the spectra with the disk inclination fixed, still assuming a disk emissivity index of q = 3. This results in a larger disk truncation radius and a slightly degraded fit quality: when i is frozen at 30°, we obtain R_in (EPOCH 1) = 54 +27/−17 r_g and R_in (EPOCH 2) > 88 r_g, with Δχ² (EPOCH 1) = 11.7 and Δχ² (EPOCH 2) = 1.6; when i is fixed at 70°, the constraints become R_in (EPOCH 1) = 312 +226/−114 r_g and R_in (EPOCH 2) > 297 r_g, with Δχ² (EPOCH 1) = 9.9 and Δχ² (EPOCH 2) = 5.2. These fits, although statistically slightly worse, leave no clear residuals with physical implications and may thus still be considered acceptable within calibration uncertainties.

There have been a number of observational campaigns aimed at investigating the evolution of the inner disk radius with accretion rate in BHBs via the reflection method (e.g., Petrucci et al. 2014; Fürst et al. 2016). However, the results are often poorly constrained or highly model-dependent due to low statistics, especially at low flux states. In this work, we find evidence for a large inner disk radius in MAXI J1820+070 at low accretion rates (a few tens to a few hundreds of r_g, depending on the disk inclination). Combined with earlier measurements during the BHS by NuSTAR (Buisson et al. 2019; Bharali et al. 2019), this suggests that the inner edge of the accretion disk remains stable around the ISCO when the accretion rate is high, and starts to recede from the ISCO as the luminosity drops below ∼1% of the Eddington luminosity (see Figure 5(b)). The critical accretion rate at which significant disk truncation occurs is consistent with that measured in the well-studied source GX 339-4 via the reflection method (Tomsick et al. 2009), and with that found from the study of several BHBs by systematic modeling of their thermal disk components (Cabanac et al. 2009). In terms of the disk receding with accretion rate, the results agree well with the theoretical prediction that the inner part of the accretion disk is replaced by an ADAF at low accretion rates (< 1% of the Eddington limit; Esin et al. 1997), although the model predicts that the optically-thick accretion disk should be truncated in all hard states. In addition, we detected a QPO at ν (EPOCH 1) = 11.6 ± 0.2 mHz and ν (EPOCH 2) = 2.8 ± 0.1 mHz. The QPO and the low-frequency break are found at lower frequencies than in the BHS.
Qualitatively, the longer timescales imply a physically larger size for the hot, optically-thin Comptonization region around the black hole, where the QPO is believed to be generated. In the propagating mass accretion rate fluctuations model, the low-frequency break marks the viscous timescale at the outer edge of the Comptonization region (e.g., Ingram & Done 2010; Ingram & van der Klis 2013). Various theoretical models have been put forward to explain the low-frequency QPOs in BHBs, but the exact mechanism is still highly uncertain. One currently promising model is the Lense-Thirring (LT) precession model. Adopting the simplified assumption that the QPO is caused by Lense-Thirring precession of a test particle orbiting a spinning black hole at the disk truncation radius, we calculate the inner disk radius inferred from the QPO frequencies using a black hole mass of 8 M⊙ and a spin of a* = 0.3. This leads to characteristic truncation radii of R_in (EPOCH 1) ∼ 60 r_g and R_in (EPOCH 2) ∼ 100 r_g. As a crude estimate, these are broadly similar to the spectral modeling results, supporting the picture of an inner accretion disk that is significantly truncated in the LHS. During the BHS, however, we note that there is usually disagreement between the measurements from the two methods (e.g., Fürst et al. 2016; Xu et al. 2017; Buisson et al. 2019); in particular, Buisson et al. (2019) find that the QPO frequency and the disk inner radius are not connected. As discussed in Ingram & Done (2011), it is possible that the discrepancy is related to physical complexities currently not well understood and thus not included in the QPO models, or to complexities related to the intrinsic properties of the corona.
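The test-particle Lense-Thirring estimate quoted above is straightforward to reproduce. Assuming the standard nodal precession frequency of a prograde test-particle orbit around a Kerr black hole, the sketch below solves for the radius at which the precession frequency matches each QPO frequency; with M = 8 M⊙ and a* = 0.3 it returns radii close to the ∼60 r_g and ∼100 r_g quoted above.

import numpy as np
from scipy.optimize import brentq

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33     # cgs units
M, a = 8 * M_sun, 0.3                          # black hole mass and spin

def nu_nodal(r):
    """Nodal (Lense-Thirring) precession frequency, in Hz, of a prograde
    test-particle orbit at radius r (in units of r_g) around a Kerr hole."""
    nu_phi = c**3 / (2 * np.pi * G * M) / (r**1.5 + a)   # orbital frequency
    return nu_phi * (1 - np.sqrt(1 - 4 * a / r**1.5 + 3 * a**2 / r**2))

for nu_qpo in (11.6e-3, 2.8e-3):               # EPOCH 1 and EPOCH 2 QPOs
    r = brentq(lambda x: nu_nodal(x) - nu_qpo, 5, 1000)
    print(f"nu_QPO = {nu_qpo * 1e3:.1f} mHz  ->  r ~ {r:.0f} r_g")

For radii well outside the ISCO, the bracketed term reduces to ∼2a/r^1.5, so the inferred radius scales roughly as (a/ν_QPO)^(1/3): lower QPO frequencies map to larger truncation radii.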
Additional uncertainties in the measurement of the inner disk radius come from the poorly known nature of the corona, which affects the illumination pattern of the disk. In the above analysis, we assumed a disk emissivity profile ∝ r^−q with q = 3, as expected for a standard accretion disk in the Newtonian regime. An alternative explanation for the narrow Fe K line that does not involve disk truncation is a low disk emissivity index (q < 2), so that most of the contribution to the reflection features comes from the outer disk. One possible accretion geometry that yields such a low emissivity index is a large lamppost height for the corona, which is believed to be associated with the base of a jet (Dauser et al. 2013). The emissivity expected for a lamppost geometry in Newtonian gravity is q ∼ 0 for r < h and q ∼ 3 for r > h (Vaughan et al. 2004). The spectra can be equally well fitted by the reflection model assuming a lamppost geometry, relxilllp, with a lamppost height of h (EPOCH 1) ∼ 40 r_g and h (EPOCH 2) > 70 r_g, without the need to invoke disk truncation. However, this model cannot self-consistently explain the low reflection fraction unless significant beaming away from the accretion disk is involved. For a reflection fraction of ∼0.1, it requires bulk motion with a Lorentz factor of γ ∼ 1.2 when viewed at an inclination of 30°, and γ ∼ 1.6 at an inclination of 60° (Beloborodov 1999). It is uncertain whether strong beaming and an accretion geometry with a compact corona at a large lamppost height above the disk are realistic descriptions of the system. There is evidence that significant beaming is absent in the X-ray emission of BHBs in the LHS (Narayan & McClintock 2005). Although the case at low accretion rates is less clear, previous successful applications of the lamppost model to the X-ray spectra of bright BHBs and AGNs have measured low lamppost heights of < 10 r_g, or steep disk emissivity profiles (Fabian et al. 2015). Thus the physical implications of the large lamppost height, or low emissivity index, required by the spectral fitting here are currently unclear and require further investigation. The extended disk-corona model and the lamppost corona model are two competing coronal geometries that have been proposed (Chauvin et al. 2018). The QPO frequency and the low-frequency break in the PSD suggest a physically large size for the Comptonization region, consistent with the extended-corona, receding-accretion-disk scenario. The inner disk being truncated at a large radius also naturally explains the non-detection of any thermal disk emission. Disk truncation provides a straightforward and physically reasonable explanation for the narrow Fe Kα line we detect in MAXI J1820+070. However, we note that the alternative accretion geometry of a high lamppost corona cannot be ruled out by our dataset and may also be plausible.

We thank the anonymous referee for helpful comments that improved the paper. We thank Norbert Schartel for approving the XMM-Newton DDT observation and the XMM-Newton SOC for prompt scheduling of the observation. JH acknowledges support from an appointment to the NASA Postdoctoral Program at the Goddard Space Flight Center, administered by the USRA through a contract with NASA. DJW acknowledges support from an STFC Ernest Rutherford Fellowship. This work was supported under NASA contract No. NNG08FD60C and made use of data from the NuSTAR mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by the National Aeronautics and Space Administration. We thank the NuSTAR Operations, Software, and Calibration teams for support with the execution and analysis of these observations.
Clinical and quality-of-life outcomes of a combined synthetic scaffold and autogenous tissue graft procedure for articular cartilage repair in the knee

Background: Injuries to the articular cartilage of the knee often fail to heal properly due to the hypocellular and avascular nature of this tissue. Subsequent disability can limit participation in sports and decrease quality of life. Subchondral bone perforations are used for the treatment of small defects. In larger lesions, filling the central portion becomes difficult, and scaffolds can be used as adjuvants, providing a matrix onto which the defect can be filled in completely. Autogenous cartilage grafts can also be combined, acting as an inducer and improving healing quality, all in a single procedure.

Methods: This observational study evaluated the clinical and quality-of-life outcomes of patients with articular cartilage lesions of the knee undergoing repair via a microfracture technique combined with a synthetic scaffold and autogenous cartilage graft, with transosseous sutures and fibrin glue fixation, at 12 months of follow-up. Secondarily, it assessed whether combined procedures, previous surgical intervention, traumatic aetiology, lesion location, and age affect outcomes. The sample consisted of adult patients (age 18-66 years) with symptoms consistent with chondral or osteochondral lesions, isolated or multiple, ICRS grade III/IV, 2-12 cm² in size. Patients with corrected angular deviations or instabilities were included. Those with BMI > 40 kg/m², prior total or subtotal (> 30%) meniscectomy, second-look procedures, and follow-up < 6 months were excluded. Pain (VAS), physical activity (IKDC), osteoarthritis (WOMAC), and general quality of life (SF-36) were assessed.

Results: 64 procedures were included, comprising 60 patients. There was significant improvement (P < 0.05) in VAS score (5.92 to 2.37), IKDC score (33.44 to 56.33), and modified WOMAC score (53.26 to 75.93) after surgery. The SF-36 showed significant improvements in the physical and mental domains (30.49 to 40.23 and 46.43 to 49.84, respectively; both P < 0.05).

Conclusions: The combination of microfractures, autogenous crushed cartilage graft, a synthetic scaffold, and transosseous sutures with fibrin glue provides secure fixation for the treatment of articular cartilage lesions of the knee. At 12-month follow-up, function had improved by 20 points on the IKDC and WOMAC, and quality of life by 10 points on the SF-36. Age > 45 years had a negative impact on outcomes.

Background

Injuries to the articular cartilage of the knee often fail to heal properly due to the hypocellular and avascular nature of this tissue. Subsequent disability can limit participation in sports and decrease quality of life, making surgical treatment of these lesions an attractive option [1]. In a meta-analysis, surgical repair of articular cartilage lesions was associated with a 76% rate of return to sport. Medium-term activity scores for this procedure were comparable to those achieved with meniscus repair [2]. The subchondral bone perforations introduced by Pridie [3] and modified by Steadman et al. [4] are still used for the treatment of small (2-3 cm²), International Cartilage Repair Society (ICRS) grade III and IV chondral defects. They can repair the defect from the periphery towards the centre, but in lesions larger than 3 cm², filling the central portion becomes difficult. To improve this, scaffolds can be used as adjuvants, providing a matrix onto which the defect can be filled in completely [5].
Autogenous cartilage grafts can be combined with this matrix, acting as an inducer and improving healing quality, all in a single procedure [6]. Studies have reported the impact of surgical cartilage repair on physical activity scores (International Knee Documentation Committee, IKDC), osteoarthritis scales (Western Ontario and McMaster Universities Arthritis Index, WOMAC), and visual analogue scales (VAS) of pain; however, few have assessed the impact on quality-of-life outcomes measured with validated instruments, such as the Medical Outcomes Short-Form Health Survey (SF-36). A brief review of the literature found no studies using the WOMAC, IKDC, and SF-36 concurrently. Thus, the objective of this study was to evaluate the clinical and quality-of-life outcomes of the microfracture technique, combined with autologous cartilage graft and a synthetic matrix, in the repair of articular cartilage lesions of the knee.

Methods

This was a retrospective analysis of medical records of patients who underwent surgical repair of articular cartilage lesions at a single hospital. Approval was obtained from the Ethics Committee of the University of Montreal (dossier no. CER 2021-2115). The inclusion criteria were age 18-66 years and symptoms consistent with single or multiple chondral or osteochondral lesions, grade III/IV on the International Cartilage Repair Society (ICRS) classification, 2-12 cm² in size. The exclusion criteria were body mass index (BMI) > 40 kg/m², meniscectomy > 30% in the treated compartment, second-look procedures, and follow-up of 6 months or less. Patients with axis deviation of the lower limbs, patellofemoral joint abnormalities, or ligament instability were accepted; any such issues were corrected during the same surgical procedure. Data were collected pre-operatively and at 6 and 12 months post-operatively. All patients completed a subjective VAS for pain; the transformed WOMAC score and the IKDC scale for functional assessment; and the SF-36 instrument for quality of life. Patients were divided into two groups for each comparison, to ascertain whether there was any statistically significant difference at 12 months versus baseline for the following variables: combination with another intervention during the same procedure; history of previous surgery; aetiology (traumatic or non-traumatic); lesion location (patellofemoral or femorotibial); and age (younger or older than 45 years). Failure was defined as implant loosening, absence of improvement within 12 months, or the need for a second procedure.

Surgical procedure

Patients were placed in the supine position with the knee at 90°. No tourniquet was used. Arthroscopy was performed to confirm the size and severity of the cartilage lesion, followed by a mini-arthrotomy depending on the location of the lesion. The subchondral bone was exposed until a well-defined 90° border between the cartilage and the bone was obtained. Microfractures were performed from the periphery towards the centre, with a diameter of 1.2 mm and a distance of 3-4 mm between drill holes. A piece of disposable foil was placed on the bed of the lesion to mark out its shape. This was then used as a mould for the membrane, which was cut to fit the defect. The scaffold was prepared with sutures (#1 absorbable polyglycolic acid) at each end of the membrane.
The suture knots were pre-formed so they would remain buried in the bone. The scaffold was hydrated with the patient's own blood (Fig. 1). Samples of autogenous cartilage from the healthy intercondylar area were harvested with a trephine and crushed with a scalpel blade. The resulting autogenous cartilage graft was placed in the bed of the lesion, followed by the scaffold, and the membrane was secured with buried transosseous sutures. After membrane fixation, fibrin glue was applied for added final stability (Fig. 2). No orthotic devices or splints were used post-operatively. In cases where additional procedures were necessary, such as osteotomies or repair of ligament injuries, these were performed before the articular surface was repaired. Rehabilitation was started on the first postoperative day, with range-of-motion exercises. Patients were cleared for progressive weight-bearing immediately in the case of patellofemoral lesions, and after the sixth postoperative week for femoral condyle lesions. Activities were resumed 12 weeks after the operation, and gradual return to sport was allowed after 6 months.

Statistical analysis

Patients were evaluated at three separate time points: pre-operatively, and at 6 and 12 months post-operatively. Repeated-measures analysis of variance (ANOVA) was performed to identify significant differences between time points. When comparing VAS, WOMAC, IKDC, and SF-36 scores at the last time point between subgroups, analysis of covariance (ANCOVA) was used, adjusting for patients' pre-operative scores (a minimal sketch of both analyses is shown after the results below). P-values < 0.05 were deemed statistically significant.

Results

Sixty patients were evaluated (36 women and 24 men; mean age 44 ± 12.7 years), for a total of 64 knees (33 right and 31 left). Overall, 75 articular cartilage lesions were treated (25.3% grade III and 74.6% grade IV). Eleven knees had two lesions each. The average lesion size was 4.4 ± 2 cm²; the smallest lesion was 2 cm² and the largest 12 cm². The patella was the most frequent site of injury, accounting for 39.1% of cases, followed by the femoral condyles with 37.5%; the remainder were combined lesions. When a concomitant procedure was combined with articular surface repair, osteotomy of the anterior tibial tuberosity (ATT) was the most common intervention, performed in 23.4% of cases (always to correct patellar alignment); 14.1% had a partial meniscectomy, 14.1% had a tibial valgus osteotomy, and 18.7% had no additional procedures. Just over half of the procedures (51.6%) were performed on patients with progressive lesions and no preceding trauma, while the remaining 48.4% had a history of trauma. In 71.9% of cases, there had been no previous surgery. IKDC scores showed progressive functional improvement, as presented in Fig. 3. All subjective WOMAC scores (pain, joint stiffness, and functional limitation) improved significantly from baseline after surgery (P < 0.001); details are given in Fig. 4. Improvement in the physical component of the SF-36 quality-of-life score is shown in Fig. 5. The mental component of the SF-36 quality-of-life score also demonstrated improvement (P < 0.05), rising from 46.43 at baseline to 49.84 at 12 months.
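As a hedged illustration of the statistical analysis described above (not the study's actual code or data), the following Python sketch runs a repeated-measures ANOVA across the three time points and a baseline-adjusted ANCOVA between subgroups using statsmodels, with simulated scores standing in for the real measurements.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60
pre = rng.normal(33, 8, n)                  # illustrative IKDC-like baselines
m6 = pre + rng.normal(15, 6, n)             # 6-month scores
m12 = pre + rng.normal(23, 6, n)            # 12-month scores

# Repeated-measures ANOVA across the three time points (long format):
long = pd.DataFrame({
    "id": np.tile(np.arange(n), 3),
    "time": np.repeat(["pre", "6m", "12m"], n),
    "score": np.concatenate([pre, m6, m12]),
})
print(AnovaRM(long, depvar="score", subject="id", within=["time"]).fit())

# ANCOVA: compare 12-month scores between subgroups, adjusting for baseline:
df = pd.DataFrame({"pre": pre, "m12": m12,
                   "age_group": rng.choice(["<45", ">=45"], n)})
print(smf.ols("m12 ~ pre + C(age_group)", data=df).fit().summary().tables[1])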
When stratified into groups, age was the only relevant factor, with more favourable outcomes in patients under age 45 on the final IKDC score (from 37.8 to 65.0 in those younger than 45, versus 30.3 to 50.0 in those aged 45 or older; P = 0.010) and the final total WOMAC score (from 60.8 to 84.4 in those younger than 45, versus 48.4 to 69.7 in those 45 or older; P = 0.022). Better results were also achieved by patients under age 45 on the VAS and SF-36 (physical and mental domains) at 12 months after surgery, but on these instruments the difference was not statistically significant. Likewise, there was no significant difference in IKDC, total WOMAC, VAS, or SF-36 (physical and mental) final scores when comparing patients with traumatic versus non-traumatic aetiology, previous surgery versus no previous surgery, articular cartilage repair alone versus combined surgical procedures, or lesion location (patellofemoral versus femorotibial).

In five cases (7.7%), the procedure was considered to have failed. In one case, the implant came loose from the patella; it was removed, and the lesions (multifocal) were debrided. In two other cases, the patients subsequently required total knee replacement (one had a multifocal lesion and the other a 9-cm² lesion). Two cases required second-look patellofemoral arthroplasty (both had multifocal lesions and had undergone combined procedures). There were four complications during the follow-up period (6%), all in successful procedures: one deep venous thrombosis, resolved with conservative treatment only; two cases of arthrofibrosis, which improved with manipulation under anaesthesia; and one superficial skin infection, treated with oral antibiotics.

Discussion

The objective of articular cartilage repair is to provide long-lasting improvement in the patient's quality of life. It is also important that the technique be readily available and as harmless as possible. Although arthroscopic debridement gives some good results, the supporting studies are limited and the evidence is weak [7]. On the other hand, cultured chondrocyte implantation provides good repair capacity but has disadvantages such as high cost, limited availability, and the need for two procedures [8]. Biological interventions such as laboratory-expanded mesenchymal and adipose-derived cells have shown some promising results, but some remain experimental, and some studies question their durability and safety [9]. Salzmann et al. [6] demonstrated a procedure using a membrane seeded with autologous cartilage cells, which can solve the problems of cost and availability and be performed in a single stage. Among the 64 repairs carried out in our sample, only once did the membrane come loose (1.5%); the other four cases of failure were likely due to progression of the underlying degenerative process, requiring conversion to arthroplasty (6.25%). Gomoll et al. [10], in a study of 101 patients with up to 1 year of follow-up, used fibrin glue, adding sutures along the periphery only when they felt the need for greater fixation, and achieved a 5% reoperation rate due to hypertrophy or delamination. In our study, VAS scores for pain decreased from 5.92 to 2.37 in the postoperative period (P < 0.001), demonstrating clinical improvement in this respect at 12 months of follow-up. Siclari et al.
[11], using the same microfracture-plus-scaffold approach combined with platelet-rich plasma, with Smart Nails® (ConMed Linvatec Italy, Milan, Italy) and fibrin glue for fixation, also reported improvement in the Knee Injury and Osteoarthritis Outcome Score (KOOS) (from 54.1 to 93.5). Verdonk et al. [12] used the MaioRegen® three-layer scaffold (Fin-Ceramica Faenza SpA, Faenza, Italy) and likewise reported a reduction in VAS pain scores, from 63.1 to 22.7. Although pain may not be related to the articular cartilage lesion per se, it is interesting to note that it improves markedly after cartilage repair procedures are performed. This effect can be credited to restoration of the load-bearing property, to reduction of subchondral pressure by the microfractures, or to neovascularization of the lesion site, reducing inflammatory cytokine levels and thus relieving symptoms [13,14]. We also observed improvements in IKDC scores at 12 months postoperatively (from 33.44 to 56.33, P < 0.001). Berruto et al. [15], using a three-layer matrix implanted through the "press-fit" technique in 49 patients, reported an increase in IKDC scores from 45.55 to 70.86 at 12 months. Delcogliano et al. [16] used the same technique and matrix in 19 patients (21 lesions) and demonstrated a gain of approximately 30 score points (from 35.7 to 67.7) in 12 months. Theoretically, a three-layer matrix mimics hyaline cartilage more closely. Even so, the clinical outcomes achieved are very similar and may be attributable to the use of bone perforations or autogenous grafts. In the present study, an improvement of approximately 23 points in the mean IKDC score was observed; our patients had a lower baseline (pre-operative) score than the case series of Berruto et al. [15] (45 points) and a score similar to that reported by Delcogliano et al. [16] (35.7 points). This may be attributed to the more advanced average age of our patients compared with the other series (44 years, versus 37 and 33 years, respectively) [15,16]. In our cohort, the total WOMAC score improved from 53.26 to 75.93 (P < 0.001) at 12 months postoperatively. Dhollander et al. [17] used allogeneic chondrocyte cell cultures from fresh tissues protected with periosteum; this procedure was performed on 21 patients aged 12-47 years, who obtained an average 42% improvement in WOMAC. For comparison, in our study we observed a 42.6% increase in total WOMAC scores. Although some of the cases in the present study were focal, trauma-related lesions, a higher percentage of patients had progressive pain, probably indicative of osteoarthritis rather than a traumatic aetiology; the WOMAC score may be a better tool to assess these patients, since it is a scale designed for degenerative processes. Few studies have demonstrated improvement in quality of life after cartilage repair. The SF-36 showed an improvement in its physical domain, from 30.49 at baseline to 40.23 post-operatively (P < 0.001). Gelber et al. [18] reported an improvement in SF-36 scores from 53.9 to 86.6. In our series, improvement was not as marked, which again may be explained by the higher average age of our patients (44 versus 36 years). Older patients tend to have a lower SF-36 score because they experience greater difficulty climbing stairs, kneeling, and performing other activities of daily living. Cole et al. [19] reported an improvement in scores from 35.4 to 45.5 in the treatment of osteochondritis-related lesions.
This more modest improvement, closer to ours, may be due to their patients having osteochondritis dissecans, which carries a less favourable prognosis. The normal range of the SF-36 physical domain score has been described as 49.7 ± 9.4 in the Canadian population [20]. Thus, it is clear that, despite the improvement in quality of life, these patients are unlikely to reach the level of quality of life experienced by the general population. Knees with a cartilage lesion but no history of surgery and no comorbid pathologies, so-called "green knees", exclude those most likely to have an unfavourable outcome. However, most patients (70% of cases) have combined lesions, or "red knees" [21,22]. Martinčič et al. [23] followed 151 patients for an average of 10 years after knee cartilage repair and found no statistically significant difference in outcomes between patients with and without a history of prior surgery. In our study, we chose not to exclude "red knees", so as to ensure that our sample was representative of the patients most commonly seen in clinical practice. We found no significant difference when patients were stratified by traumatic versus non-traumatic aetiology, prior surgery, combined surgery, or lesion location. Despite showing an improvement in scores, our patients aged > 45 years fared rather poorly compared with their younger counterparts. Turinetto et al. [24] demonstrated that advancing age decreases the differentiation potential, immunomodulatory effect, and migration ability of pluripotent cells and chondrocytes, thus reducing the healing power of tissues. One alternative to improve this capacity would be to introduce cell growth factors such as SDF-1alpha [25], which has been demonstrated in laboratory studies to increase the recruitment of cartilage progenitor cells; another would be to use expanded mesenchymal cells, which are showing encouraging results [26]. Overall, there were five cases of treatment failure (7.7%). Assessment showed that all patients needing conversion to total knee replacement had lesions that required greater healing potential and were over 45 years of age. The only patient whose implant came loose was also > 45 years old and had a large lesion, despite no history of trauma. Compared with other series, however, our failure rate was similar: Verdonk et al. [12] reported a 5.3% failure rate, and Delcogliano et al. [16], 2 out of 19 cases (10.5%).

Conclusions

A combined technique consisting of microfractures, autogenous graft, a synthetic scaffold, transosseous sutures, and fibrin glue provides secure fixation for the treatment of articular cartilage injuries of the knee. Patients experienced an average improvement of 20 points on the IKDC and WOMAC scales and 10 points on the SF-36 score. Age greater than 45 years had a negative impact on outcomes.
Inhibition of mTOR signaling and clinical activity of metformin in oral premalignant lesions

BACKGROUND. The aberrant activation of the PI3K/mTOR signaling circuitry is one of the most frequently dysregulated signaling events in head and neck squamous cell carcinoma (HNSCC). Here, we conducted a single-arm, open-label phase IIa clinical trial in individuals with oral premalignant lesions (OPLs) to explore the potential of metformin to target PI3K/mTOR signaling for HNSCC prevention. METHODS. Individuals with OPLs, but who were otherwise healthy and without diabetes, underwent pretreatment and posttreatment clinical exam and biopsy. Participants received metformin for 12 weeks (week 1, 500 mg; week 2, 1000 mg; weeks 3–12, 2000 mg daily). Pretreatment and posttreatment biopsies, saliva, and blood were obtained for biomarker analysis, including IHC assessment of mTOR signaling and exome sequencing. RESULTS. Twenty-three participants were evaluable for response. The clinical response rate (defined as a ≥50% reduction in lesion size) was 17%. Although lower than the proposed threshold for favorable clinical response, the histological response rate (improvement in histological grade) was 60%, including 17% complete responses and 43% partial responses. Logistic regression analysis revealed that, when compared with never smokers, current and former smokers had statistically significantly increased histological responses (P = 0.016). Remarkably, a significant correlation existed between decreased mTOR activity (pS6 IHC staining) in the basal epithelial layers of OPLs and the histological (P = 0.04) and clinical (P = 0.01) responses. CONCLUSION. To our knowledge this is the first phase II trial of metformin in individuals with OPLs, providing evidence that metformin administration results in encouraging histological responses and mTOR pathway modulation, thus supporting its further investigation as a chemopreventive agent. TRIAL REGISTRATION. NCT02581137. FUNDING. NIH contract HHSN261201200031I, grants R01DE026644 and R01DE026870.

Introduction

HNSCC was estimated to account for 14,500 deaths in 2020 (1). Despite encouraging novel therapies, only a limited improvement in the survival rates for patients with HNSCC has occurred in the last 4 decades, particularly in tongue and other oral cavity cancers, which are often associated with tobacco use and alcohol consumption as the main risk factors (2). Poor treatment outcomes are generally the result of delayed diagnosis and "field cancerization," a term describing the occurrence of multifocal potentially malignant lesions or second primary HNSCC (3). Clearly, prevention and early diagnosis are keys to significantly improving the prognosis of patients with HNSCC. Ten randomized clinical trials have been reported for oral cancer chemoprevention, none of which had a positive, long-term effect on cancer development. Initial studies using high doses of 13-cis-retinoic acid reduced oral premalignant lesions (OPLs) and prevented second primary tumors (4). However, chronic administration of 13-cis-retinoic acid was not tolerable, and although lower doses were tolerable, they were ineffective (5). Similarly, the recently reported Erlotinib Prevention of Oral Cancer trial, which was the first trial involving participant selection based on risk (6, 7), did not provide an effective targeted preventive strategy for HNSCC, specifically in individuals with OPLs, who are at a higher risk of developing HNSCC (8).
The recent elucidation of the HNSCC genomic landscape revealed that multiple genetic alterations underlie the development of this aggressive malignancy, including mutations and genetic alterations in the TP53, FAT1, NOTCH1, CASP8, CDKN2A (p16INK4A), and PIK3CA genes (9–11). In particular, PIK3CA, which encodes the PI3Kα catalytic subunit, is the most commonly mutated oncogene in HNSCC (~20%). This underlies our initial observations that the aberrant activation of the PI3K/mTOR signaling pathway is a widespread event in HNSCC (>80% of all HPV− and HPV+ cases; refs. 12, 13). We also observed that mTOR inhibitors (mTORis) display potent antitumor activity in a large variety of genetically defined and chemically induced experimental HNSCC models (13–19) as well as in our recently reported phase II clinical trial in patients with HNSCC (20). This supports the notion that PI3K/mTOR signaling may represent a druggable candidate in HNSCC. However, the potential immunosuppressive activity of direct mTORis may raise safety concerns regarding their long-term use as chemopreventive agents (21). This prompted us to focus on the potential use of metformin, which targets mTOR indirectly, for HNSCC prevention. Metformin is an oral biguanide that is currently the drug of choice for the treatment of type 2 diabetes and is being prescribed to at least 120 million people worldwide (22). Hence, metformin's safety profile for long-term use and the management of its potential side effects are well documented. Metformin treatment reduces tumor cell growth in part by reducing the activity of mTOR as part of its complex, mTORC1 (23–26). In prior studies, we have shown that metformin causes a significant reduction in the conversion of OPLs into HNSCC in mice (27) and that metformin decreases mTOR activity and HNSCC progression by acting on cancer-initiating cells directly (28, 29). Specifically, knockdown of the metformin transporter OCT3, which is highly expressed in normal and neoplastic oral epithelium, or rescuing oral cancer cells from the effects of metformin on mitochondrial complex I, nearly abolishes the antitumor activity of metformin in mice in vivo (28, 29). Aligned with these experimental studies, 2 large retrospective population case-control cohort studies, together involving more than 300,000 diabetic patients, demonstrated a decrease in the risk of HNSCC in individuals on metformin (30, 31). Metformin use also results in better overall survival in diabetic patients diagnosed with laryngeal HNSCC (32). Based on these experimental findings and the emerging epidemiological evidence, we developed a clinical trial (M4OC-Prevent, NCT02581137) in individuals with OPLs to explore the potential use of metformin for HNSCC prevention.

Results

The study opened to accrual in June 2016 and closed to accrual in July 2017 (Figure 1). Thirty-three potentially eligible participants were consented: 4 from UCSD, 16 from BCCA, and 13 from UMN. Of these, 26 met all eligibility criteria and initiated agent intervention. Twenty-two participants completed the 12–14 weeks of agent intervention, and 4 participants did not (2 due to adverse events [AEs] and 2 withdrew consent). One participant who terminated intervention early nonetheless provided postintervention research specimens for outcome evaluation and was included in the analysis. Thus, 23 participants were considered evaluable.
The demographics of participants who initiated the agent (n = 26) are summarized in Table 1. The average age was 58 ± 11 years. Fourteen participants were women. The average BMI of these participants was 30.1 ± 6.8 kg/m². The majority were White and non-Hispanic. Former and current smokers accounted for 42% and 12%, respectively. Baseline disease characteristics of participants who provided postintervention biopsy specimens for research endpoints (n = 23) are also summarized in Table 1. The majority of the participants had mild/moderate dysplasia (87% combined). No erythroplakia lesions were included in the study. The lesions were mostly found on the tongue (70%), with an average lesion size of 239 (±218) mm². Subject and OPL characteristics of individuals providing preintervention and postintervention biospecimens are presented per trial participant in Table 2. IHC analysis of the baseline OPLs revealed that 15 of 23 (65%) were positive for nuclear p53 staining, which is used as a surrogate of p53 mutations (33). Example p53+ and p53− cases are shown in Figure 2. The high expression of OCT3, a metformin transporter that is widely expressed in normal, dysplastic, and cancerous squamous epithelium (28), was confirmed in all tissues tested. EGFR expression levels were classified as high (27%), moderate (33%), and low (33%) based on the staining intensity. All tissues tested were negative for p16, the cell cycle protein that is upregulated in HPV+ HNSCC, lost in the first steps of malignant progression in HPV− HNSCC (34), and used as a surrogate biomarker for HPV status in oropharynx HNSCC. We also examined the expression levels of PTEN, a driver of PI3K/mTOR activation that is often genetically or epigenetically suppressed in HNSCC (35). Only 1 case showed the absence of PTEN immunoreactivity. Thus, most individuals exhibited typical OPLs that were HPV− and p53+, expressed the metformin transporter OCT3, and exhibited variable levels of EGFR, with only 1 case lacking expression of PTEN. Baseline and postintervention serum glucose, hemoglobin A1c (HbA1c), and C-peptide concentrations and serum and saliva metformin concentrations are summarized in Table 3, Table 4, and Figure 3. Metformin intervention significantly suppressed serum HbA1c levels (from 5.7 ± 0.5 to 5.5 ± 0.4%, P = 0.023) but did not change the glucose and C-peptide levels. The postintervention average serum metformin concentration was 705.0 ± 444.0 ng/mL. Metformin was also detectable in saliva, with an average postintervention concentration of 171.0 ± 143.3 ng/mL that was highly correlated with the serum metformin concentration. Most participants had mild or moderate side effects of metformin (Table 5). The clinical response rate was 17% (1-sided 95% CI: 0.06, 1.00). The waterfall plot depicting changes in lesion size in individual participants, clinical and histological response, and smoking status is shown in Figure 4A. As detailed in Figure 4B, none of the participants had a complete clinical response; 17% had a partial response and 17% had progressive disease. Of the participants, 17% had a complete histological response and 43% (1-sided 95% CI: 0.26, 1.00) had a partial histological response, for an overall histological response rate of 60%. Of the participants, 17% had progressive disease.
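The exact binomial machinery behind these rates can be reproduced in a few lines. The sketch below (an illustration, assuming scipy is available; it is not the trial's SAS code) recovers the quoted one-sided Clopper-Pearson intervals from the observed counts, and also checks the design's 87% power at n = 20 described later under Statistics.

```python
from scipy.stats import beta, binom

def clopper_pearson_lower(successes: int, n: int, alpha: float = 0.05) -> float:
    """One-sided exact (Clopper-Pearson) lower confidence bound for a proportion."""
    if successes == 0:
        return 0.0
    return float(beta.ppf(alpha, successes, n - successes + 1))

# 4 of 23 partial clinical responses (~17%); 10 of 23 partial histological responses (~43%).
print(f"clinical: ({clopper_pearson_lower(4, 23):.2f}, 1.00)")      # (0.06, 1.00)
print(f"histological: ({clopper_pearson_lower(10, 23):.2f}, 1.00)") # (0.26, 1.00)

# Design check: one-sided exact test of p0 = 0.30 at n = 20, alpha = 0.05, alternative p1 = 0.60.
n, alpha, p0, p1 = 20, 0.05, 0.30, 0.60
k = next(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= alpha)
print(f"reject if responders >= {k}; power = {binom.sf(k - 1, n, p1):.2f}")  # ~0.87
```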
Exploratory analysis of the relationship between histological response and smoking status revealed a higher number of responses in current or former smokers versus never smokers (P = 0.016; Figure 4C). Notably, all participants who exhibited progressive disease were never smokers. IHC analysis of cell proliferation (nuclear staining for Ki67) showed that metformin induced a statistically significant decrease in cell proliferation in the squamous epithelium (Figure 5A). Aligned with our prior studies, most OPLs exhibited high levels of mTOR signaling, as judged by pS6 staining throughout the lesions, which was reduced significantly by metformin administration (Figure 5B). The expression levels of total S6 protein were not affected, as we previously reported in experimental systems (27–29). Qualitatively, the decrease in pS6 was most notable in the basal epithelial cells (Figure 5B, right panel). Exploratory analysis revealed a significant correlation between the decrease in basal pS6 staining and both the histological and clinical responses (Figure 5C). In contrast, we did not see any statistically significant changes in the expression levels of the OCT3 metformin transporter or in p53 and EGFR expression (P = 0.571, n = 20 and P = 0.615, n = 20, respectively). As an approach to identify genetic alterations predictive of a favorable response, we performed whole-exome sequencing of the pretreatment OPL biopsies and matching blood DNA. OPLs were small in size and fixed in formalin, making their molecular profiling more challenging. A total of 17 of 22 OPL specimens yielded limited but sufficient DNA to be processed (median 31.2 ng). All 17 samples were analyzed for somatic copy number alterations (CNAs), and 14 of them had more than 70% of their bases covered at 20x and were further analyzed for single-base substitutions. We identified a total of 1423 mutations, 16 of which were nonsilent, likely pathogenic, and affecting known cancer genes or genes involved in mTOR signaling. These affected 11 of 14 samples, and none of them were due to C-to-A substitutions, which could represent oxidation artifacts from FFPE (36). In particular, we identified known or likely pathogenic mutations in TP53 (n = 3), HRAS (n = 2), NOTCH1 (n = 2), CDKN2A (n = 1), PIK3CA (n = 1), or CASP8 (n = 1; Supplemental Table 1; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.147096DS1). This landscape is consistent with mutations identified in HPV− HNSCC (11). None of the mutated genes were significantly associated with treatment response. The 14 specimens with higher quality data had a median mutational burden of 1.43 single-base substitutions/Mb (a median of 51 exon substitutions). We also determined that a median of 6% of the genome was involved in CNAs, and samples with lower CNA burden were more likely to be from participants responding histologically to treatment (P = 0.01, Wilcoxon's test; Supplemental Figure 2). We identified chromosome arm-level CNAs in 9 of 17 samples, including recurrent copy number gains of 8q (n = 3), 8p (n = 2), or 9q (n = 2), loss of heterozygosity of 9p (n = 5) and 17p (n = 4), and 4 losses and 3 gains observed in individual samples, some of which have been previously described in OPLs (Supplemental Figure 2, left; ref. 37). None of the arm-level CNAs were significantly associated with response.
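The CNA-burden comparison between histological responders and nonresponders (P = 0.01, Wilcoxon's test) is a standard two-sample rank test. A minimal sketch follows; the per-sample burden values below are made up for illustration (the trial's actual values are in Supplemental Figure 2, not reproduced here).

```python
from scipy.stats import ranksums  # Wilcoxon rank-sum test

# Hypothetical fractions of the genome involved in CNAs per sample;
# illustrative values only, not the trial's data.
responders = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.02, 0.03, 0.01, 0.04]
nonresponders = [0.15, 0.22, 0.30, 0.08, 0.12, 0.25, 0.18]

stat, p = ranksums(responders, nonresponders)
print(f"Wilcoxon rank-sum: statistic = {stat:.2f}, P = {p:.4f}")
```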
However, 3 of the 4 individuals with progressive disease exhibited multiple CNAs, including participants 1 and 10, with extensive arm-level and foci-level loss, and participant 5, with only arm-level gain. All of these individuals were never smokers. Although numerically less common, the presence of multiple CNAs (loss and/or gain) at several loci appeared to be more frequent in individuals with progressive disease than in responders (3 of 4 vs. 2 of 10), consistent with the CNA burden analysis.

Discussion

Individuals harboring OPLs are at risk of developing HNSCC. Specifically, OPLs, such as hyperplasia and dysplasia, may undergo variable progression to malignancy over a period of years, thus requiring extended surveillance and possibly therapy (8). OPLs often present as leukoplakia or erythroplakia (white or red patches, respectively), with progression rates to cancer ranging from 11%–36% for leukoplakia to greater than 50% for erythroplakia (38, 39). OPLs are often not resectable due to anatomic location, multifocality, or involvement of broad areas of oral mucosal surfaces. In these cases, long-term surveillance for progression is frequently required for long-term clinical management. Furthermore, even individuals undergoing adequate surgical resection with negative margins have a relatively high rate of progression to HNSCC of 15%–40% (40). This reinforces the concept of epithelial field cancerization and the notion that the presence of OPLs represents a risk for malignant transformation across the oral mucosa due to occult clonal premalignant cells that may demonstrate normal histology (41). Thus, there is an urgency to identify new treatment modalities to intercept the conversion of OPLs into HNSCC. Here, we report the first phase II trial exploring the clinical, histological, and biological activity of metformin in individuals with OPLs. Although the primary endpoint of the clinical response rate (defined as a ≥50% reduction in lesion size) was lower than the proposed threshold for favorable clinical response (≥30%), secondary endpoints, including a 60% histological response rate, tolerability, and evidence of biological activity, encourage further investigation of metformin as a chemopreventive agent for HNSCC. There is growing enthusiasm for clinical trials using metformin for cancer prevention and treatment. However, there is a limited number of studies using metformin in prospective clinical studies for cancer prevention. The still poorly understood mechanisms of metformin's purported anticancer activity may also preclude the selection of the patient populations most likely to benefit from metformin treatment. The well-known antihyperglycemic effects of metformin may lower cancer risk by decreasing circulating insulin at the organismal level (42). This may account for the protective effects of metformin in patients with diabetes. However, metformin has also shown chemopreventive efficacy in nondiabetic individuals, where it reduced the prevalence and number of metachronous adenomas or polyps after polypectomy in a phase III randomized, placebo-controlled clinical trial (43). Specifically for HNSCC, metformin displays chemopreventive and antitumor effects in our HNSCC preclinical models, in which animals are not obese or insulin-resistant (27–29). Indeed, we have provided evidence that metformin acts on HNSCC-initiating cells directly, because its beneficial effects are dependent on the expression
of the metformin transporter OCT3 in HNSCC cells (28) and can be abolished by reverting the impact of metformin on HNSCC mitochondrial complex I (29). The latter approach revealed that in HNSCC, metformin decreases mTOR and AKT activity, activates AMPK, and reduces the expression of cancer stemness gene-expression programs, thereby reducing the proliferative capacity of the precancer cells and enhancing their commitment to terminal differentiation (29). Aligned with these experimental observations, we found a high level of OCT3 expression in OPLs in our current study, and we observed a histological response in 60% of individuals with OPLs after 3 months of treatment with metformin, concomitant with reduced cell proliferation. This included 17% complete pathological responses, without affecting circulating glucose or C-peptide levels and independently of the participants' BMI. Although the elucidation of the underlying mechanisms may require further investigation, our current findings suggest that metformin may have acted on OPL squamous cells directly, by inhibiting mTOR signaling, reducing cell proliferation, and enhancing cell differentiation toward a more benign or normal histology. In this regard, it is conceivable that longer treatment with metformin would be needed to increase clinical response rates, because this process requires the progressive remodeling of the OPL and its stroma, whereas reduced cell proliferation and histological changes may occur more rapidly. An unexpected preliminary observation from our study in unselected participants with OPLs is that metformin was more active in current and former smokers, with more histological responses in this particular subgroup. We also observed a numerically lower response rate in never smokers with high levels of CNAs, although these results are to be considered preliminary and hypothesis generating. The effects were independent of the specific gene mutations, albeit only a small number of OPLs yielded high-quality genomic information. Although the mechanistic rationale for these findings is at present unknown, it is notable that the preclinical activity of metformin in OPLs was first revealed in the 4NQO carcinogen-induced, oral-specific carcinogenesis model (27), which we have recently shown to exhibit a mutanome with 94% similarity to the human tobacco-induced carcinogen signature (29). This may have provided a very useful and clinically relevant experimental bias. Specifically, these observations, together with the results of our current clinical trial, raise the possibility that metformin may have been beneficial primarily in current and former smokers, which is precisely the patient population at the highest risk of HNSCC development (44). This provides a testable hypothesis, to be addressed by enriching for this high-risk group of participants in future clinical trials. Similarly, we also observed a numerically higher rate of high levels of CNAs in never smokers who progressed on metformin. Although these results are to be considered preliminary and hypothesis generating, CNAs have been associated with immunologically "cold" oral cancers, which could mediate metformin resistance.

Table 5. Treatment side effects related to agent intervention in all participants who initiated agent intervention (n = 26). AEs are listed by the number and percentage of participants affected and by the number and percentage at each severity grade (1–2).
A number of limitations to this study must be acknowledged. Notably, there was no randomized placebo control arm to control for the spontaneous rate of OPL clinical and histological regression. The study was powered for a clinical response rate of 30%, as seen in more extended previous OPL clinical trials; however, OPLs typically wax and wane in size, and thus a longer treatment may be necessary to definitively address clinical responses (45, 46). Indeed, the correlation between clinical lesion regression and the lack of cancer development is not well established, because in the setting of oral premalignancy some cancers can develop outside the target lesion (38). Nevertheless, the goal of this trial was to identify a signal of efficacy, which was realized via the greater-than-expected histological regression rate and the intriguing signal that metformin may be more effective in the setting of tobacco-related field injury (45, 46). The study's additional strengths include its prospective nature, with careful clinical follow-up and sequential biopsies, as well as the thorough tissue analysis of the impact of metformin treatment on previously untreated OPLs. Further study is needed to investigate the efficacy of metformin as a single agent or in rational combinations in a larger, randomized, placebo-controlled trial of longer duration. In conclusion, metformin is a well-characterized and widely used FDA-approved drug with a known safety profile. Premalignant lesions characterized by upregulated mTOR signaling, such as OPLs, may be uniquely vulnerable to metformin treatment. After only 3 months of treatment, 60% of participants treated with metformin had a partial or complete histological response. OPL tissues showed a reduction in Ki67 expression and phosphorylation of S6, with a marked correlation between the decrease in mTOR signaling in the basal layer of OPLs and both the clinical and histological responses. Overall, these results are encouraging and support further study of the potential chemopreventive activity of metformin for HNSCC prevention in selected individuals with OPLs.

Methods

Study design. The study was a phase IIa single-arm, open-label trial in individuals with oral leukoplakia or erythroplakia to explore the potential of metformin for oral cancer prevention (NCT02581137). The primary endpoint was to determine whether 12–14 weeks of metformin intervention is associated with a clinical response of OPLs. The secondary endpoints included histological response to metformin in the target lesion, pretreatment and posttreatment tissue-based biomarkers of molecular targets and dysregulated molecular mechanisms, modulation of circulating metabolic biomarkers, and serum and saliva metformin concentrations.

Study drug. The drug product was supplied to the study site by the Division of Cancer Prevention, NCI. The drug product was commercially available metformin hydrochloride extended-release tablets manufactured by Actavis. Each tablet contained 500 mg metformin hydrochloride as the active ingredient. Extended-release metformin was selected for this study to increase compliance and reduce GI side effects.

Study population. Study participants were at least 18 years of age, had oral leukoplakia or erythroplakia with mild, moderate, or severe histological dysplasia or hyperplasia not associated with mechanical factors, and had lesions of at least 8 × 3 mm before the initial biopsy.
Other inclusion criteria included Karnofsky performance status greater than or equal to 70%; normal liver, kidney, and bone marrow function; and ability to sign a written informed consent document. Exclusion criteria included the presence of diabetes treated with insulin or oral agents, HbA1c greater than 8%, history of diabetic ketoacidosis, uncontrolled intercurrent illness, oral carcinoma in situ, history of chronic alcohol use or abuse, acute or chronic liver disease, history of renal disease, history of prior HNSCC unless curatively treated 1 year or more prior, and use of chemotherapy and/or radiation for any malignancy (excluding nonmelanoma skin cancer and cancers confined to organs with removal as only treatment) in the past 2 years.

Study procedures. During the initial visit, participants underwent a brief physical exam and performance status evaluation. They were also evaluated for concomitant medications, medical history, baseline symptoms and signs, and tobacco and alcohol use. Participants underwent an oral exam for lesion measurement and photography. Bidimensional measurements for all lesions were recorded. All lesions that met the size criteria (≥8 × 3 mm) were considered target lesions for clinical response assessment. Lesions that did not meet the size criteria were also measured but recorded as nonmeasurable. A biopsy (4 mm) was performed on the lesion that met the size criteria or on the largest lesion, if multiple lesions met the size criteria. The lesions were generally sufficiently large such that the biopsy did not have much effect on lesion size. All biopsies were sent for local pathology evaluation, followed by central pathology review. Archival tissue was used for histological eligibility determination by a centralized pathology review and biomarker analysis if the preenrollment biopsy was performed within 6 weeks of initial screening. Each of the participating centers performed pathology reviews, and centralized consensus pathology review of the target lesion biopsies was performed at UCSD. A predefined process was developed and followed to resolve discrepancies between the local site and central pathology review. In case of disagreement between the local and central pathology review, the following algorithm was used: for a minor discrepancy (1 level of change), the in-house pathologist's evaluation was considered final; for a major discrepancy (2 levels or more of difference), referral was made for a third independent review (in-house at NCI), and the consensus evaluation of 2 pathologists was used as the final diagnosis; for disagreement among all 3 pathologists (at the local site, UCSD, and NCI), the 2 in-house pathologists discussed the case to come to a consensus. After eligibility was confirmed, participants were instructed to take metformin extended release 500 mg per day for 1 week, followed by 1000 mg per day for 1 week, and then 2000 mg per day for the remainder of the study period. Metformin dose escalation is a standard practice in patients with diabetes to minimize GI side effects and thus optimize adherence. An interim clinic safety visit occurred after 6 weeks of treatment. Participants also underwent an oral exam with lesion measurement and photography and returned after 12–14 weeks of agent intervention for postintervention evaluation.
During this visit, participants underwent safety and compliance assessments as well as an oral exam for lesion measurement, photography, and biopsy of the previously defined target lesion for histopathology and tissue markers. In general, a biopsy was performed on the residual and worst-appearing area of each lesion after treatment. If the target lesion was not visible after intervention, a biopsy was obtained at the site of the previous lesion biopsy. Biofluids were collected for research endpoints.

Criteria for clinical and histological response evaluation. Clinical response was evaluated by the following criteria (47): complete response (CR), disappearance of all evidence of lesion(s); partial response (PR), greater than or equal to 50% reduction in the sum of the products of diameters of lesion(s) measurable at baseline, with nonmeasurable lesion(s) not increasing by 25% or more in size and no new lesion appearing; no change (NC), no change in the size of the lesion(s) identified at baseline and no new lesions appearing, i.e., anything that is not CR, PR, or progressive disease (PD); and PD, any increase of 25% or more in the product of the diameters of any lesion(s) measurable at baseline or in the estimated size of lesion(s) nonmeasurable at baseline, or the appearance of an unequivocal new lesion. Histological response was evaluated by the following criteria: CR, complete reversal of dysplasia or hyperplasia to normal epithelium in the target lesion; PR, improvement of the degree of dysplasia or hyperplasia in the target lesion; NC, no change in the degree of dysplasia or hyperplasia in the target lesion, i.e., anything that is not CR, PR, or PD; and PD, increase in the severity of the histological grade in the target lesion.

Biomarker analysis. All tissues were fixed overnight in Z-fix (zinc-buffered formaldehyde, Anatech Ltd.), transferred to 70% ethanol, and processed for routine paraffin embedding. Sections (5 μm) were obtained and stained with H&E for imaging or immunoreacted using the ABC method (Vector Laboratories). A detailed description of the methods used can be found in Supplemental Methods. Quantification of slides stained for different biomarkers was performed using Aperio-Leica Scanscope-associated algorithms. For pS6 IHC, H scores were determined as the staining intensity (0, absent; 1, weak staining; 2, moderate staining; 3, strong staining) multiplied by the percentage of positive cells quantified. The percentage of cells staining positive for pS6 in the basal layers of OPLs and the percentage of cells positive for OCT3 were also determined. Ki67 quantification was performed using Aperio-Leica Scanscope-associated algorithms, and the percentage of positive cells was determined.

Analysis of serum and saliva metformin concentrations. Serum and saliva metformin concentrations were measured using high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS; ref. 48). Briefly, an aliquot of serum or saliva was mixed with the internal standard, phenformin. Cold acetonitrile was added for protein precipitation. The supernatant was injected onto the HPLC-MS system. The mass spectrometric analysis was performed using atmospheric pressure chemical ionization operated in the positive ion mode. The analytes were detected by multiple reaction monitoring. The assay was linear over the concentration range of 2–2000 ng/mL.
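Returning to the clinical response criteria above: they translate directly into a decision rule on the sum of products of diameters. The helper below is a hypothetical illustration (the function name and interface are ours, not the trial's), with thresholds taken verbatim from the criteria.

```python
def classify_clinical_response(baseline_mm2: float, post_mm2: float,
                               new_lesion: bool = False) -> str:
    """Classify the clinical response of a target lesion measurable at baseline,
    following the CR/PR/NC/PD criteria described above."""
    if new_lesion or post_mm2 >= 1.25 * baseline_mm2:
        return "PD"  # >=25% increase, or an unequivocal new lesion
    if post_mm2 == 0:
        return "CR"  # disappearance of all evidence of lesion(s)
    if post_mm2 <= 0.5 * baseline_mm2:
        return "PR"  # >=50% reduction in the sum of products of diameters
    return "NC"      # anything that is not CR, PR, or PD

print(classify_clinical_response(239.0, 100.0))  # PR (>=50% shrinkage)
```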
DNA extraction, quality control, sequencing, and copy number analysis. The DNA was extracted from FFPE tissue using the QIAamp DNA FFPE Tissue Kit (QIAGEN). The extracted DNA was quantified by fluorometry (HS dsDNA Kit, Qubit, Thermo Fisher Scientific). A detailed description of the methods used for sequencing and copy number analysis can be found in Supplemental Methods. Individual sequence information was deposited in the National Center for Biotechnology Information's database of Genotypes and Phenotypes, study phs002437.v1.p1: https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs002437.v1.p1.

Statistics. Participants' characteristics were summarized by mean ± SD for continuous variables and frequency (%) for categorical variables. The primary objective of this study was to determine whether 12 weeks of treatment with therapeutic doses of metformin was associated with a clinical response of the target OPL. Clinical response was evaluated after the 12-week treatment period and categorized as CR, PR, NC, or PD. A participant with CR or PR was considered a responder. A 1-sided, 1-sample binomial exact test at a significance level of 5% was performed to determine whether the overall response rate was greater than 30% (i.e., ≤30% was considered a poor treatment). A sample size of 20 achieved 87% power to detect a response rate 0.30 higher than that of a poor treatment (i.e., a response rate of 60%). With an anticipated attrition rate of 20%, 26 participants were accrued to ensure at least 20 evaluable participants. The secondary endpoints included histological response; pretreatment to posttreatment changes in Ki67, pS6, glucose, and C-peptide; and the effect of OCT3 expression level and genomic alterations on biomarker modulation by metformin treatment and on clinical or histological response to metformin. As for the primary endpoint, the histological response rate was calculated, and a 1-sided 95% CI based on the exact (Clopper-Pearson) method was derived. Nonparametric methods, e.g., the signed-rank test, were used to evaluate each of the biomarker changes. In addition, logistic regression with the clinical/histological response as the outcome variable was performed to explore whether smoking status, changes in pS6 expression, changes in OCT3 expression level, or any genomic alteration was associated with the clinical/histological response to metformin. Statistical analyses were performed using SAS 9.4. The nonparametric Wilcoxon matched-pairs signed-rank test was used to compare pretreatment and posttreatment values of IHC stainings using GraphPad Prism version 7.00 for Windows.

Study approval. The study was conducted at the UCSD Moores Cancer Center, UMN, and BCCA through the University of Arizona Chemoprevention Consortium and funded by the Division of Cancer Prevention, NCI. The study was approved by the NCI Central Institutional Review Board and the IRBs at each institution. Written informed consent was received from participants prior to inclusion in the study.

Author contributions

JSG and ES initiated, designed, and directed the research studies; analyzed the data; and wrote the manuscript. AAM performed the central pathology review and IHC studies, and SMH reviewed pathology and contributed to methods development. XW and ZW collected samples, performed biochemical analysis, and contributed to statistical analysis. DN, OH, and LBA conducted the sequencing and genetic analysis of OPLs.
BRW, FGO, DL, LDR, MR, and CC identified the participants; conducted the biopsies, treatment, and subject follow-up; and acquired the data. VDB and LB coordinated the trial at all sites. CHH conducted the statistical design and analysis and wrote the corresponding part of the manuscript. EEWC and SML contributed to the trial design and implementation. SML was the trial principal investigator. JEB and HHSC contributed to the trial design and implementation. HHSC had overall responsibility for the trial conduct and coordination and for the measurement of metformin concentrations.
Decomposing the notion of vine vigour with a proxy-detection shoot sensor: Physiocap®

The vigour and the vegetative expression of grapevines are parameters of great interest in viticulture, as they describe a general state of growth capacity. Understanding the impacts of agricultural practices on vine vigour, under particular soil and climate conditions, is essential to give more accurate technical advice, especially where soil management and vine nutrition are concerned. A shoot sensor called Physiocap®, designed and developed by the CIVC (Comité interprofessionnel du vin de Champagne), is used during the dormancy season to measure the shoot section, the shoot number and an estimation of the aboveground biomass. The sensor maps vigour spatial variability within a plot, among plots and over years. The Physiocap database in Champagne has been analysed since 2011 at different scales, in order to determine the factors impacting vine vigour. The vintage appeared to be the most impacting factor: for example, climate variability or accidents like dry springs and early spring frosts reduce vine vigour. Champagne vine varieties did not significantly impact vine vigour according to the database. At the scale of the Champagne vineyard, the aboveground biomass estimation of Physiocap® was strongly correlated with the yield of the following year, providing a promising basis for analysing the impact of different factors on vine vigour. At the scale of the plot, winegrowers are able to compare their plot vigour to a Champagne threshold, which is being refined every year as the Physiocap database is enriched. They can therefore manage their fertilization and soil tillage programme more accurately according to their objectives. The Physiocap® sensor appears to be an interesting multidimensional tool binding vine physiology, agronomy and precision viticulture at different scales. When coupled with other data, especially data describing soil characteristics, it could even be the baseline for creating a decision-aid tool in Champagne for fertilization, tillage and pruning practices.

Introduction

Maintaining a balance between grapevine vegetative growth and grape production is one of the most important goals in viticulture. This goal has become even more challenging with the physiological evolutions due to climate change, especially the drier springs in Champagne, as well as the shifting regulations. Grapevines exhibiting excessive vigour are likely to produce less fruit of reduced quality, and vines with inadequate vigour will have negative effects on their potential yield. Vigour management should be studied in a multifactorial way: it is the result of a set of factors such as climate and soil characteristics, which interact with the grapevine in a given system that is itself modified by practices. This system can be observed at different space and time scales in Champagne. That is exactly where precision viticulture finds its purpose: measuring vigour variability in space and time. In order to define the notion of vigour, three terms are commonly used, based on the perennial character of grapevine [1]:

- The vegetative expression is the result of the annual metabolic activity during which shoots, leaves and roots grow and stock carbohydrates. It is characterized by the number of shoots during dormancy.
- The vigour is the intensity of this metabolic annual activity, characterized by the shoot section.
- The vegetative potential, or potential reserve storage, is directly linked to both vegetative expression and vigour.
It is the result of the plant material, the previous vegetative cycles and the pedoclimatic conditions.

A shoot sensor called Physiocap®, designed and developed by the CIVC (Comité interprofessionnel du vin de Champagne), is used during the dormancy season. Since 2011, the Physiocap® has surveyed more than 700 plots in Champagne. Data are collected and analysed in order to understand vigour variability in space and time. The aim of this paper is to present the latest analysis of this Physiocap database at different time and space scales in Champagne: within a whole region, among plots, among vintages and inside a plot. This scale decomposition could be seen as a 'terroir' analysis.

Material and method

Physiocap® is an optical laser sensor mounted on an agricultural machine, used during winter just before pruning [2]. The sensor is composed of a laser micrometer, which consists of an emitter and a receiver positioned face to face, above the wired linkers. It also has a GPS receiver, enabling the measurements to be georeferenced. The sensor provides three key physiology variables of grapevine: shoot section, shoot number and an estimation of the aboveground biomass. The shoot section measurement is made by using the interruption section of the laser beam. It is instantaneous and does not depend upon the shoot's distance from the sensor nor the sensor's speed. The sensor therefore gives many measures of section each second. The second variable, the shoot number, is calculated from the speed given by the GPS and the shoot section measurements. Finally, the shoot biomass indicator per vine or per square meter is calculated from the shoot section and the shoot number, as well as a wood density fixed at 0.9 kg/dm³ in Champagne. A sketch of how such an indicator can be computed from the raw sensor outputs is given below.

Data have been analysed in order to qualitatively weigh the factors impacting vine vigour. These factors are either not modifiable, like climate or soil, or can be changed through agricultural practices. They all impact vine vigour, which in turn has an effect on various agronomical performances (Fig. 2).
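A minimal Python sketch of the biomass indicator follows. It assumes per-shoot section areas (mm²) and the row length covered (from GPS speed) as inputs; the shoot length used in the mass estimate is an assumption for illustration, not a sensor output, while the 0.9 kg/dm³ wood density is the value fixed in Champagne.

```python
import numpy as np

WOOD_DENSITY_KG_PER_DM3 = 0.9  # wood density fixed in Champagne

def shoot_biomass_indicator(sections_mm2: np.ndarray, row_length_m: float) -> float:
    """Total shoot cross-section per linear metre (mm²/lm), the indicator of Fig. 3."""
    return float(sections_mm2.sum() / row_length_m)

def biomass_estimate_kg_per_m(sections_mm2: np.ndarray, row_length_m: float,
                              shoot_length_m: float = 1.0) -> float:
    """Rough mass estimate; shoot_length_m is an assumed mean cane length."""
    volume_dm3 = sections_mm2.sum() * 1e-4 * (shoot_length_m * 10)  # mm²->dm², m->dm
    return WOOD_DENSITY_KG_PER_DM3 * volume_dm3 / row_length_m

# Hypothetical sections for 60 shoots detected over a 10 m row.
sections = np.random.default_rng(0).normal(38.0, 8.0, size=60)
print(f"{shoot_biomass_indicator(sections, 10.0):.0f} mm²/lm")
print(f"{biomass_estimate_kg_per_m(sections, 10.0):.3f} kg/m (assuming 1 m shoots)")
```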
Results

Comparing vintages gives important information on how climate characteristics impact vine vigour [4]. Each vintage can be compared to the mean value of Champagne's shoot biomass over six successive years (Fig. 3). At an inter-plot scale, comparing the vigour variability among plots enables winegrowers to position their plot in relation to a threshold and better understand the impact of the terroir on their vine vigour. When comparing different regions we observe a great variety of values. These differences may be due to the type of soil and the mesoclimate within a region, but also to the vineyard management strategies. In other words, this variability is linked to the terroir of each region. For example, cover crop strategies in the Montagne de Reims vineyard tend to be frequent, thus lowering shoot biomass. On the contrary, the Côte des Bar region was heavily impacted by frost in 2016 and 2017, which may have increased the shoot biomass measured at the end of 2017 through increased reserve accumulation [5].

Figure 4: Shoot biomass boxplots of two different regions of Champagne: Montagne de Reims and Côte des Bar in 2017.

The effect of vine material and training system is also significant at the whole Champagne vineyard scale since 2011. Chardonnay appears to be the variety which increases shoot biomass the most. Age also impacts vigour: it seems that beyond 40 years of age, we observe a drop in the number of shoots (mean value of 5.9 compared to 6.8 shoots). Furthermore, the 'guyot' training system increases shoot biomass compared to a Chablis or a Cordon de Royat training system (Fig. 5). This could be due to an increasing shoot section as a consequence of the decreasing number of shoots. At the Champagne scale, the shoot biomass of year n is correlated with the yield of year n+1 (Fig. 6); this correlation corresponds to a leaf/grape ratio balance within the vine. At a more local scale, this correlation is not as good, due to climate events such as frost or hail. These events disrupt the leaf/grape ratio the following year.

Among the factors impacting vine vigour, the most important one is climate, and more specifically annual rainfall. Vine material and training systems impact vigour at the region scale and on a pluriannual time scale, while at the plot scale it is rather fertilization and soil tillage that can have an effect on vigour. These practices can therefore be the levers to be used for vigour management. Yield potential can be predicted using the biomass indicator defined by the sensor at a large space scale, as sketched below. Nevertheless, it is essential to take climate events into consideration when analysing this type of correlation at a local space scale. This is one of the first agronomical analyses of the Physiocap data, which is being enlarged every year. On the basis of these preliminary results, Physiocap, coupled with other tools and indicators such as soil analysis, soil tillage and fertilization strategies as well as accurate climate data, seems to be a central component of multidimensional and innovative vine management.

Figure 3: Boxplot of the shoot biomass in mm² per linear meter measured by the Physiocap sensor since 2011, and number of measured plots.
Figure 4: Link between annual rainfall and shoot biomass over 6 successive years.
Figure 6: Correlation between the shoot biomass (mm²/lm) of year n and the yield of year n+1 over 6 successive years.
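The Fig. 6 relationship (shoot biomass of year n against yield of year n+1) amounts to a simple regression over six vintages. A sketch with illustrative values follows; the actual Champagne-scale averages are not reproduced here, so both series below are hypothetical.

```python
from scipy.stats import linregress

# Hypothetical Champagne-scale averages over six successive years:
# shoot biomass (mm²/lm) in year n, yield index (arbitrary units) in year n+1.
biomass_year_n = [230, 260, 310, 285, 350, 325]
yield_year_n1 = [9.5, 10.2, 12.1, 11.0, 13.4, 12.6]

fit = linregress(biomass_year_n, yield_year_n1)
print(f"r = {fit.rvalue:.2f}; yield ~= {fit.slope:.3f} * biomass + {fit.intercept:.2f}")
```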
$\ast$-SDYM Fields and Heavenly Spaces. I. $\ast$-SDYM equations as an integrable system

It is shown that the self-dual Yang-Mills (SDYM) equations for the $\ast$-bracket Lie algebra on a heavenly space can be reduced to one equation (the master equation). Two hierarchies of conservation laws for this equation are constructed. Then the twistor transform and a solution to the Riemann-Hilbert problem are given.

Introduction

It turns out that many nonlinear integrable systems are reductions of SDYM equations (e.g. see Mason and Woodhouse 1996). The statement that all integrable systems of mathematical physics are some reductions of SDYM equations is known as Ward's conjecture (Ward 1985). The twistor construction for the SDYM system is in a sense inherited by the reduced system. There are, however, exceptional cases which do not fit into this scheme for a finite-dimensional structure group (Mason and Woodhouse 1996). An extension of the Lie algebra of SDYM equations to the infinite-dimensional algebra of Hamiltonian vector fields provides a description of heavenly spaces of complex general relativity (Mason 1989). The nonlinear graviton construction (Penrose 1976, Penrose and Ward 1980, Mason and Woodhouse 1996) proves that the heavenly equations constitute an integrable system. Thus the idea arose that H-space might be a universal integrable system (Mason 1990). However, the reduction of the algebra of Hamiltonian vector fields over a symplectic manifold $\Sigma^2$, $sdiff(\Sigma^2)$, to finite-dimensional algebras such as $su(N)$ does not exist for $N > 2$. Consequently, it seems that Ward's conjecture should be extended to algebras that include all finite-dimensional Lie algebras $sl(N, \mathbb{C})$ as well as the algebra of Hamiltonian vector fields. This is the point where deformation quantization enters the theory of integrable systems.

The idea of deformation quantization, introduced by Bayen, Flato, Fronsdal, Lichnerowicz and Sternheimer in (Bayen et al. 1978), is to consider a deformed algebra of smooth functions on a classical phase space. The associative $*$-product of two functions $f, g$ is a formal power series in the deformation parameter $\hbar$,
$$ f * g = \sum_{k \geq 0} \hbar^k\, \Delta_k(f, g). $$
The $*$-product is assumed to satisfy the following axioms:
• it is local, i.e. $\Delta_k(f, g)$ depends only on $f$, $g$ and partial derivatives of $f$, $g$ of rank not greater than $k$,
• it is a deformation of the Poisson algebra, i.e. $\Delta_1(f, g) - \Delta_1(g, f) = i\{f, g\}_{Poisson}$.
One can prove that such a product exists on any symplectic (De Wilde and Lecomte 1983, Fedosov 1994) or even Poisson manifold (Kontsevich 1997). It seems to be useful to consider integrable (quantum) deformations of integrable systems (Kupershmidt 1990, Strachan 1992, Takasaki 1994). It is so because the Moyal bracket algebra can be reduced to all $su(N)$ algebras (Fairlie et al. 1990). In a natural way the Poisson algebra is embedded in the deformed algebra. This suggests that SDYM equations for the $*$-bracket Lie algebra ($*$-SDYM equations) are reducible to $su(N)$-SDYM equations as well as to heavenly equations. This is the problem which we intend to consider in the present and the next papers. The present paper is devoted mainly to the formal problem of integrability of $*$-SDYM equations. In order to make our results more general we deal with an arbitrary $*$-product, and the Yang-Mills fields are defined on a 4-dimensional heavenly space.
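For concreteness, the simplest $*$-product satisfying the two axioms above is the Moyal product on $\mathbb{R}^2$ with canonical coordinates $(x, p)$, the case whose bracket algebra reduces to the $su(N)$ algebras (Fairlie et al. 1990). In one common normalization (included here only as an illustration; the conventions below are not necessarily those used later in this paper),
$$ f * g \;=\; f\, \exp\!\Big(\tfrac{i\hbar}{2}\big(\overleftarrow{\partial}_{x}\overrightarrow{\partial}_{p} - \overleftarrow{\partial}_{p}\overrightarrow{\partial}_{x}\big)\Big)\, g \;=\; fg + \tfrac{i\hbar}{2}\{f, g\}_{Poisson} + O(\hbar^2), $$
so that $\Delta_0(f, g) = fg$, $\Delta_1(f, g) = \tfrac{i}{2}\{f, g\}_{Poisson}$, and indeed $\Delta_1(f, g) - \Delta_1(g, f) = i\{f, g\}_{Poisson}$, while each $\Delta_k$ is a bidifferential operator of order $k$ in each argument.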
It turns out that to have the formalism sufficiently general one needs to deal with formal power series containing all negative powers of the deformation parameter $\hbar$, in particular with power series of the form $\exp[\frac{1}{i\hbar} A]$. As is well known (Fedosov 1996), such power series are well defined only for some special $A$. To ensure the existence of the exponent for a wide class of $A$ we introduce a new formal parameter $t$ (the convergence parameter). It is obvious that in applications only such series will be used that are convergent with respect to the parameter $t$.

Our paper is organized as follows. In section 1 we give some fundamental definitions and properties of formal power series. Then we obtain a group $e^{\mathcal{Q}}$ of formal power series suitable for the construction of the respective gauge theory. The $*$-SDYM equations on a Kahler manifold, in the case of a heavenly space, are reduced to one equation called the master equation (ME) (1.16). In section 2 we find two collections of conserved charges, (2.4) and (2.7). As is pointed out, the collection (2.7) is characteristic of any SDYM system, and (2.4) is a generalization of hidden symmetries of heavenly or SDYM equations. We obtain two Lax pairs and the forward Penrose-Ward transform for ME. The dressing operator connecting those two pairs and, finally, the algebra of hidden symmetries are given. Section 3 is devoted to the solution of the Riemann-Hilbert problem. We define the homogeneous Hilbert problem and we show the existence of the solution of this problem for the formal power series group $e^{\mathcal{Q}}$ (Birkhoff's factorization theorem). Then the inverse Penrose-Ward transform is considered. Concluding remarks (section 4) close the paper. Some applications of the master equation (ME) in the theory of integrable systems and complex relativity will be presented in the forthcoming paper. In that paper a sequence of $su(N)$ chiral fields tending to the heavenly space for $N \to \infty$ is constructed. It is also shown that any analytic solution of the $su(N)$ SDYM equations can be obtained from some solution of the $*$-SDYM equations.

At the beginning of this section we briefly summarize the basic definitions and theorems concerning formal power series. We define the algebra, the group and the adjoint action. For more details see (MacLane 1939, Neumann 1949, Jacobson 1980, Ruiz 1993).

An ordered abelian group is a pair $((G,+), P)$, where $(G,+)$ is an abelian group and $P$ is a subset of $G$ such that:
• $0 \notin P$, $P \cap (-P) = \emptyset$ ($0$ is the neutral element of $(G,+)$),
• $\forall g, h \in P$, $g + h \in P$,
• $G = -P \cup \{0\} \cup P$.
We call $P$ the subset of positive elements. It allows one to order the elements of the group, i.e. if $g, h \in G$ we say that $g$ is less than $h$, and denote $g < h$, if and only if $h - g \in P$.

Let $((G,+), P)$ be an ordered group and $K$ a vector space. A formal power series over $G$ with coefficients in $K$ is a map $a : G \to K$ such that its support $\mathrm{supp}\, a = \{g \in G : a(g) \neq 0\}$ has a least element. The formal power series $a$ will be written in the form
$$ a = \sum_{g \in G} a_g\, \hbar^g, $$
where $a_g = a(g)$ and $\hbar$ is the formal parameter. The set of all formal power series over $G$ with coefficients in $K$ will be denoted by $K((\hbar^G))$. It is a vector space over the complex field, with addition and multiplication by a scalar defined pointwise,
$$ (a + b)_g = a_g + b_g, \qquad (\lambda a)_g = \lambda\, a_g. $$
Moreover, if the pair $(K, \circ)$ is an algebra then the multiplication of series is
$$ (a \cdot b)_g = \sum_{g_1 + g_2 = g} a_{g_1} \circ b_{g_2}. $$
This multiplication is well defined: as the support of each series has a least element, for every $g \in G$ the number of pairs $(g_1, g_2)$ with $g_1 + g_2 = g$, $a_{g_1} \neq 0$ and $b_{g_2} \neq 0$ is finite. In the case when $(G,+)$ is the group $(\mathbb{Z},+)$ we will write $K((\hbar))$; the subalgebra of series involving only non-negative powers is denoted $K[[\hbar]]$. According to Fedosov's works (Fedosov 1994, Fedosov 1996), a $*$-product is defined on $\mathcal{O}(\Sigma^{2n})[[\hbar]]$.
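As a quick illustration of the convolution formula in the case $G = \mathbb{Z}$ (the only case used below), take a series with a single negative power, $a = \hbar^{-1} + \sum_{n \geq 0} a_n \hbar^n$, and any $b = \sum_{m \geq 0} b_m \hbar^m$. The coefficient of $a \cdot b$ at each fixed power $\hbar^g$ is a finite sum,
$$ (a \cdot b)_g \;=\; b_{g+1} \;+\; \sum_{\substack{n + m = g \\ n, m \geq 0}} a_n \circ b_m , $$
with at most $g + 2$ terms, and $\mathrm{supp}(a \cdot b)$ is again bounded below (here by $-1$), so the product stays in $K((\hbar))$.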
The $*$-product considered is a closed $*$-product, i.e. the trace
$$ \mathrm{tr}(f * g) := \int_{\Sigma^{2n}} \frac{\omega^n}{n!}\, f * g $$
has the property $\mathrm{tr}(f * g) = \mathrm{tr}(g * f)$ (Connes et al 1992, Omori et al 1992, Fedosov 1996). We can define the Lie algebra with the $*$-bracket
$$ \{f, g\} := \frac{1}{i\hbar}\,(f * g - g * f). $$

Our aim is to construct a gauge theory. The fundamental object is the gauge group. A group element appears as an exponent of an element of the Lie algebra. In the finite-dimensional case the exponent of a left-invariant vector field is the maximal integral curve, a 1-parameter subgroup of the Lie group. In the case of the $*$-algebra, taking an exponent is possible only for some special vectors (compare Fedosov 1996, Asakawa and Kishimoto 2000). Such a group would not be general enough to define gauge transformations. In order to make superpositions of formal power series well defined, we need to introduce another parameter. So we consider formal power series over $(\mathbb{Z},+)$ in $t$ with coefficients in the space $\mathcal{O}(\Sigma^{2n})((\hbar))$. In the space $\mathcal{O}(\Sigma^{2n})((\hbar, t))$ of all such series we consider the subspace $\mathcal{A}$ of series involving only non-negative powers of $t$,
$$ \mathcal{A} \ni A = \sum_{m \geq 0} A_m\, t^m, \qquad A_m \in \mathcal{O}(\Sigma^{2n})((\hbar)). $$
The star product $*$ defined on $\mathcal{O}(\Sigma^{2n})[[\hbar]]$ can be extended to $\mathcal{A}$ (we use the same symbol). The algebra $(\mathcal{A}, *)$ is called the formal $*$-algebra. Let $A \in \mathcal{A}$. The element $A(0)$, i.e. the one which stands at $t^0$, will be denoted $\phi(A)$ and called the free element.

The family $N$ of formal power series belonging to a formal $*$-algebra $\mathcal{A}$, $N = \{A_\delta \in \mathcal{A},\ \delta \in \Omega\}$, is called t-locally finite if for each natural $m$ the number of formal power series of this family having a non-zero element at $t^m$ is finite. Then for each t-locally finite family and any family of complex numbers $\{a_\delta \in \mathbb{C},\ \delta \in \Omega\}$ the sum $\sum_{\delta \in \Omega} a_\delta A_\delta$ is well defined.
• Let $A \in \mathcal{A}$ with free element $\phi(A) = 0$. Then the family $\{A^n,\ n = 1, 2, \ldots\}$, where $A^n \equiv A * A^{n-1} = A^{n-1} * A$, is t-locally finite.
• Let $f(z) = \sum_{n=0}^{\infty} a_n z^n$ be a complex power series of one variable and $A \in \mathcal{A}$ with $\phi(A) = 0$. We define $f(A) = \sum_{n=0}^{\infty} a_n A^n$ (where $A^n$ is as above).
• If $f_1(z) = \sum_{n=0}^{\infty} a_n z^n$ and $f_2(z) = \sum_{m=0}^{\infty} b_m z^m$ are two formal power series and $A \in \mathcal{A}$ with $\phi(A) = 0$, then $f_1(A) * f_2(A) = (f_1 f_2)(A)$. This follows from the fact that $\mathcal{A}$ is an algebra with $A^n * A^m = A^m * A^n = A^{n+m}$, and the multiplication of the coefficients $a_n$, $b_m$ is commutative.
• If $f(0) \neq 0$ then $f(A)$ is invertible: writing $f(A) = f(0)\,(1 + f_2(A))$ with $f_2 := f/f(0) - 1$, $f_2(0) = 0$, one has $[f(A)]^{-1} = f(0)^{-1} \sum_{n \geq 0} (-1)^n\, [f_2(A)]^n$. This follows from the fact that the family $\{[f_2(A)]^n,\ n = 0, 1, 2, \ldots\}$ is t-locally finite.

In what follows $\mathcal{Q}$ is the subalgebra of formal power series with free element equal to zero, $\mathcal{Q} := \{A \in \mathcal{A} : \phi(A) = 0\}$, and
$$ e^{\mathcal{Q}} := \{\exp(A) : A \in \mathcal{Q}\}, \qquad \exp(A) := \sum_{n \geq 0} \frac{A^n}{n!}. \tag{1.1} $$
It is worth noting here that we do not consider a differential structure on the group $e^{\mathcal{Q}}$, so this is not a Lie group. Apart from that, for $A, B \in \mathcal{Q}$ one has $\exp(A) * \exp(B) = \exp(C)$ for some $C \in \mathcal{Q}$. This justifies our notation. From Corollary 1.1, $\forall a \in e^{\mathcal{Q}}$ there exists $\tilde{A} \in \mathcal{Q}$ such that $a = \exp(\tilde{A})$. For traditional reasons we will write $a^{-1} = \exp(-\tilde{A})$.

For our purpose it is important to consider the following left actions of $e^{\mathcal{Q}}$ on the algebra $\mathcal{A}$,
$$ a : f \mapsto a * f, \tag{1.2} $$
$$ a : f \mapsto f * a^{-1}, \tag{1.3} $$
and the adjoint representation
$$ \mathrm{Ad}(a) : f \mapsto a * f * a^{-1}. \tag{1.4} $$
According to (1.1) the adjoint representation can be written in the following form,
$$ \mathrm{Ad}(\exp \tilde{A})\, f = \sum_{n \geq 0} \frac{1}{n!}\, [\tilde{A}, [\tilde{A}, \ldots, [\tilde{A}, f] \ldots]] \quad (n\ \text{nested brackets}), \qquad [X, Y] := X * Y - Y * X. $$
None of these actions changes the free element. See Asakawa and Kishimoto (2000).

Bundle of formal $*$-algebras. Let us consider a four-dimensional complexified Kahler manifold $\mathcal{M}$.‡ Let $P(\mathcal{M}, e^{\mathcal{Q}})$ denote a trivial principal bundle $\pi_P : P \to \mathcal{M}$. In the total space $P$ the structure group acts to the right in each fibre, $P \times e^{\mathcal{Q}} \ni (u, c) \mapsto u * c \in P$.

‡ The coordinates on $\mathcal{M}$ are denoted by $(w, z, \bar{w}, \bar{z})$; we also use the abbreviations $z^\alpha = \{w, z\}$, $z^{\bar\alpha} = \{\bar{w}, \bar{z}\}$ and $z^i = \{w, z, \bar{w}, \bar{z}\}$. $\mathcal{M}$ is a hermitian manifold, i.e. it is equipped with a holomorphic nondegenerate metric $ds^2 = 2 g_{\alpha\bar\beta}\, dz^\alpha \otimes_s dz^{\bar\beta}$. This reduces the allowed transformations to the ones which preserve the foliation $\bar{w} = \mathrm{const}$, $\bar{z} = \mathrm{const}$, as well as $w = \mathrm{const}$, $z = \mathrm{const}$.
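The role of the convergence parameter $t$ can be made explicit by a minimal check in the conventions just introduced. For $A \in \mathcal{Q}$ one has $A = \sum_{m \geq 1} A_m t^m$, hence $A^n \in t^n \mathcal{A}$, and the coefficient of $t^M$ in the exponential series receives contributions only from $n \leq M$:
$$ \big(\exp A\big)_M = \sum_{n=0}^{M} \frac{1}{n!}\, \big(A^n\big)_M , $$
a finite sum in $\mathcal{O}(\Sigma^{2n})((\hbar))$, no matter how negative the powers of $\hbar$ occurring in $A$ are. In particular $\exp\big[\frac{t}{i\hbar} A\big]$ is well defined for any $A \in \mathcal{O}(\Sigma^{2n})((\hbar))$, which is how the series $\exp\big[\frac{1}{i\hbar} A\big]$, problematic in the $\hbar$-grading alone, are accommodated.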
The global sections then correspond to maps $M \to e^{\mathcal{Q}}$. Each representation of the group $e^{\mathcal{Q}}$ in the algebra $\mathcal{A}$ defined by (1.2), (1.3), (1.4) allows one to define an associated bundle with typical fibre $\mathcal{A}$. In the Cartesian product $P \times \mathcal{A}$ we introduce the equivalence relations induced by these actions; the corresponding quotients define the bundle $E$, the bundle $E'$, and the adjoint bundle $\operatorname{adj}(E) := P \times \mathcal{A} / \sim_{\operatorname{Ad}}$. Each 1-form $\sigma \in \Lambda^1 M$ can be decomposed into a sum $\sigma = \sigma^{(1,0)} + \sigma^{(0,1)}$, where $\sigma^{(1,0)} = \sigma_w dw + \sigma_z dz$ and $\sigma^{(0,1)} = \sigma_{\bar{w}} d\bar{w} + \sigma_{\bar{z}} d\bar{z}$. Analogously, the exterior derivative $d$ is a sum of two Dolbeault operators, $d = \partial + \bar{\partial}$, where $\partial = dz^\alpha \partial_\alpha$ and $\bar{\partial} = dz^{\bar{\alpha}} \partial_{\bar{\alpha}}$. The Kähler form is a 2-form of type $(1,1)$, given by $\Omega = g_{\alpha\bar{\beta}}\, dz^\alpha \wedge dz^{\bar{\beta}}$. For a complexified Kähler manifold the Kähler form is closed, $d\Omega = 0$. Locally this means that $\Omega = \partial\bar{\partial}K$ for some complex function $K = K(w, z, \bar{w}, \bar{z})$ called the Kähler potential. The Kähler form gives rise to the volume element $\nu = \frac{1}{2!}\,\Omega \wedge \Omega$. For each section $\sigma : M \to P$ of the principal bundle, a section $f : M \to E$ of the associated bundle induces a map $f_\sigma : M \to \mathcal{A}$ such that the corresponding diagram is commutative. It is called the representation of the section $f : M \to E$ with respect to the section $\sigma : M \to P$. Higher exterior powers of the bundle $E$, with the exterior product defined in terms of representations, allow one to define the bracket of forms. A connection $D$ in the bundle $E$ is locally determined by $A_\sigma$, a local 1-form with values in $\mathcal{Q}$, the Lie algebra of the group $e^{\mathcal{Q}}$, acting on $f_\sigma : M \to \mathcal{A}$, the representation of the section $f$. • Any connection $D$ in the bundle $E$ induces connections in the bundles $E'$ and $\operatorname{adj}(E)$, respectively (we will use the same symbol $D$, as in each case the connection is defined by the same 1-form $A_\sigma$). The transformation law (1.7) can be rewritten for each of these representations. In what follows we will omit the superscripts $\sigma$, $\rho$ denoting sections of the principal bundle. For different representations we will use the more common symbols $'$, $''$, etc. The connection in $E$ may be extended, in a natural way, to $E^k$. We denote the exterior covariant differentiation by the same symbol $D$. For each $\omega \in \operatorname{Sec}(E^k)$ we have $D^2 \omega = F \wedge \omega$.

Self-dual Yang-Mills equations. The 2-forms $dw \wedge dz$, $d\bar{w} \wedge d\bar{z}$, together with the Kähler form $\Omega$, constitute a basis of anti-self-dual forms. The self-dual Yang-Mills (SDYM) equations read $F^{(2,0)} = 0$, $F^{(0,2)} = 0$, $g^{\alpha\bar{\beta}} F_{\alpha\bar{\beta}} = 0$. The first two of the above equations can be interpreted as integrability conditions, guaranteeing the local existence of $a$ and $b$ such that $A^{(1,0)} = a^{-1} * \partial a$ and $A^{(0,1)} = b^{-1} * \bar{\partial} b$. Then the third equation takes a form which, after the substitution $J := b * a^{-1}$, becomes Yang's equation (Yang 1977, Parkes 1992), equation (1.11), where $\partial$, $\bar{\partial}$ are the Dolbeault operators and $K$ is the Kähler potential, i.e. $\Omega = \partial\bar{\partial}K$. This equation arises from a minimum action principle for $S = \frac{1}{2\kappa} \int \omega^n\, K\, F \wedge F$, where $F$ is the curvature form of the connection $A = i\hbar\, J^{-1} * \partial J$, and $\omega$ is the symplectic form on $\Sigma^{2n}$ (Donaldson 1985, Nair and Schiff 1990, Mason and Woodhouse 1996). In our considerations we will work in the so-called K-formalism of Newman (Newman 1978, Leznov 1988, Parkes 1992, Plebański and Przanowski 1996, Mason and Woodhouse 1996). Choosing the gauge such that $A = i\hbar\, J^{-1} * \partial J$, the SDYM equations reduce to Yang's equation (1.11). It can be rewritten in the form (1.12), where $\nabla_\alpha$ is the covariant derivative with respect to the Levi-Civita connection§ on $M$. Equation (1.12) is equivalent to the existence of $\Xi$ such that (1.13) holds, where $\epsilon^{\alpha\beta}$ is a tensor density in the $(w, z)$ variables, defined in each coordinate neighbourhood by $(\epsilon_{\alpha\beta}) := \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} =: (\epsilon^{\alpha\beta})$. Analogously, $\epsilon^{\bar{\alpha}\bar{\beta}}$ is a tensor density in the $(\bar{w}, \bar{z})$ variables, $(\epsilon_{\bar{\alpha}\bar{\beta}}) := \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} =: (\epsilon^{\bar{\alpha}\bar{\beta}})$.
The definition (1.13) of $\Xi$ implies that under the change of variables $\bar{w}' = \bar{w}'(\bar{w}, \bar{z})$, $\bar{z}' = \bar{z}'(\bar{w}, \bar{z})$, $\Xi$ transforms according to $\Xi' = \frac{\partial(\bar{w}, \bar{z})}{\partial(\bar{w}', \bar{z}')}\,\Xi$, i.e. $\Xi$ is in these variables a scalar density. § On a Kählerian manifold the Hermitian connection is at the same time the Levi-Civita connection, $\Gamma^{\alpha}_{\beta\gamma} = g^{\alpha\bar{\sigma}}\,\partial_{\gamma} g_{\beta\bar{\sigma}}$, $\Gamma^{\bar{\alpha}}_{\bar{\beta}\bar{\gamma}} = g^{\bar{\alpha}\sigma}\,\partial_{\bar{\gamma}} g_{\sigma\bar{\beta}}$. This implies that the only non-zero coefficients of the curvature tensor (apart from those obtained by symmetry operations) are the mixed ones. The Ricci tensor is $R_{\alpha\bar{\beta}} = g^{\bar{\sigma}\gamma} R_{\gamma\bar{\beta}\alpha\bar{\sigma}} = g^{\bar{\sigma}\gamma} R_{\alpha\bar{\beta}\gamma\bar{\sigma}} = (\ln g)_{,\alpha\bar{\beta}}$, and the Ricci scalar $R = g^{\bar{\beta}\alpha} R_{\alpha\bar{\beta}}$. The Weyl tensor of conformal curvature is built from the curvature, the Ricci tensor and terms of the type $R\, g_{\beta\bar{\delta}}\, g_{\gamma\bar{\alpha}}$. The manifold $(M, ds^2)$ is called a weak heaven (Plebański 1975) or right conformally flat (Ko et al. 1981) if the 2-forms $C_{\alpha\beta} := \frac{1}{2} C_{\alpha\beta\gamma\bar{\delta}}\, dz^{\gamma} \wedge dz^{\bar{\delta}}$ are self-dual; for this to be true the conditions $C_{\alpha\beta} \wedge \Sigma = 0$ should be satisfied for the anti-self-dual 2-forms $\Sigma$. In this case the covariant derivative $\nabla_{\bar{\alpha}}$ acts on densities according to the rule $\nabla_{\bar{\alpha}}\Xi = \partial_{\bar{\alpha}}\Xi - (\ln g)_{,\bar{\alpha}}\,\Xi$, while $\nabla_{\alpha}\Xi = \partial_{\alpha}\Xi$. Inserting $A_{\bar{\alpha}}$ given by (1.13) into (1.9), one gets equation (1.14). For the first time this equation was proposed by Q-H. Park (1992) for a Poisson algebra. In the case of the Moyal algebra and a flat base manifold it was considered by Plebański and Przanowski (1996) and Przanowski and Formański (1999). The linearized version of equation (1.14) is obtained in the standard way and will be used below. Let us consider a special case of the base manifold $M$, i.e. the heavenly space. For a Kählerian manifold to be a heavenly space it is necessary and sufficient that the Ricci tensor $R_{\alpha\bar{\beta}}$ vanish¶, i.e. $(\ln g)_{,\alpha\bar{\beta}} = 0$. Thus the determinant $g = G(w, z)\,\bar{G}(\bar{w}, \bar{z})$, and, rewriting (1.13) once again, we can define $\Theta := \Xi / \bar{G}$, which is now a scalar function. With the help of this function the master equation (1.14) takes the form (1.16); it will be called the master equation (ME). This equation arises from a minimum action principle for the action (1.17), where $\tilde{\epsilon} := \frac{1}{2}\,\epsilon_{\bar{\alpha}\bar{\beta}}\, dx^{\bar{\alpha}} \wedge dx^{\bar{\beta}}$ and $\Omega = g_{\alpha\bar{\beta}}\, dz^{\alpha} \wedge dz^{\bar{\beta}}$ is the Kähler form. (Note that our notation is covariant and we do not work in any special coordinates, like for example Plebański's coordinates (Plebański 1975), in which $g = 1$ (Parkes 1992).) The action (1.17) generalizes the actions given by Boyer and Plebański (1985), Leznov (1988), Parkes (1992), and Plebański and Przanowski (1996).

Conservation laws and twistor construction. In this section we construct two hierarchies of conservation laws for ME on the heavenly background (1.16). It is known from the previous section that in the algebra $\mathcal{A}$ there exist both the $*$-product and the bracket $\{\cdot, \cdot\}$. Hence one can expect the existence of two different linear systems for ME. ¶ Compare footnote §. Hierarchy of hidden symmetries of ME. Let $W$ denote the space of solutions to the master equation (ME) (1.16), and $TW$ the space of solutions to the linearized master equation (LME) (2.1). Define two operators $L^{\alpha}$ acting on functions on $M$ with values in $\mathcal{A}$. Their commutator can easily be found, and the operators $L^{\alpha}$ commute iff $\Theta$ satisfies ME (1.16). Eq. (2.1), written in terms of these operators, takes the divergence form $\nabla_{\alpha}(L^{\alpha}\phi) = 0$. As $\phi^{(0)}$ solves the LME, the current $J^{\alpha}_{(1)} := L^{\alpha}\phi^{(0)}$ fulfills the conservation law $\nabla_{\alpha} J^{\alpha}_{(1)} = 0$. This conservation law can be written in the form $\partial_{\alpha}(G J^{\alpha}_{(1)}) = 0$, which implies the existence of a scalar function $\phi^{(1)}$ such that the system (2.4) holds. This function gives rise to the next current, $J^{\alpha}_{(2)} := L^{\alpha}\phi^{(1)}$, whose divergence also vanishes, i.e. $\nabla_{\alpha} J^{\alpha}_{(2)} = 0$. Note that the above equality also says that $\phi^{(1)} \in TW$. One can repeat the above construction starting from $\phi^{(1)}$. We are led to an iterative procedure.
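Schematically, the bookkeeping of this iteration can be summarized as follows (only a compact restatement of the steps just described; the solvability at each stage is the same Poincaré-lemma argument used for $\phi^{(1)}$):

```latex
\phi^{(0)} \in TW
\;\xrightarrow{\;J^{\alpha}_{(1)} := L^{\alpha}\phi^{(0)}\;}
\phi^{(1)} \in TW
\;\xrightarrow{\;J^{\alpha}_{(2)} := L^{\alpha}\phi^{(1)}\;}
\phi^{(2)} \in TW
\;\longrightarrow\;\cdots ,
\qquad
\partial_{\alpha}\!\big(G\,J^{\alpha}_{(n+1)}\big)=0 .
```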
Given the $n$-th conserved charge $\phi^{(n)}$, one constructs the $(n+1)$-th current $J^{\alpha}_{(n+1)} := L^{\alpha}\phi^{(n)}$ and then one solves the corresponding system for $\phi^{(n+1)}$. Such a solution is an element of $TW$ and it defines a divergence-free current. Remarks. • In this way we define an integro-differential recursion operator $\phi^{(n+1)} = \mathcal{R}\phi^{(n)}$, depending additionally on the boundary conditions imposed. This operator is invertible. Second collection of conserved charges. The existence in each fibre of the bundle $E$ of the $*$-product allows us to construct another set of operators, where $\Theta$ is a solution of ME (1.16). For any function with values in $\mathcal{A}$ we obtain a conserved current. In the same way as in the previous case, the conserved current $j^{i}_{(1)}$ defines a function $\eta^{(1)}$ by the system of equations (2.7). By ME, this function again satisfies the conservation law. Remarks. • As in the case of hidden symmetries, the system (2.7) defines a recursion operator $\eta^{(n+1)} = \mathcal{R}\eta^{(n)}$. The above hierarchy of conservation laws is characteristic for self-dual Yang-Mills equations (compare Brezin et al. 1979, Prasad et al. 1979, Chau 1983). The characteristic feature of integrable systems, besides the existence of an infinite number of conservation laws, is the existence of a Lax pair and of some geometric construction related to the system considered. We are going to deal with this problem.

Twistors for M. A twistor surface, or β-plane, or null string (Penrose 1976, Plebański and Hacyan 1975, Flaherty 1976, Ward and Wells 1990, Mason and Woodhouse 1996) is a 2-dimensional submanifold $S \subset M$ such that: • $S$ is totally null, i.e. $\forall p \in S$ and $\forall v \in T_p S$, $ds^2(v,v) = 0$; • the 2-form orthogonal to $S$ is anti-self-dual. This implies that it is also totally geodesic, i.e. $\forall p \in S$ and $\forall v \in T_p S$ the geodesic with tangent vector $v$ at $p$ lies on the surface $S$. For a heavenly space $(M, ds^2)$ we have $\partial_{\alpha}\partial_{\bar{\beta}} \ln g = 0$, i.e. $g = G(w, z)\,\bar{G}(\bar{w}, \bar{z})$. In appropriate coordinates $g = 1$, and this is the first heavenly equation (Plebański 1975). We work in an arbitrary coordinate system, which means that the determinant $g$ is a product of two functions. A more general result also holds, i.e. the projective twistor space exists iff $(M, ds^2)$ is a weak heaven (Penrose and Ward 1980). Both manifolds $M$ and $\mathcal{PT}$ are embedded in the so-called correspondence space $\mathcal{F} = M \times \mathbb{CP}^1$. The Lax pair and Penrose-Ward transform. In this paragraph we construct the formal bundle over the twistor space $\mathcal{PT}$ which is determined by a solution $\Theta$ of the master equation (ME). First we start with a Lax pair for ME. For each value of a spectral parameter belonging to $\mathbb{CP}^1$, consider a pair of operators near $\lambda = 0$ and, respectively, a pair $M_w$, $M_z$ near infinity. Then, for each $\lambda \in \mathbb{CP}^1 - \{\infty\}$, the commutator of the first pair vanishes iff $\Theta$ satisfies ME. Analogously, for $\zeta \in \mathbb{CP}^1 - \{0\}$, $[M_w, M_z] = 0$ iff $\Theta$ satisfies the master equation. If $\Theta$ is any solution of ME, then the Frobenius integrability conditions are satisfied and one can find a solution of the linear system (2.10), where $\Psi(\lambda) \equiv \Psi(t, \hbar; w, z, \bar{w}, \bar{z}, \lambda) \in \mathcal{A}$. In particular, this solution is analytic in $\lambda$ in some neighbourhood of $0 \in \mathbb{CP}^1$. We will construct such a solution from conserved charges. Let $\eta^{(k)}$, $k = 0, 1, 2, \dots$, denote the conserved charges defined by the recursion relations (2.7). For $\lambda \in \mathbb{CP}^1 - \{\infty\}$ we define $\Psi(\lambda) := \sum_{k=0}^{\infty} \lambda^k \eta^{(k)}(t, \hbar; w, z, \bar{w}, \bar{z})$. (2.11) The conserved charges can be chosen such that the radius of convergence is greater than zero. As all the $\eta^{(k)}$ satisfy (2.7), the above $\Psi(\lambda)$ satisfies (2.10). On the overlap of the domains, for $\lambda = \zeta \in \mathbb{CP}^1 - \{0, \infty\}$, $\tilde{\Psi}(\lambda) = \Psi(\lambda) * H$, where $H$ is a twistor function defined uniquely by $\Psi(\lambda)$ and $\tilde{\Psi}(\lambda)$.
As it takes values in the group $e^{\mathcal{Q}}$, and the twistor space may be covered by only those two neighbourhoods, the knowledge of this function is sufficient to recover a bundle over $\mathcal{PT}$ with $H$ as a transition function. In this way each solution of (ME) corresponds to one bundle over the space $\mathcal{PT}$. As all the $\phi^{(n)}$, $n = 0, 1, \dots$, satisfy LME (2.1), the series $\Phi(\lambda)$ built from them satisfies the system (2.14), which is a Lax pair for ME. Respectively, in the neighbourhood of infinity, $\zeta \in \mathbb{CP}^1 - \{0\}$, the spectral system takes the analogous form. Let $F(\lambda) := F(t, \hbar; w, z, \bar{w}, \bar{z}, \lambda)$ be defined by (2.17). Such an $F(\lambda)$ exists and is uniquely defined by $\Psi(\lambda)$ and $\Phi(\lambda)$. Moreover, as $\Psi(\lambda)$ and $\Phi(\lambda)$ fulfill (2.10) and (2.14) respectively, $F(\lambda)$ has to be constant along each twistor surface, i.e. it depends only on $(P_w, P_z, \lambda)$. The definition (2.17) describing $F(\lambda)$ establishes $\Psi(\lambda)$ as a dressing operator for the linear system (2.14). Algebra of hidden symmetries. Consider a superposition of solutions to the linearized master equation (2.1), written in terms of the above-defined dressing operator (Park 1990, Takasaki 1990), as in (2.18). The contour $\gamma$ in (2.18) is the boundary of a domain containing $\lambda = 0$, and it does not cross any singularity of the integrated functions. To find an algebra of hidden symmetries, consider the commutator of two such solutions. The following theorem holds (compare Takasaki 1990, Park 1992, Dunajski and Mason 2000): the hidden symmetries of ME constitute an algebra. The proof can be found in (Przanowski et al. 2001(b), Formański 2004).

3 Integrability of ME. The homogeneous Hilbert problem for formal power series. The Hilbert problem for formal power series can be defined in a form similar to the case of vector functions. The latter case can be found in the monographs (Muscheliszwili 1962, Pogorzelski 1966). We will show the existence theorem in the first case. Let $L$ be a smooth contour. Let $S^+$ denote the interior of $L$ and let $0 \in S^+$. By $S^-$ we denote the exterior of $L$, i.e. $S^- := \mathbb{CP}^1 - S^+ - L$. Let the formal power series $\Phi(t, \hbar; \lambda) = \sum_{m=0}^{\infty} \sum_{k=-m}^{\infty} t^m \hbar^k \Phi_{m,k}(\lambda)$ be such that all the functions $\Phi_{m,k}(\lambda)$ are sectionally holomorphic, which means that each $\Phi_{m,k}(\lambda)$ is holomorphic on $S^+$ and $S^-$. The formal power series is said to have a finite degree at infinity if for each function $\Phi_{m,k}(\lambda)$ there exists $c_{m,k} \in \mathbb{Z}$ such that $\lim_{|\lambda| \to \infty} |\Phi_{m,k}(\lambda)|\,|\lambda|^{-c_{m,k}} = 0$. In the case $c_{m,k} > 0$, in the neighbourhood of infinity we can write the expansion whose polynomial part is $\gamma_{m,k}(\lambda)$; for $c_{m,k} < 0$ we have $\gamma_{m,k}(\lambda) = 0$, and for $c_{m,k} = 0$ the $\gamma$'s are constant. The formal power series $\gamma(t, \hbar; \lambda) = \sum_{m=0}^{\infty} \sum_{k=-m}^{\infty} t^m \hbar^k \gamma_{m,k}(\lambda)$ will be called the principal part at infinity of the series $\Phi(t, \hbar; \lambda)$. It is said that the series $\Phi(t, \hbar; \tau)$, $\tau \in L$, satisfies on $L$ the Hölder condition $H(\alpha)$ if all its coefficient functions do. The homogeneous Hilbert problem can be formulated as follows. Suppose we are given an element $G(t, \hbar; \xi)$ of the group $e^{\mathcal{Q}}$, defined on $L$ and satisfying the Hölder condition on $L$. Find a sectionally holomorphic formal power series $\Phi(t, \hbar; \lambda)$ having finite degree at infinity, continuous on $L$ and satisfying the boundary condition $\Phi^+(t, \hbar; \xi) = \Phi^-(t, \hbar; \xi) * G(t, \hbar; \xi)$, $\xi \in L$. (3.1) Here $\Phi^+(t, \hbar; \xi)$ and $\Phi^-(t, \hbar; \xi)$ denote the limit values from $S^+$ and $S^-$, respectively. We will seek a solution of this problem in the class of formal power series satisfying the Hölder condition on $L$. In the case of finite-dimensional groups this problem is solved by the Birkhoff factorization theorem (Birkhoff 1913, Mason and Woodhouse 1996). Since $\Phi^+(t, \hbar; \xi)$ is the limit value of $\Phi(t, \hbar; \lambda)$ holomorphic in $S^+$, from the Cauchy theorem we find an integral representation. By the Plemelj formulae (Plemelj 1908), in the limit $\lambda \to \xi \in L$ one gets (3.2), where the integral above is taken in the sense of the principal value.
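The iterative scheme used below to solve such boundary integral equations is ordinary successive approximation; here is a toy numerical sketch of that fixed-point pattern (illustrative only, a generic contraction stands in for the actual Cauchy-kernel operator of the problem):

```python
import numpy as np

# Toy sketch: solve phi = f + K @ phi by successive approximations
# (Neumann series), the same iteration pattern used for the integral
# equations below. K is a placeholder contraction, NOT the Cauchy kernel.
rng = np.random.default_rng(0)
n = 64                                               # points on the contour
K = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # ||K|| < 1
f = rng.standard_normal(n)

phi = np.zeros(n)
for step in range(200):
    phi_next = f + K @ phi          # one step of the Neumann series
    if np.linalg.norm(phi_next - phi) < 1e-12:
        break
    phi = phi_next

# Check against a direct solve of (I - K) phi = f
direct = np.linalg.solve(np.eye(n) - K, f)
print(step, np.linalg.norm(phi - direct))  # converged, tiny residual
```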
Equation (3.2) is an integral equation for $\Phi^+(t, \hbar; \xi)$. To determine under what condition a solution of (3.4) defines a sectionally holomorphic solution to the homogeneous Hilbert problem, consider the auxiliary series $\Psi(t, \hbar; \lambda)$. This $\Psi(t, \hbar; \lambda)$ is sectionally holomorphic and vanishes at infinity. The Plemelj theorem gives the boundary condition (3.5) on $L$, which, as is seen from (3.1), accompanies the original problem. The problem of finding $\Psi(t, \hbar; \lambda)$ sectionally holomorphic, vanishing at infinity, and satisfying the boundary condition (3.5) on $L$ is called the accompanying problem of the problem (3.1). Analogously, this problem implies the integral equation (3.6) for the limit value $\Psi^+(t, \hbar; \xi)$. As is seen from the above, the solution of the integral equation (3.4) defines the solution of the original homogeneous Hilbert problem (3.1) iff the only solution of the accompanying problem is the trivial one, $\Psi(t, \hbar; \xi) \equiv 0$. Thus, in order to prove the existence of the solution of the problem (3.1), we need to prove that equation (3.6) has only the trivial solution and that there exists a solution of (3.4). To simplify the notation, first observe that the free element of $G(t, \hbar; \tau) * G^{-1}(t, \hbar; \xi) - 1$ vanishes. Inserting this into (3.6), the resulting equation can be solved iteratively, order by order in $t$, and the only solution of (3.6) is $\Psi^+(t, \hbar; \xi) = 0$. Consequently, each solution of (3.4) defines a solution of the Hilbert problem, and (3.4) itself can be solved iteratively. Note that the solution of (3.4) takes values in the group $e^{\mathcal{Q}}$ iff $\gamma(t, \hbar; \lambda) \in e^{\mathcal{Q}}$. Some remarks on the Riemann-Hilbert problem for the $*$-algebra can also be found in (Takasaki 1994, Strachan 1997).

Inverse Penrose-Ward transform. In this paragraph we will show the correspondence between holomorphic formal bundles over the twistor space $\mathcal{PT}$ and solutions of the master equation (1.16). Each holomorphic formal bundle over $\mathcal{PT}$ is characterized by a transition function $H(t, \hbar; P_w, P_z, \lambda) : V \cap \tilde{V} \to e^{\mathcal{Q}}$, i.e. $H = \sum_{m=0}^{\infty} \sum_{k=-m}^{\infty} t^m \hbar^k H_{m,k}(P_w, P_z, \lambda)$, with the $H_{m,k}(P_w, P_z, \lambda)$ being holomorphic. The pull-back of the series by the map $p : M \times \mathbb{CP}^1 \to \mathcal{PT}$ gives on the correspondence space $\mathcal{F} = M \times \mathbb{CP}^1$ the series $H = p^* H$, constant along each twistor surface. Briefly, we will write $H(t, \hbar; \lambda) \equiv H(t, \hbar; w, z, \bar{w}, \bar{z}, \lambda)$. The problem of factorizing this transition function as in (3.8), known as the Riemann-Hilbert problem, reduces to the previously discussed homogeneous Hilbert problem. Indeed, let $L$ be a smooth contour on $\mathbb{CP}^1$ (for example the equator). One can find $\Psi(t, \hbar; \lambda)$ holomorphic on $S^+$ (we use the same notation as in the previous paragraphs) and $\tilde{\Psi}(t, \hbar; \lambda)$ holomorphic on $S^-$, continuous on $S^+ \cup L$ and $S^- \cup L$, respectively. On $L$ they satisfy the condition $\Psi^+(t, \hbar; \xi) = \tilde{\Psi}^-(t, \hbar; \xi) * H(t, \hbar; \xi)$, $\xi \in L$. The series $\Psi$ and $\tilde{\Psi}$ can be analytically continued onto $S^-$ and $S^+$, respectively. In this way we obtain $\Psi(t, \hbar; \lambda)$ defined at each finite point of the complex plane and sectionally holomorphic on $S^+$, $S^-$, satisfying on $L$ the condition $\Psi^+(t, \hbar; \xi) = \Psi^-(t, \hbar; \xi)$. This means that such a $\Psi(t, \hbar; \lambda)$ is holomorphic on the whole complex plane, as desired. Analogously, $\tilde{\Psi}(t, \hbar; \lambda)$ is holomorphic on $\mathbb{C} - \{0\}$. From the definition (3.9) the factorization (3.8) holds. The LHS is holomorphic everywhere apart from $\lambda = \infty$, while the RHS is holomorphic everywhere apart from $\lambda = 0$, and at infinity it may have only a first-order pole. Thus, by the Liouville theorem, they are both linear with respect to $\lambda$.

Conclusions. In this work we have found evidence for the integrability of the $*$-SDYM equations.
This evidence follows from: • the existence of an infinite number of conservation laws; • the existence of a Lax pair; • the one-to-one correspondence between solutions of the $*$-SDYM equations and formal holomorphic bundles over $\mathcal{PT}$ with structure group $e^{\mathcal{Q}}$; • the existence of a solution to the Riemann-Hilbert problem, which gives rise to an algebraic method of generating solutions to (ME). In the second part of this paper some examples of reductions of $*$-SDYM to other integrable systems, such as the $SU(N)$ SDYM equations, $SU(N)$ chiral equations and the heavenly equations, will be given. We also find a sequence of $SU(N)$ chiral fields tending to the heavenly space when $N \to \infty$.
2019-04-12T09:06:59.140Z
2004-12-03T00:00:00.000
{ "year": 2004, "sha1": "c99fc7c1cb55c03256319c0d6bbf157eb693deb1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math-ph/0412011", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2ebc109c73451af1342348fc7321eb53641683e8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
232245040
pes2o/s2orc
v3-fos-license
Characterizing the immune response of chickens to Campylobacter jejuni (Strain A74C) Campylobacter is one of the major foodborne pathogens causing bacterial gastroenteritis worldwide. The immune response of broiler chickens to C. jejuni is under-researched. This study aimed to characterize the immune response of chickens to Campylobacter jejuni colonization. Birds were challenged orally with 0.5 mL of 2.4 × 10^8 CFU/mL of Campylobacter jejuni or with 0.5 mL of 0.85% saline. Campylobacter jejuni persisted in the ceca of challenged birds, with cecal colonization reaching 4.9 log10 CFU/g on 21 dpi. Campylobacter was disseminated to the spleen and liver on 7 dpi and was cleared from both internal organs on 21 dpi. Challenged birds had a significant increase in anti-Campylobacter serum IgY (14 and 21 dpi) and bile IgA (14 dpi). At 3 dpi, there was a significant suppression in T-lymphocytes derived from the cecal tonsils of birds in the challenge treatment when compared to the control treatment after 72 h of ex vivo stimulation with Con A or C. jejuni. The T-cell suppression on 3 dpi was accompanied by a significant decrease in LITAF, K60, CLAU-2, IL-1β, iNOS, and IL-6 mRNA levels in the ceca and an increase in nitric oxide production from adherent splenocytes of challenged birds. In addition, on 3 dpi, there was a significant increase in CD4+ and CD8+ T lymphocytes in the challenge treatment. On 14 dpi, both pro- and anti-inflammatory cytokines were upregulated in the spleen, and a significant increase in CD8+ T lymphocytes in the ceca of Campylobacter-challenged birds was observed. The persistence of C. jejuni in the ceca of challenged birds on 21 dpi was accompanied by an increase in IL-10 and LITAF mRNA levels, an increase in MNC proliferation when stimulated ex vivo with the diluted C. jejuni, an increase in serum specific IgY antibodies, an increase in both CD4+ and CD8+ cells, and a decrease in the CD4+:CD8+ cell ratio. The balanced Th1 and Th2 immune responses against C. jejuni might explain the bacterial colonization of the ceca and the absence of pathology in Campylobacter-challenged birds. Future studies on T lymphocyte subpopulations should elucidate their pivotal role in the persistence of Campylobacter in the ceca. Introduction Effect of Campylobacter jejuni challenge on Campylobacter load in the ceca, spleen, and liver On 7, 14, and 21 dpi, ceca, spleen, and liver were dissected, and a total of 108 samples (3 birds/cage × 6 cages/treatment × 2 treatments × 3 organs) were aseptically collected into stomacher bags, placed on ice, and transported to the laboratory. Samples were macerated with a rubber mallet, 3× (wt/vol) of 1× Bolton's broth supplemented with Bolton selective supplements (HiMedia Laboratories, West Chester, PA) was added, and bags were stomached for 60 s. A volume of 10 μl of the homogenates was directly plated or serially diluted in 7 tubes containing 90 μl of 0.85% saline, resulting in 10^-1 to 10^-8 dilutions, using the microdilution method as described by [12]. From every dilution, a volume of 10 μl was spotted in triplicate on Campy-Cefex agar. Plates were then incubated for 48 h in a microaerobic atmosphere at 42˚C. After incubation, colonies were counted and confirmed by microscopic observation for characteristic corkscrew morphology and motility on wet mount preparations. Colonies were further confirmed using SYBR Green qPCR and primers targeting the C. jejuni mapA gene (Table 2). Enumeration data were recorded as CFU/g and then transformed to log10 CFU/g for statistical analysis.
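The back-calculation behind this enumeration is simple arithmetic; here is a minimal sketch (not the authors' code; the 4-fold dilution factor from homogenizing 1 g of tissue in 3× wt/vol broth is our assumption):

```python
import math

SPOT_VOLUME_ML = 0.010  # 10 microliters spotted per replicate

def cfu_per_gram(colony_counts, dilution_exponent, homogenate_factor=4):
    """Back-calculate CFU/g from triplicate spot counts.

    dilution_exponent: n for a 10^-n dilution of the homogenate.
    homogenate_factor: 1 g of tissue in 3x (wt/vol) broth gives a
    4-fold dilution of the original sample (assumption for this sketch).
    """
    mean_count = sum(colony_counts) / len(colony_counts)
    cfu_per_ml = mean_count / SPOT_VOLUME_ML * 10 ** dilution_exponent
    return cfu_per_ml * homogenate_factor

counts = [32, 28, 35]   # colonies per 10-ul spot at the 10^-3 dilution
cfu = cfu_per_gram(counts, dilution_exponent=3)
print(f"{cfu:.3g} CFU/g = {math.log10(cfu):.2f} log10 CFU/g")
```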
Effect of Campylobacter jejuni challenge on the Campylobacter jejuni mapA gene in the ceca On 0, 7, 14, and 21 dpi, ceca were dissected from three birds per cage and pooled aseptically into one stomacher bag per cage (n = 6), placed on ice, and transported to the laboratory. Ceca samples were macerated with a rubber mallet, and 3× (wt/vol) buffered peptone water was added. Samples were then stomached for 60 s, and ceca homogenates were stored at -80˚C. Bacterial DNA from the ceca was isolated as described previously [17]. Campylobacter jejuni bacterial DNA in the ceca was quantified using SYBR Green qPCR with primers targeting the C. jejuni mapA gene (Table 2). Quantification data were reported as 40 - Ct and then statistically analyzed. Effect of Campylobacter jejuni challenge on serum IgY and bile IgA anti-Campylobacter antibodies On d0 of age, blood was collected and pooled from 3 broiler chicks to measure the specific maternally derived serum anti-Campylobacter IgY antibodies. On 0, 7, 14, and 21 dpi, blood and bile were collected and pooled from 3 birds per cage. Specific serum IgY and bile IgA antibodies directed against C. jejuni (Strain A74C) whole-cell (WC) antigens were determined by enzyme-linked immunosorbent assay (ELISA) as described previously by [18] with modifications. Briefly, WC antigen was prepared by lysing 1 × 10^9 CFU/ml of C. jejuni (Strain A74C) by seven cycles of bead-beating (TissueLyser LT, Qiagen, Germantown, MD) using acid-washed glass beads (Sigma-Aldrich, MO, USA) followed by freezing and thawing. ELISA plates (Nunc Maxisorp™, ThermoFisher Scientific, Waltham, MA) were coated with 2.5 μg/ml of WC diluted in coating buffer (carbonate/bicarbonate, pH 9.6). Serum was diluted 1:10 and bile was diluted 1:200 in PBS containing 2.5% non-fat dry milk and 0.1% Tween 20 (VWR, Radnor, PA). Horseradish peroxidase (HRP)-conjugated polyclonal goat anti-chicken IgG (Bethyl, Montgomery, TX) and HRP-conjugated polyclonal goat anti-chicken IgA (SouthernBiotech, Birmingham, AL) were used at 1:10,000 as secondary antibodies. The absorbance was measured at 450 nm using an Epoch microplate spectrophotometer (BioTek, VT, USA), and antibody levels were reported as OD450 values. Effect of Campylobacter jejuni challenge on cecal tonsil CD4+ and CD8+ T lymphocytes and the CD4+:CD8+ cell ratio On 0, 1, 3, 14, and 21 dpi, cecal tonsils from three birds per cage were aseptically pooled into a 5 mL tube containing 3 mL of RPMI, placed on ice, and transported to the laboratory. Flow cytometry analysis for CD4+ and CD8+ cells was performed as described by [19]. Single-cell suspensions of the cecal tonsils (1 × 10^6 cells) were incubated with PE-conjugated mouse anti-chicken CD4 and FITC-conjugated mouse anti-chicken CD8 (Southern Biotech, Birmingham, AL) at 1:200 dilution, and unlabeled mouse IgG at 1:500 dilution, in a 96-well plate for 20 minutes. After incubation, cells were washed twice by centrifugation at 400 × g for 5 minutes using wash buffer (1× PBS, 2 mM EDTA, 1.5% FBS) to remove unbound primary antibodies. After washing, cells were analyzed using CytoSoft software (Guava Easycyte, Millipore, Billerica, MA). CD4+ and CD8+ cells were reported as the percentage of gated cells, and the CD4+:CD8+ ratio was calculated.
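Returning to the qPCR readout described above, the 40 - Ct scale is a simple transformation worth illustrating (a sketch, not the authors' analysis script; the 40-cycle cap and the handling of non-amplifying wells are our assumptions):

```python
MAX_CYCLES = 40  # assumed qPCR run length; Ct is capped at this value

def forty_minus_ct(ct_values):
    """Convert raw Ct values to the 40 - Ct scale used for analysis.
    None (no amplification) is treated as Ct = 40, i.e. a score of 0."""
    return [MAX_CYCLES - (ct if ct is not None else MAX_CYCLES)
            for ct in ct_values]

# Example: challenged vs control cecal samples (hypothetical Ct values)
challenge_ct = [22.4, 24.1, 23.0, None, 25.5, 22.9]
control_ct = [None, 38.7, None, 39.2, None, None]
print(forty_minus_ct(challenge_ct))  # large scores: abundant mapA target
print(forty_minus_ct(control_ct))    # scores near 0: little or no target
```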
Effect of Campylobacter jejuni challenge on immune gene expression On 0, 1, 3, 7, 14, and 21 dpi, cecal tonsil and spleen samples were dissected, and 3 organs per cage were pooled in 5 mL tubes filled with 3 ml of RNAlater (Qiagen, Germantown, MD). Samples were stored for seven days at 4˚C until RNAlater permeated the tissue and stabilized the RNA. Excess RNAlater was removed from the tubes, and samples were stored at -80˚C until analyzed. Total RNA was extracted from cecal tonsils and reverse transcribed into cDNA [20]. The mRNA was analyzed for the TGF-β4, IL-10, IL-1β, LITAF, TLR-4, iNOS, K60, IL-4, IL-6, CLAU-2, and ZO-1 genes by real-time PCR (CFX96 Touch Real Time System, BioRad) using SYBR Green after normalizing for GAPDH mRNA (Table 2). The fold change from the reference was calculated using the comparative Ct method as 2^(Ct sample - Ct housekeeping) / 2^(Ct reference - Ct housekeeping), where Ct is the threshold cycle [21]. The Ct was determined by the iQ5 software (BioRad) when the fluorescence rises exponentially 2-fold above the background. Effect of Campylobacter jejuni challenge on nitric oxide production from adherent splenocyte MNCs On 1, 3, 7, 14, and 21 dpi, spleens were dissected from three birds per cage, and half-spleens were aseptically pooled into a 5 mL tube containing 3 mL of RPMI, placed on ice, and transported to the laboratory. Spleens were strained using a 45 μm cell strainer (Fisher Scientific) to obtain a single-cell suspension. A volume of 3 mL of single-cell suspension was enriched for MNCs by density centrifugation over 3 mL of Histopaque (1.077 g/mL, Sigma-Aldrich, St. Louis, MO) for 10 min at 1,200 × g at 10˚C with the brake off, as described by [35] with modifications. The splenocyte MNCs were washed and resuspended in 8 mL of complete RPMI-1640 medium (media supplemented with 4% FBS, 2% chicken serum, and 1% penicillin plus streptomycin) in T75 cell culture flasks and incubated in a 5% CO2 incubator at 42˚C. After 24 h of incubation, non-adherent cells were washed off with PBS, and adherent cells were removed by trypsinization (5 ml of 0.4% trypsin supplemented with 0.025% EDTA). Adherent cells were then washed in 20 mL of complete media and resuspended in 1 mL of complete RPMI for counting. Splenocyte MNCs were reseeded in triplicate in 96-well plates (100 μL/well of 5 × 10^5 cells/mL). Cells were stimulated by adding a volume of 100 μL of complete RPMI media supplemented with 10 μg/mL of Salmonella Enteritidis LPS (Sigma Chemicals, MO, USA) or 20 μg/mL of lysed C. jejuni (Strain A74C). Campylobacter jejuni (Strain A74C) was lysed by seven cycles of bead-beating followed by freezing and thawing, as described above. The 96-well plates were then incubated for 48 h. After incubation, plates were centrifuged at 400 × g for 10 min, and the supernatant was removed. Nitrite levels in the supernatant were determined using the Griess method [36]. A volume of 100 μL of sulfanilamide/N-(1-naphthyl)ethylenediamine dihydrochloride solution (#R2233500, Ricca Chemical Company, Arlington, TX) was added to 100 μl of the supernatant. After 5 min of incubation in the dark at room temperature, absorbance was measured at 540 nm using an Epoch microplate spectrophotometer (BioTek, VT, USA), and OD540 values were recorded. The nitrite concentration in the samples was determined using the equation derived from the standard curve of serially diluted sodium nitrite vs. OD540 values.
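The final standard-curve step is a linear fit and inversion; a minimal sketch with hypothetical OD540 readings (not the study's data):

```python
import numpy as np

# Sodium nitrite standards (uM) and their OD540 readings (hypothetical)
std_conc = np.array([0, 3.125, 6.25, 12.5, 25, 50, 100])
std_od = np.array([0.05, 0.08, 0.12, 0.20, 0.36, 0.68, 1.32])

# Fit OD = slope * conc + intercept, then invert for unknown samples
slope, intercept = np.polyfit(std_conc, std_od, deg=1)

def nitrite_um(od540):
    """Interpolate nitrite concentration (uM) from an OD540 reading."""
    return (od540 - intercept) / slope

sample_od = np.array([0.95, 0.11, 0.42])   # stimulated vs unstimulated wells
print(np.round(nitrite_um(sample_od), 1))  # nitrite concentrations in uM
```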
Effect of Campylobacter jejuni on the proliferation of cecal tonsil MNCs On 1 and 3 dpi, cecal tonsils from three birds per cage were aseptically pooled into a 5 mL tube containing 3 mL of RPMI, placed on ice, and transported to the laboratory. Cecal tonsil mononuclear cells (MNCs) were collected by straining the cecal tonsils into a single-cell suspension using a 45 μm cell strainer (Fisher Scientific). The MNCs were then washed by centrifugation and resuspended in complete RPMI to a cell density of 5 × 10^5 cells/mL. A volume of 100 μL of the cell suspension was added in triplicate to 96-well plates. Cecal tonsil mononuclear cells were then stimulated with complete RPMI media supplemented with Con A (15 μg/mL) or 20 μg/mL of lysed C. jejuni (Strain A74C). The 96-well culture plates were then incubated for 72 h in a 5% CO2 incubator at 42˚C. At 72 h of incubation, the proliferation of MNCs was measured using the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) colorimetric assay as previously described [37], and the optical density values obtained at 570 nm (BioTek Epoch spectrophotometer; GEN5 3.03 software) were statistically analyzed. Ex vivo stimulation of cecal tonsil MNCs with lysed C. jejuni (Strain A74C) or Salmonella Enteritidis OMPs To investigate the antigen-specific response of cecal tonsil mononuclear cells, on 7, 14, and 21 dpi, MNCs were seeded in 96-well plates in triplicate at the same density as described above (100 μL/well of 5 × 10^5 cells/mL). Cells were then stimulated with 100 μL of Con A (15 μg/mL), 20 μg/mL or 2 μg/mL of lysed C. jejuni (Strain A74C), or 20 μg/mL or 2 μg/mL of Salmonella Enteritidis OMPs. Salmonella Enteritidis OMPs were extracted as described previously by [38]. The 96-well plates were then incubated for 72 h, and the proliferation of MNCs was measured using the MTT assay as described above. Statistical analysis Student's t-test was used to determine the effect of the Campylobacter jejuni challenge on the dependent variables. The significance level was set at P < 0.05. The cage was used as the experimental unit (n = 6).
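Since the cage is the experimental unit (n = 6 per treatment), each comparison in the results below amounts to a two-sample Student's t-test on cage-level values; a minimal sketch with made-up numbers (not the study's data):

```python
from scipy import stats

# Cage-level means (n = 6 cages per treatment), hypothetical log10 CFU/g
control = [0.0, 0.0, 0.3, 0.0, 0.1, 0.0]
challenge = [4.8, 5.1, 4.6, 5.0, 4.9, 4.7]

# Two-sample Student's t-test (equal variances), alpha = 0.05
t_stat, p_value = stats.ttest_ind(challenge, control, equal_var=True)
print(f"t = {t_stat:.2f}, P = {p_value:.2g}",
      "-> significant" if p_value < 0.05 else "-> not significant")
```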
Effect of Campylobacter jejuni challenge on performance parameters There were no significant differences in FCR, BWG, or FI between the control and challenge treatments (Table 3). Effect of Campylobacter jejuni challenge on Campylobacter load in the ceca, spleen, and liver At 7 dpi, the C. jejuni challenge treatment had a significantly higher Campylobacter load, by 3.9, 2.5, and 0.9 log10 CFU/g in the ceca, spleen, and liver, respectively, when compared to the control treatment (P < 0.01) (Fig 1a-1c). There was a significant increase of 1.5 log10 CFU/g in the Campylobacter load in the spleen of the challenge treatment when compared to the control treatment on 14 dpi (P < 0.01) (Fig 1b). In general, the bacterial load in the ceca of the C. jejuni challenge treatment reached around 4.9 log10 CFU/g on 21 dpi (Fig 1a). However, the Campylobacter load in the spleen and liver decreased with age in the C. jejuni challenge treatment until it reached non-detectable (ND) levels on 21 dpi (Fig 1b & 1c). Effect of Campylobacter jejuni challenge on the Campylobacter jejuni mapA gene in the ceca At d14 (0 dpi), there were no significant differences in the C. jejuni mapA gene levels. The mapA gene levels were significantly higher in the C. jejuni challenge treatment on 7, 14, and 21 dpi when compared to the control treatment (P < 0.05) (Fig 1d). Effect of Campylobacter jejuni challenge on serum IgY and bile IgA anti-Campylobacter antibodies Day-old birds had high maternally derived IgY antibodies (OD450 = 0.97) against C. jejuni (Fig 2a). Both treatments started with similar IgY and IgA levels at 0 dpi (Fig 2). The challenge treatment had a significant increase in IgY antibody levels on 14 and 21 dpi when compared to the control treatment (P < 0.05) (Fig 2a). At 7 dpi, the challenge treatment had a significant increase in specific IgA levels when compared to the control treatment (P < 0.01) (Fig 2b). There was a significant increase in CD4+ and CD8+ T lymphocytes in the challenge treatment on 3 dpi (P < 0.05) and 21 dpi (P < 0.01) (Fig 3a). At 14 dpi, there was a significant increase in CD8+ T lymphocytes in the challenge treatment (P < 0.05). There was a significant decrease in the CD4+:CD8+ cell ratio (1.03 vs. 0.69) on 21 dpi (P < 0.05) in the challenge treatment when compared to the control treatment. A similar decreasing trend in the CD4+:CD8+ cell ratio (0.83 vs. 0.77) was observed in the challenge treatment on 14 dpi (P = 0.1) when compared to the control treatment (Fig 3b). Effect of Campylobacter jejuni challenge on immune gene expression At 0 and 1 dpi, both treatments had similar ceca and spleen transcript levels of the analyzed genes, except on 1 dpi, when there was a significant decrease in K60 mRNA levels in the spleen of the challenge treatment (Fig 4). At 3 dpi, there was a significant decrease in the mRNA levels of LITAF, K60, and CLAU-2 (P < 0.01) and of IL-1β, iNOS, and IL-6 (P < 0.05) in the ceca. On the same day, a similar trend of a decrease in the mRNA levels of TLR-4 (P = 0.06) and ZO-1 (P = 0.09) was observed in the ceca (Fig 5a). At 7 dpi, the ceca of the challenged birds had a significant increase in LITAF and IL-4 mRNA levels (P < 0.05), and a similar trend for iNOS mRNA levels (P = 0.07) was observed (Fig 5c). At 14 dpi, the challenged birds had a significant increase in TGF-β, IL-10, IL-1β, LITAF, TLR-4, iNOS, IL-4, and K60 mRNA levels in the spleen (P < 0.05) (Fig 6b). On the same day, there was a significant increase in TLR-4 mRNA levels (P < 0.05) and a similar increase in IL-4 mRNA levels (P = 0.06) in the ceca (Fig 6a). At 21 dpi, the challenged birds had a significant 2.7-fold increase in IL-10 mRNA levels and a 2.4-fold increase in LITAF mRNA levels in the ceca when compared to the control treatment (P < 0.05) (Fig 6c). Effect of Campylobacter jejuni challenge on nitric oxide production from adherent splenocyte MNCs At 3 dpi, the challenged birds had significantly higher nitrite concentrations after LPS (54.6 vs. 4.6 μM) and C. jejuni (72 vs. 4 μM) ex vivo stimulation when compared to the control treatment (P < 0.05) (Fig 7). There were no significant differences in nitric oxide production levels between the control and challenge treatments at the other time points (1, 7, 14, and 21 dpi) (Fig 7). Effect of Campylobacter jejuni on the proliferation of cecal tonsil MNCs At 3 dpi, there was a significant suppression in T lymphocytes derived from the cecal tonsils of challenged birds when compared to the control birds after 72 h of ex vivo stimulation with Con A or C. jejuni (P < 0.05) (Fig 8a). Ex vivo stimulation of cecal tonsil MNCs with lysed C. jejuni (Strain A74C) or Salmonella Enteritidis OMPs At 7 dpi, there was a significant suppression in the proliferation of MNCs derived from the cecal tonsils of birds in the challenge treatment when compared to the control treatment after 72 h of ex vivo stimulation with diluted C. jejuni (P < 0.05) (Fig 8b). At 14 dpi, there were no significant differences between the two treatments irrespective of the ex vivo stimulant (Fig 8c).
At 21 dpi, there was a significant increase in the proliferation of MNCs derived from the cecal tonsils of birds in the challenge treatment when compared to the control treatment after 72 h of ex vivo stimulation with 1:10 diluted C. jejuni (P < 0.05), but not in Con A-, C. jejuni-, SE OMP-, or diluted SE OMP-stimulated MNCs (Fig 8d). Discussion This study characterized the immune response of broilers to Campylobacter jejuni (Strain A74C). Campylobacter jejuni is generally considered a commensal in poultry, including broilers, and many researchers have reported the bacterial isolation from a wide range of domestic poultry [10,39]. The commensal nature of Campylobacter in broilers observed in our study is supported by previous studies, which showed a high bacterial load in the ceca (9 log10 CFU/g), an absence of pathology in Campylobacter-positive birds, and similar performance parameters in colonized and control birds [5,40]. There were no significant differences in performance parameters in this study, namely FCR, BWG, and FI, between the challenge and the control treatments. Our results agree with most research studies conducted to study the effect of Campylobacter challenge in chickens, which showed that Campylobacter does not alter performance parameters and is asymptomatic in birds. However, other researchers have shown that Campylobacter is capable of reducing body weight and affecting performance parameters by disrupting nutrient absorption and competing for amino acids in the gut [7,41,42]. The different results in performance parameters seen between our study and other studies might be due to differences in the C. jejuni isolates selected, the chicken breeds used, immune status, and challenge models. In the present study, Campylobacter jejuni (Strain A74C) colonized the ceca of challenged birds on 7 dpi (P < 0.05) and persisted in the ceca until 21 dpi (d35 of age). In addition to direct plating, Campylobacter jejuni was quantified by qPCR targeting the mapA gene transcript levels. On d14 (0 dpi), there was no significant difference in mapA gene levels between the two treatments. However, post-challenge, mapA gene detection in the ceca followed the direct plating trend, seen as an increase in bacterial load in challenged birds over time. Campylobacter jejuni was disseminated from the ceca and was detected in the spleen and liver of challenged birds with a significant Campylobacter load compared to the control birds (spleen 7 dpi and 14 dpi, P < 0.05; liver 7 dpi, P < 0.05). However, challenged birds cleared the Campylobacter load in the spleen and liver on d35 (21 dpi). Interestingly, the clearance of Campylobacter from the spleen and liver was accompanied by an increase in specific serum anti-Campylobacter IgY antibodies observed in challenged birds when compared to the control birds. Most commercial broiler flocks remain Campylobacter-negative until 2 wk of age. This early protection has been attributed to maternally derived antibodies [8,43]. In our study, we observed high levels of specific maternally derived serum IgY antibodies against Campylobacter on d0 of age, supporting the idea that the early protection reported by other researchers may be due to the presence of maternally derived IgY antibodies. Bile serves as a reservoir for IgA antibodies in chickens. IgA might be the most important immunoglobulin involved in mucosal immunity due to the complex it forms with the secretory component acquired from the surface of epithelial cells.
This secretory component protects IgA from digestion in the gut [44]. At 7 dpi, there was a significant increase in bile specific IgA antibodies against Campylobacter. However, this increase in IgA levels did not affect the Campylobacter colonization in the ceca (4.2 log10 CFU/g). To study the effect of B cells on Campylobacter colonization, scientists have shown that Campylobacter was able to colonize the ceca of bursectomised birds, concluding that humoral immunity has a limited impact on Campylobacter colonization in the ceca of commercial broilers [45]. Researchers have demonstrated that Campylobacter is able to disseminate to internal organs, including the spleen, liver/gallbladder, thymus, and bursa of Fabricius, after challenge via the oral route [9]. Interestingly, prior to the bacterial dissemination to internal organs observed at 7 dpi, Campylobacter-challenged birds were immunosuppressed at 3 dpi. This immunosuppression was seen as a suppression in cecal tonsil MNCs stimulated with either Con A (indicating that it is a T cell suppression (3 dpi)), lysed C. jejuni (3 dpi), or lysed and diluted C. jejuni (7 dpi). In addition to the suppression of cecal tonsil MNCs' proliferation, a significant decrease in gene expression at 3 dpi was observed for both immune genes (LITAF, K60, IL-1β, iNOS, IL-6, and TLR-4) and tight junction proteins (CLAU-2 and ZO-1). [Figure caption (Fig 7): Effect of Campylobacter jejuni challenge on nitric oxide production from adherent splenocyte MNCs stimulated ex vivo. On d14 of age, birds were weight-matched and randomly assigned to two treatments: control and challenge. Birds were orally challenged with 0.5 mL of 2.4 × 10^8 CFU/mL of Campylobacter jejuni or mock-challenged with 0.5 mL of 0.85% saline. Adherent splenocyte MNCs (5 × 10^4 cells) were stimulated with 10 μg/mL of LPS or 20 μg/mL of lysed C. jejuni (CJ). After 48 h of stimulation, NO production was measured using the Griess assay. Results are expressed as mean + SEM nitrite concentration (μM). Non-detectable (ND). *P < 0.05; **P < 0.01 compared with control (n = 6), Student's t-test.] Other researchers have compared the immune response of broilers to Campylobacter versus Salmonella challenge and have shown that, unlike Salmonella, C. jejuni challenge significantly downregulated antimicrobial peptide gene expression 6 h post infection [46]. The immune suppression of MNCs, cytokines, and tight junction proteins seen in our study might explain the high bacterial colonization in the ceca and the bacterial dissemination from the gut to internal organs in broilers. At 14 dpi, there was a significant upregulation in both pro- and anti-inflammatory cytokines in the spleen. This upregulation in both pro- and anti-inflammatory cytokines suggests that a balanced Th1 and Th2 immune response might explain the absence of pathology in Campylobacter-challenged birds. Nitric oxide production is critical for vasodilation, neurotransmission, and the host pro-inflammatory immune response against pathogens and tumors. Nitric oxide production is mediated by nitric oxide synthase (NOS). In chickens, nitric oxide is mainly secreted by macrophages and monocytes and is induced by intracellular pathogens, some tumors, LPS, and IFN-γ [47]. In the present study, mononuclear cells derived from the spleen of Campylobacter jejuni-challenged birds at 3 dpi had a significant increase in nitrite concentrations, when compared to MNCs from the control birds, after 48 h of ex vivo stimulation with Salmonella Enteritidis LPS or the homologous lysed C. jejuni.
In addition, an upregulation in pro-inflammatory genes such as LITAF, IL-1β, and iNOS in either the spleen, the ceca, or both was observed in the C. jejuni-challenged birds. These data present further evidence of the bacterial dissemination to internal organs seen in our study and in other studies, and are in agreement with studies that have shown that Campylobacter is not purely a commensal but that chickens mount a pro-inflammatory immune response post-infection [6,11,46]. In vitro and in vivo experiments showed that Campylobacter is usually recognized by TLR4 and TLR21 [48,49]. In our study, we looked at TLR-4 and observed that C. jejuni challenge decreased the mRNA levels of TLR-4 in the ceca at 3 dpi and upregulated TLR-4 in both the ceca and spleen at 14 dpi. In many species, including chickens, mice, and humans, the CD4+:CD8+ cell ratio is an indicator of immune competence [47,50,51]. In general, a CD4+:CD8+ cell ratio that is higher than 1, reflecting a higher percentage of CD4+ cells relative to CD8+ cells, is observed in healthy individuals. Age, dietary treatment, breed, and disease status are factors reported to change the CD4+:CD8+ cell ratio in chickens [52][53][54][55]. In our study, we observed that C. jejuni challenge significantly increased CD4+ and CD8+ T lymphocytes on 3 and 21 dpi and CD8+ cells on 14 dpi. The CD4+:CD8+ cell ratio was decreased numerically (0.83 vs. 0.77) at 14 dpi (P = 0.1) and significantly (1.03 vs. 0.69) at 21 dpi (P < 0.05) when comparing C. jejuni-challenged birds to the control. This indicates that the increase in CD8+ cells was of greater magnitude when compared to the increase in CD4+ cells. At 21 dpi, cecal tonsil MNCs derived from the challenge treatment had a significantly higher antigen-specific proliferation when compared to MNCs from the control treatment after ex vivo stimulation with the diluted lysed C. jejuni. These data support our flow cytometry data, which showed a significant increase in both CD4+ and CD8+ T lymphocytes at 21 dpi in C. jejuni-challenged birds. In addition, at 21 dpi, a significant upregulation in LITAF and IL-10 was observed. Our observation of IL-10 upregulation in C. jejuni-challenged birds agrees with other researchers who showed that Campylobacter persistence and the absence of pathology in challenged birds were associated with an upregulation in IL-10 in challenged birds [6]. Many researchers speculate that regulatory T cells play a major role in orchestrating the commensal nature of Campylobacter in chickens [6,56]. Avian regulatory cells suppress other immune cells through both a contact-dependent mechanism and a contact-independent mechanism (mediated by the production of IL-10 and TGF-β) [57]. Other researchers attributed the commensal nature of Campylobacter to its ability to induce a Th17 response that restricts the bacterial colonization to the intestine [58]. Other studies focused on understanding the colonization factors of C. jejuni strains and concluded that these factors could be responsible for the commensal nature of Campylobacter in chickens and its infectious nature in humans [59,60]. In this study we show that pro-inflammatory, anti-inflammatory, and regulatory cytokines were expressed after Campylobacter jejuni challenge, in agreement with findings by other researchers [58]. In conclusion, our study shows that Campylobacter jejuni (Strain A74C) colonized the ceca of broilers at a high level and disseminated to the spleen and liver.
Over time, Campylobacter jejuni persisted in the ceca but was cleared from the spleen and liver, accompanied by a significant increase in specific serum IgY antibodies. Our study demonstrated that Campylobacter jejuni can induce an inflammatory response in chickens. The Th1 immune response was evident in the production of NO from the adherent splenocytes, the upregulation of different pro-inflammatory cytokines at different time points in the spleen and cecal tonsils, and the increase in CD8+ cells. However, the upregulation in anti-inflammatory and regulatory cytokines, the suppression of T lymphocyte proliferation at 3 dpi, the increase in CD4+ cells, and the increase in specific serum IgY levels suggest that C. jejuni also induced a Th2 immune response. Therefore, the balance between the Th1 and Th2 immune responses against C. jejuni might explain the bacterial persistence in the ceca and the absence of pathology in Campylobacter-challenged birds. Future studies directed at the T lymphocyte subpopulations are needed to elucidate the pivotal role that T lymphocytes play in the persistence of Campylobacter in the ceca and will be valuable for developing successful interventions.
2021-03-17T06:17:25.909Z
2021-03-15T00:00:00.000
{ "year": 2021, "sha1": "9b894313e6d99e1a6ec6512a069aecb14a0dc465", "oa_license": "CC0", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0247080&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "489ef7b27a37a01c60da7c63522cfd81956ed8d5", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
237581562
pes2o/s2orc
v3-fos-license
General One-loop Reduction in Generalized Feynman Parametrization Form Recently, an alternative reduction method was proposed by Chen in [1,2]. In this paper, using one-loop scalar integrals with propagators having higher powers, we show the power of the improved version of Chen's new method, in which we use some tricks to cancel the dimension shift and the terms we do not want. We present explicit examples of the bubble, triangle, box and pentagon with one propagator doubled. With these results, we have completed our previous computations in \cite{wang} with the missing tadpole coefficients. Introduction To give more precise theoretical predictions for the scattering amplitude of a given process, the calculation of higher-loop integrals becomes important. For these calculations, the PV-reduction method [4] is one of the most used ideas. One way to implement the reduction method is to use the Integration-by-Parts (IBP) relations [5,6,7]. As one of the most powerful techniques for the reduction of loop integrals, IBP gives a large number of recurrence relations, and one can get the reduction to simpler integrals directly by Gaussian elimination. However, as the number and powers of the propagators become higher and higher, the IBP method becomes hard and inefficient. Finding more efficient reduction methods therefore becomes an important direction. The unitarity cut method is an alternative reduction method and has proved to be very useful for one-loop integrals [8,9,10,11,12,13,14,15,16,17,18,19]. For a physical one-loop process, the power of each propagator is just one, but a complete method should be able to give the reduction of integrals with higher powers of propagators. Such a situation is not just a theoretical curiosity: in fact, it appears in higher-loop diagrams as a sub-diagram. Furthermore, although for one-loop integrals the scalar basis is natural, in general the choice of basis can be different, depending on the physical input. For example, for the one-loop bubble, the basis with one propagator having power two could be useful as part of a UT basis [20,21]. In our previous work [3], by combining the trick of differential operators and the unitarity cut, we successfully obtained the analytical reduction of one-loop integrals with higher-power propagators and gave the coefficients of all the basis elements except the tadpole. Since the tadpole has only one propagator, the unitarity method could not be used to get the tadpole part. To complete our investigation, we want to find the missing tadpole coefficients by some efficient method. Besides the unitarity cut method, there are other proposals from recent years to overcome the difficulty in IBP by using some tricks and other representations of the integrals, such as the Baikov representation [22,23] and the Feynman parametrization representation [24,25] for loop integrals. In recent years, Chen has proposed a new representation for loop integrals [1,2]. His method is based on the generalized Feynman parametrization representation, i.e., an extra parameter x_{n+1} has been introduced to combine the U, F in the standard Feynman parametrization representation. Such a generalization brings some benefits in deriving the IBP recurrence relation, as will be shown in this paper. As a common feature, the IBP recurrence relation derived using the generalized Feynman parametrization representation will naturally have terms in different spacetime dimensions.
Since we are always concerned with the reduction in a given dimension D, usually set to 4 - 2ǫ for reasons of renormalization, we want to cancel these terms in different dimensions. This is usually not easy. In [26], Gluza, Kajda and Kosower have shown how to avoid the change of powers of propagators in the standard momentum space. Larsen and Zhang have considered the Baikov representation and showed how to eliminate both the dimension shifting and the change of powers of propagators [27,28,29,30,31,32]. These methods require the solution of syzygy equations, which is not easy to find in general. In Chen's second paper [2], he proposed a new technique for simplifying the recurrence relation based on non-commutative algebra. Motivated by the above discussion, and to prepare Chen's method for higher-loop computations, in this paper we will use Chen's method to find the tadpole coefficients missing from our previous work. Furthermore, we will use the idea of removing the terms with dimensional shifting in the derived IBP relation to give a simpler reduction method, with the analytic results written in terms of the elements of the coefficient matrix $\hat{A}$. The plan of the paper is as follows. In section 2 we review Chen's new method and illustrate it with a simple example in section 2.1. In the example, integrals in different dimensions naturally emerge. We discuss the physical meaning of the boundary terms, which contribute to the sub-topologies. To cancel the dimension shift in the parametrization form and simplify the IBP relation, in section 2.2 we propose a new trick of adding free auxiliary parameters, based on the fact that the F in the integrand is a homogeneous function of the x_i of degree L + 1. By our trick, we successfully cancel the dimension shift and, to a certain extent, drop the terms we are not concerned with, giving a simplified IBP relation in which all the integrals live in the given dimension D and all integrals except the target have a lower total power of propagators. We give our analytic result in terms of determinants of cofactors of the matrix $\hat{A}$, which is completely determined by the graph. In section 3, using our trick, we calculate the triangle $I_3(1,1,2)$, box $I_4(1,1,1,2)$, and pentagon $I_5(1,1,1,1,2)$ in the parametric form proposed by Chen, and give the analytic result for all the coefficients of the master basis, especially the tadpole parts, as a complement to our previous work. Reduction method in parametric form by Chen In this section, we will introduce the new reduction method proposed by Chen in [1]. The general form of the loop integral is given in terms of the loop momenta and external momenta, where for simplicity we have denoted $l = (l_1, l_2, l_3, \cdots, l_L)$ and $k = (k_1, k_2, k_3, \cdots, k_n)$. Since in this paper we consider only scalar integrals with $N(l) = 1$, let us label $I(L; \lambda_1 + 1, \cdots, \lambda_n + 1) = \int d^D l_1 \cdots d^D l_L\, \frac{1}{D_1^{\lambda_1+1} \cdots D_n^{\lambda_n+1}}$. By the procedure of Feynman parametrization, the loop integral can be rewritten in terms of the functions $U(\alpha)$ and $V(\alpha)$, where $U(\alpha)$ is a homogeneous function of the $\alpha_i$ of degree $L$, while $V(\alpha)$ is a homogeneous function of the $\alpha_i$ of degree $L + 1$¹, and the loop integral becomes the standard Feynman-parametrized form. (¹ The relation has been verified in many places based on the method in graph theory.) To derive the parametric form suggested by Chen, we do the following. Using the α-representation of the general propagators, where the "iǫ" has been neglected, we get eq. (2.5). To go further, we change the integration variables as $\alpha_i = \eta x_i$. Since there are in total $n$ independent variables, we must impose one more constraint condition.
In general, we can let $\sum_{i \in S} x_i = 1$, where $S$ is an arbitrary non-trivial subset of $\{1, 2, 3, \cdots, n\}$. After carrying out the integration over η, the second line of eq. (2.5) becomes (2.9). Finally, by a Mellin transformation² we can rewrite (2.9) in a form where $U$ and $F$ are combined using the auxiliary parameter $x_{n+1}$. Putting everything together, we finally get the parametric form (2.14) of the scalar loop integrals (2.5). The IBP identity in parametric representation The parametric form (2.14) is the starting point of Chen's proposal. The IBP relations in this form are given by³ ⁴ (2.15), where $i = 1, \dots, n+1$ and $d\Pi^{(n)}$ in the second term is the corresponding measure. The second term in (2.15) contributes a boundary term, which leads to sub-topologies of the former term. To illustrate the IBP relation (2.15), we present the reduction of $I_2(1,2)$ as an example. The general form of the one-loop bubble integrals is given by (2.17). (³ In some sense, the parametric form can be considered as the generalized Feynman parametrization form. Thus the IBP relation (2.15) could be called the IBP relation in the generalized Feynman parametrization form. ⁴ The IBP relation requires the term in the bracket of the first term to be of degree $(-n)$, which can be obtained by multiplying by any monomial of degree one. Here in (2.15) we have multiplied by $x_{n+1}$, based on our experience from later examples, but one can make other choices.) The corresponding parametric form is (in this article we ignore the overall factor $\pi^{LD/2}$) given by (2.18)-(2.20), with $\lambda_0 = -\frac{D}{2}$ and $\lambda_3 = -3 - m - n - 2\lambda_0$. Using eq. (2.15), we can get three IBP recurrence relations. Taking $\frac{\partial}{\partial x_1}$ first, the first term in (2.15) gives (2.22). Here we need to explain the notation $i_{\lambda_0;-1,n}$: from the middle expression of (2.22), we see that it is the parametric form of the tadpole $\int d^D l\, (l^2 - m_2^2)^{-(n+1)}$, i.e. the bubble with the first propagator removed. The full relation reads $\lambda_0\, i_{\lambda_0-1;m,n} + 2 m_1^2 \lambda_0\, i_{\lambda_0-1;m+1,n} + \Delta\lambda_0\, i_{\lambda_0-1;m,n+1} + \delta_{m,0}\, i_{\lambda_0;-1,n} = 0$. (2.23) When we set $m = n = 0$ in (2.23), it reads (2.24). Similarly, we can take the derivative $\frac{\partial}{\partial x_2}$ and get the second IBP relation (2.25). Naively, we should solve for $i_{\lambda_0;0,1}$ in terms of $i_{\lambda_0;0,0}$ from (2.24) and (2.25). However, for the bubble part, we have $\lambda_0 - 1$ instead of $\lambda_0$. This could be fixed by rewriting $\lambda_0 \to \lambda_0 + 1$, since $\lambda_0$ is a free parameter. However, the boundary tadpole part $i_{\lambda_0;0,-1}$ will then become $i_{\lambda_0+1;0,-1}$, i.e., it acquires a dimension shift, which is a common feature of the parametric IBP relation. To deal with it, using the parametric form of tadpoles and taking $\frac{\partial}{\partial x_1}$ and $\frac{\partial}{\partial x_3}$, we can get two IBP relations, from which we solve (2.28). Putting (2.28) into (2.24) and (2.25), we can solve for $i_{\lambda_0-1;0,1}$. After shifting $\lambda_0 \to \lambda_0 + 1$, we finally get (2.29). Translating back to the scalar basis, we get the reduction of $I_2(1,2)$ as in (2.30) with the coefficients (2.31). The result is confirmed with FIRE6 [33,34]. Improvement of parametric IBP As we have seen from the previous subsection, the IBP relation given in (2.15) will contain integrals with a dimension shift, which makes the reduction program a bit troublesome. We would like a recurrence relation without the dimension shift. As reviewed in the introduction, there are several references that dealt with this or related problems. However, for our one-loop integrals, the function $F(x)$ is a homogeneous function of the $x_i$ of degree two⁶. This good property makes the related syzygy equations simple, so they can be solved directly⁷.
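To make this degree-two structure explicit, consider the textbook bubble polynomial (an illustration with propagators $l^2 - m_1^2$ and $(l+p)^2 - m_2^2$; sign conventions may differ from those used later, and the generalized polynomial of this paper, which also involves $x_{n+1}$, has the same quadratic-form shape):

```latex
F(x_1,x_2) = -\,p^2 x_1 x_2 + (x_1+x_2)\left(m_1^2 x_1 + m_2^2 x_2\right)
           = \begin{pmatrix} x_1 & x_2 \end{pmatrix}
             \begin{pmatrix}
               m_1^2 & \tfrac{1}{2}\!\left(m_1^2+m_2^2-p^2\right) \\[2pt]
               \tfrac{1}{2}\!\left(m_1^2+m_2^2-p^2\right) & m_2^2
             \end{pmatrix}
             \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},
```

so $F = x^T \hat{A}\, x$ with a symmetric matrix $\hat{A}$ read off directly from the masses and kinematics of the graph.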
In this paper we develop a direct algorithm for writing down IBP relations without dimension shift and without the unwanted terms of higher propagator power. In the generalized parametric representation, our improved IBP relation is obtained by multiplying (2.15) by a degree-zero coefficient $z_i$. Since the degree of the new integrand does not change, the IBP identity still holds. Summing these identities together we get (2.32). (Note that the summation over i runs from 1 to n + 1, where we have included the auxiliary parameter $x_{n+1}$; this is an apparent difference from the traditional Feynman parametrization.) Since the second, boundary term involves integrals of sub-topologies, we focus on the first term. Expanding it, we get (2.33). From (2.13) one can see that the power $\lambda_0$ of F is related to the dimension. To cancel the dimension shift, we need to choose proper coefficients $z_i$ satisfying the condition (2.34). Since the coefficients $z_i$ are not polynomials, (2.34) is not a "normal syzygy equation" and one cannot directly use the techniques developed for polynomial rings. In [2], Chen developed a method based on the lift and down operators. Here, for one-loop integrals, we can solve it directly with some free auxiliary parameters, as we will show shortly. When the solutions are put back into the IBP recurrence relation, these free parameters can be chosen to cancel both the dimension shift and the unwanted terms with higher propagator power, which leads to a simpler recurrence relation.

Now let us explain the idea in detail. Note that in the one-loop case the homogeneous function F is of degree two in the $x_i$, so we can write

$$F(x) = \hat{x}^T \hat{A}\, \hat{x},$$

where $\hat{A}$ is a symmetric matrix. (In general it is not necessary to take $\hat{A}$ symmetric; this is just one choice. But since we will later add an antisymmetric matrix $\hat{K}_A$, it is convenient to adopt the convention that $\hat{A}$ is symmetric, which simplifies the following calculation.) Thus we have

$$\hat{x} = \hat{K}\,\partial F, \qquad \hat{K} = \frac{1}{2}\hat{A}^{-1}, \qquad (2.36)$$

where the coefficient matrix $\hat{K}$ is a real symmetric matrix. In fact we can do more. Using the trick that $(\partial F)^T \hat{K}_A\, (\partial F) = 0$ for any antisymmetric matrix $\hat{K}_A$ (2.37), we can add (2.37) to (2.36) to get the more general form

$$\hat{x}\cdot\partial F = (\partial F)^T\, \hat{Q}\, (\partial F), \qquad \hat{Q} = \hat{K} + \hat{K}_A. \qquad (2.38)$$

Since the arbitrary antisymmetric matrix $\hat{K}_A$ has size (n+1)×(n+1), there are $\frac{n(n+1)}{2}$ free independent parameters $a_1, \cdots, a_{n(n+1)/2}$ in the matrix $\hat{Q}$ of (2.38). Now, putting (2.38) back into (2.34), we can solve for $\hat{z}$ as

$$\hat{z} = B\, \hat{Q}\, (\partial F). \qquad (2.39)$$

Since z is of degree zero, B must be a homogeneous function of degree −1; in this article we choose $B = \frac{1}{x_{n+1}}$. The choice of z given by (2.39) guarantees the removal of the dimension shift from the IBP relation. Furthermore, by choosing particular values of the free parameters in $\hat{Q}$, we can cancel some unwanted terms. In the later computations we will give some examples to illustrate this trick.

Reduction of one-loop integrals

As mentioned in the introduction, one motivation of this paper is to complete the reduction of the scalar basis with general powers. Using the unitarity cut method in [3], we were able to find the reduction coefficients of all basis integrals except the tadpole. In this section we use the improved IBP relation (2.32) to find the tadpole coefficients as well as the other coefficients.

The bubble's case

Let us start from the bubble topology. Although we have already treated it in (2.30), here we redo it using the improved IBP relation (2.32). The parametric form of the bubble is given by (2.18), (2.19) and (2.20).
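The construction (2.36)-(2.39) can be checked symbolically. The sketch below uses a generic symmetric matrix as a placeholder for $\hat{A}$ and a generic antisymmetric $\hat{K}_A$; it verifies that $z = \hat{Q}\,\partial F / x_{n+1}$ is homogeneous of degree zero and that $\sum_i z_i\,\partial_i F = 2F/x_{n+1}$, i.e., the contraction stays proportional to F itself, so no shifted power of F (and hence no shifted dimension) appears, for any values of the free parameters:

```python
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t')
x = sp.Matrix([x1, x2, x3])

a = sp.symbols('a1:7')
A = sp.Matrix([[a[0], a[1], a[2]],
               [a[1], a[3], a[4]],
               [a[2], a[4], a[5]]])   # generic symmetric placeholder
b = sp.symbols('b1:4')
K_A = sp.Matrix([[0, b[0], b[1]],
                 [-b[0], 0, b[2]],
                 [-b[1], -b[2], 0]])  # antisymmetric: 3 free parameters

F = (x.T * A * x)[0]
gradF = sp.Matrix([sp.diff(F, xi) for xi in x])

# z = Q * gradF / x_{n+1} with Q = A^{-1}/2 + K_A, B = 1/x3
z = (A.inv() / 2 + K_A) * gradF / x3

# degree-0 check: z is invariant under the rescaling x -> t*x
scaled = z.subs(list(zip(x, t * x)), simultaneous=True)
assert sp.simplify(scaled - z) == sp.zeros(3, 1)

# the contraction is 2F/x3 for ANY antisymmetric K_A: its quadratic form
# vanishes, so the free parameters b1..b3 drop out of this condition
assert sp.simplify((z.T * gradF)[0] - 2 * F / x3) == 0
```

The free parameters survive everywhere else in the recurrence, which is precisely what the later examples exploit to kill unwanted terms.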
Using our labels, we can write down the explicit F for the bubble and read off the corresponding matrix $\hat{A}$ from (2.18)-(2.20). Adding the antisymmetric matrix $\hat{K}_A$, we obtain the matrix $\hat{Q}$.

Deriving the recurrence relation

Expanding (2.32), we get the IBP recurrence relation (3.4), where $\delta_2$ is the boundary term, which we will compute later; the other coefficients are given in (3.5). Since we want the reduction of $I_2(1,2)$, starting from m = n = 0, we want to eliminate the terms with indices (m+1, n) and (m+1, n−1) while keeping the term with index (m, n+1). Thus we impose $c_{m+1,n} = 0$ and $c_{m+1,n-1} = 0$, which can be satisfied by choosing the free parameters as in (3.6). (For this example, one can check that no further constraint can be added to fix $a_1$.) After this choice the matrix $\hat{Q}$ takes a definite form, and we are left with five terms with non-zero coefficients.

The boundary $\delta_2$ term: the $\delta_2$ term is given by (3.8), where $\lambda_i$ represents the power of $x_i$. It is worth emphasizing that, since $z_i$ contains $x_i$, the total power $\lambda_i$ of $x_i$ is not equal to m, n, $\lambda_3$ in general. Expanding it, and remembering our extended notation explained under (2.22), the $\delta_2$ term can be written as (3.11), where the subscript r in $\delta_{2;r}$ and $Q_{ij;r}$ means that $a_2$ and $a_3$ are to be replaced by the values in (3.6). Since m and n cannot be −1, the first and fifth terms are actually zero.

Now we can use (3.4) and (3.11) to get our result directly. Setting m = 0 and n = 0, all the other terms in (3.4) vanish, and we are left with (3.13) and its coefficients. (When setting m = n = 0, apart from the boundary term $\delta_2$, among the other seven terms in (3.4) the coefficients of the second and third terms have been chosen to be zero; for the remaining five terms, one can show that $c_{m-1,n+1}$, $c_{m,n-1}$, $c_{m-1,n}$ vanish by using the last line of (3.5). There is another technical point: when m = n = 0, the seventh term contains $i_{\lambda_0;-1,0}$, which looks like the object defined in (3.10), but they are in fact different, since the one appearing in (3.4) carries the measure $d\Pi^{(3)}$ while the one in (3.10) carries the measure $d\Pi^{(2)}$.) From this we can directly write down the answer. Translating back to scalar integrals, it is

$$I_2(1,2) = c_{12\to11} I_2(1,1) + c_{12\to10} I_2(1,0) + c_{12\to20} I_2(2,0) + c_{12\to01} I_2(0,1) + c_{12\to02} I_2(0,2), \qquad (3.16)$$

with $c_{12\to20} = 0$ and the remaining coefficients as given in (2.30). (The reduction of a tadpole with higher power is simple: noticing that $I_2(1,0) \propto (m_1^2)^{D/2-1}$ by dimensional analysis, one can take derivatives with respect to $m_1^2$ to get the wanted reduction coefficients.)

The general case of bubbles

Now let us consider more complicated examples, i.e., bubbles with general higher powers of the propagators. With the choice (3.6) we obtained the IBP recurrence relation (3.7), and using it we can reduce the bubble $i_{\lambda_0;m,n+1}$ to simpler bubbles with lower total propagator power and no higher power of $D_2$. Similarly, by choosing different values of $a_2$ and $a_3$, we can get another IBP recurrence relation that reduces the integral to those with no higher power of $D_1$; the choice is given in (3.20), with the corresponding IBP recurrence (3.21).

In the example $I_2(1,3)$ we just need to reduce the power of $D_2$ from 3 to 1. The strategy is to use (3.7) twice. In the first step, setting m = 0 and n = 1 in (3.7), we get (3.24). For the first term in (3.24), setting m = 0 and n = 0 in (3.7) again, we have (3.25). Putting (3.25) into (3.24) and using the reduction of the tadpole, we get the reduction of $I_2(1,3)$ with explicit coefficients. The result is confirmed with FIRE6. In this example we needed to solve only 2 equations to reduce the bubble topology.
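The bookkeeping behind "apply (3.7) twice" is mechanical and easy to automate. The sketch below is a toy reduction driver: the index pattern and the symbolic coefficients c1-c4 are placeholders standing in for the actual coefficients of (3.7) (which depend on the masses and the external momentum), so it only illustrates the recursion structure, including the two applications needed for $I_2(1,3)$:

```python
import sympy as sp
from functools import lru_cache

c1, c2, c3, c4 = sp.symbols('c1:5')  # stand-ins for the true coefficients

def step(m, n):
    """Placeholder expansion of i_{m,n+1} into integrals of lower total power.

    Modeled on the structure of (3.7): the (m+1, n) and (m+1, n-1) terms are
    absent because the free parameters were chosen to cancel them.
    """
    return {(m, n): c1, (m - 1, n + 1): c2, (m, n - 1): c3, (m - 1, n): c4}

@lru_cache(maxsize=None)
def reduce_bubble(m, n):
    """Rewrite i_{m,n} in terms of i_{0,0} and pinched (boundary) integrals.

    A negative index means a pinched propagator, i.e. a sub-topology, and is
    kept as-is, mirroring the boundary/tadpole terms in the text.
    """
    if m < 0 or n <= 0:
        return {(m, n): sp.Integer(1)}
    result = {}
    for idx, coeff in step(m, n - 1).items():        # expand i_{m,n} once
        for idx2, coeff2 in reduce_bubble(*idx).items():
            result[idx2] = result.get(idx2, 0) + coeff * coeff2
    return result

# I_2(1,3) corresponds to i_{0,2}: exactly two applications of the rule
print(reduce_bubble(0, 2))
```

Since the total propagator power strictly decreases at every step, termination is automatic, and the number of recurrence applications (2 here, 14 for $I_2(3,5)$ below) is just the number of recursive expansions performed.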
With the same idea applied repeatedly, we need to solve only 14 equations to completely reduce $I_2(3,5)$. The analytic expression obtained from these 14 equations has also been confirmed by FIRE6. (In general, we could repeat the same procedure to derive the tadpoles' IBP recurrence relation and calculate them step by step. Here, for simplicity, we just use the trick that $I_2(1,0) \propto (m_1^2)^{D/2-1}$ by dimensional analysis.)

The triangle's case

The triangle $I_3(m+1, n+1, q+1)$ is given by its momentum-space definition, and its parametric form follows. Using the expression (2.10), we can read out the matrices $\hat{A}$ and $\hat{Q}$, and expanding (2.32) gives the recurrence relation (3.39) with coefficients (3.40). Setting m = n = q = 0, the relation reduces to

$$c_{0,0,0}\, i_{\lambda_0;0,0,0} + c_{0,0,1}\, i_{\lambda_0;0,0,1} + \delta_{3;000} = 0 \qquad (3.42)$$

with explicit coefficients. One can see that in (3.42) only two terms of triangle topology are left: one is the scalar basis and one is the target we want to reduce. The other five terms in (3.39) disappear by virtue of the expressions in (3.40). (Since the boundary term has only one $x_i = 0$, it reduces to the sub-topologies with only one propagator pinched.)

Summary and further discussion

In this paper we have considered one-loop scalar integrals in the parametric representation given by Chen. In the recurrence relations of this representation there are usually some terms that we do not want, as well as terms with shifted dimension, which makes the calculation neither easy nor efficient. In Chen's later paper [2], he used a method based on non-commutative algebra to cancel the dimension shift. Differently from other methods, in the one-loop case we have used a direct method, solving linear systems of equations, to simplify the IBP recurrence relation in the parametric representation. Benefiting from the fact that F is a homogeneous function of the $x_i$ of degree two in the one-loop situation, we can solve for the $x_i$ in terms of the $\partial F/\partial x_i$ with some free parameters. Then, combining all the IBP identities with particular coefficients $z_i$ and choosing particular values of the free parameters, we succeeded in cancelling the dimension shift and the terms with higher total power. As a complement to the tadpole coefficients of the reduction in our previous paper, we calculated several examples and gave the analytic results of the reduction.

For further research, several questions need to be considered. First, in the previous calculations we saw that the coefficients $z_i$ we constructed are not polynomial, since they carry a denominator of the form $x_{n+1}^{\gamma}$, so we cannot directly use the techniques of syzygies. Second, the application of Chen's method to higher loops is definitely another future direction. In that case the homogeneous function F(x) is of degree L + 1, where L is the number of loops; for the higher-loop case we should consider how to construct the coefficients $z_i$ efficiently and find a relation similar to (2.37) to cancel the terms we do not need. Thirdly, the sub-topologies are completely determined by the boundary term in the parametric representation, and this may lead to some simplification of the calculation.
2021-09-22T01:16:03.754Z
2021-09-21T00:00:00.000
{ "year": 2021, "sha1": "93c7fdb62521dd4f0901281d3f5bdb7d86b55a6a", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1674-1137/ac7a1c/pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "93c7fdb62521dd4f0901281d3f5bdb7d86b55a6a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
7662070
pes2o/s2orc
v3-fos-license
The Influence of Recombinant Human Erythropoietin on Apoptosis and Cytokine Production of CD4+ lymphocytes from Hemodialyzed Patients Recombinant human erythropoietin (rhEPO) treatment of hemodialyzed (HD) patients normalizes the altered phenotype of CD4+ lymphocytes and restores the balance of Th1/Th2 cytokines. We decided to test how the presence of rhEPO in cell culture modulates cytokine production of CD4+ lymphocytes in HD patients with a stable hemoglobin level and with expression of activation antigens on stimulated CD4+ lymphocytes similar to that observed in healthy individuals. We also tested whether the presence of rhEPO in cell culture protects stimulated CD4+ lymphocytes of HD patients from apoptosis. Peripheral blood mononuclear cells (PBMC) of HD patients were stimulated with an immobilized anti-CD3 antibody with or without the addition of rhEPO. The percentage of apoptotic CD4+ lymphocytes and the level of Th1/Th2 cytokines in culture supernatants were measured with flow cytometry. HD patients showed a decrease in the percentage of apoptotic CD4+ cells after stimulation with the anti-CD3 antibody combined with rhEPO. The levels of IFN-γ and IL-10 were increased while the level of TNF-α was decreased in the presence of rhEPO in cell cultures from HD patients. These results confirm the role of rhEPO signaling in T lymphocytes of HD patients. Introduction Chronic renal failure is accompanied by a deficiency state in both cell-mediated and humoral immunity, which is deepened by hemodialysis (HD) [1]. Direct contact of the patient's peripheral blood mononuclear cells (PBMC) with an artificial membrane increases the production of pro-inflammatory cytokines: tumor necrosis factor-alpha (TNF-α), interleukin (IL)-1 and IL-6 [2,3]. Moreover, HD patients demonstrate a decreased production of IL-2 and interferon gamma (IFN-γ) [4,5]. The deficient production of cytokines is probably related to changes in the phenotype of CD4+ lymphocytes from HD patients, which exhibit decreased expression of the major co-stimulatory CD28 antigen and of the main activation markers CD25 and CD69 [6]. Recombinant human erythropoietin (rhEPO), administered to HD patients to correct anemia, can influence both the phenotype of T lymphocytes and the production of cytokines. Our previous studies have shown that rhEPO treatment normalizes the impaired expression of the CD28 and CD69 antigens on CD4+ lymphocytes [7]. Additionally, several months of therapy restore the balance of cytokines by reducing the level of TNF-α [8,9] and increasing the levels of IL-2 and IL-10 [4,8,9] in whole blood cell cultures of HD patients during treatment. However, the levels of the individual cytokines change in different ways during rhEPO treatment. The levels of TNF-α and IL-10 change as soon as the hemoglobin level exceeds the optimal value of 10 g/dl and then remain stable during treatment, while the level of IL-2 increases continuously during treatment [9]. This observation suggests that the normalization of the levels of TNF-α and IL-10 was reached shortly after the start of rhEPO therapy. The increase in the level of IL-2 was achieved only after several months of rhEPO administration, so this change was secondary with respect to the changes in the levels of TNF-α and IL-10. The changes in the levels of these cytokines probably depend on various factors. For example, the level of IL-2 may depend on increased expression of CD28 on CD4+ lymphocytes and/or a decreased percentage of CD152+ lymphocytes.
However, it is still not clear which cytokines are regulated by the presence of erythropoietin in the circulation. In order to better understand these mechanisms, we decided to test how the presence of rhEPO in cell culture influences cytokine production by stimulated CD4+ lymphocytes from HD patients with a stable hemoglobin level and with expression of the CD28, CD69 and CD25 antigens on stimulated CD4+ lymphocytes similar to that observed in healthy individuals. The repetitive HD procedure has also been reported to induce apoptosis of T lymphocytes, which leads to the T-cell lymphopenia sometimes observed in these patients [10]. Therefore, we also tested whether the presence of rhEPO in cell culture can protect stimulated CD4+ lymphocytes of HD patients from apoptosis. Patients and Methods Patients For this study we chose 13 HD patients (11 men and 2 women, mean age 59±13.99 years) with a hemoglobin level above 10 g/dl (mean hemoglobin 12.53±1.57 g/dl) and with expression of the CD28, CD69 and CD25 antigens on stimulated CD4+ lymphocytes similar to that observed in healthy controls. Seven HD patients received epoetin alpha or beta, and the epoetin doses were adjusted to hemoglobin levels. Six HD patients who did not receive rhEPO were also included in the study group. These patients did not require rhEPO administration, since their hemoglobin was maintained at the correct level. They showed a similar phenotype of stimulated CD4+ lymphocytes and similar levels of Th1 and Th2 cytokines in culture supernatants as the HD patients treated with rhEPO and the healthy controls (data not shown). None of the patients suffered from any infection, inflammation, malnutrition, malignancy or blood loss during the study. The study was approved by the Ethical Committee of the Medical University of Gdańsk. Assessment of the Percentage of Apoptotic CD4+ Lymphocytes and the Level of Th1/Th2 Cytokines Thirty milliliters of venous peripheral blood from each HD patient was collected before the HD session in tubes containing EDTA as the anti-coagulant. Peripheral blood mononuclear cells (PBMC) were isolated from venous peripheral blood as previously described [6], stimulated with an immobilized anti-CD3 antibody (125 ng/ml) with or without the addition of rhEPO (epoetin alpha, 0.1 U/ml), and incubated for 3 days at 37°C and 5% CO2. The stimulated cells were then collected in order to estimate the percentage of apoptotic CD4+ lymphocytes. Culture supernatants were collected and frozen at −80°C for the assessment of cytokine production. Collected cells were stained with an RPE-Cy5-conjugated anti-CD4 monoclonal antibody (DAKO, Denmark) and PE-conjugated annexin V (BD Pharmingen, USA) and analyzed by flow cytometry on a FACScan (Becton Dickinson, USA). A Cytometric Bead Array (CBA™, BD Biosciences, USA) was used to estimate the levels of Th1/Th2 cytokines produced by the stimulated PBMC. Cytokine concentrations were analyzed with the Becton Dickinson CBA software. Fig. 1 Comparison of the percentage of CD4+ cell apoptosis depending on the presence of rhEPO in cell culture. Lymphocytes were selected on the basis of their forward and side scatter characteristics. An annexin V binding assay was used to distinguish between viable and apoptotic cells: apoptotic CD4+ lymphocytes were annexin V positive (CD4+ Annexin V+ cells). The graph shows the percentage of CD4+ Annexin V+ cells in HD patients.
Midpoints of the figures represent medians, boxes the 25th and 75th percentiles, and whiskers the minimum and maximum of all data; p<0.05, Wilcoxon signed-rank test. Analysis and Statistics Data were analyzed with Cyflogic, version 1.2.1 (©Perttu Terho and ©CyFlow Ltd). Statistical analysis was done using the Statistica program, version 8 (StatSoft, Poland). The significance tests were chosen according to the data distribution. The level of significance in all tests was p≤0.05. Results and Discussion The expression of the erythropoietin receptor (EPO-R) has been reported in many non-hematopoietic cells, including lymphocytes [11]. We have recently shown that rhEPO increases the phosphorylation of signal transducer and activator of transcription 5 (STAT5) in stimulated CD4+ lymphocytes [12]. Phosphorylation of STAT5 in T lymphocytes plays an important role in promoting their survival. Meanwhile, the repetitive HD procedure has been reported to induce apoptosis of T lymphocytes [10]. Therefore, we examined whether the presence of rhEPO in cell culture can protect stimulated CD4+ lymphocytes of HD patients from apoptosis, and we have shown that, when present at a concentration similar to the physiological level, rhEPO promotes the survival of CD4+ lymphocytes. HD patients showed a significant decrease in the percentage of apoptotic CD4+ cells (CD4+ Annexin V+ cells) after stimulation with the anti-CD3 antibody combined with rhEPO as compared to cells stimulated with the anti-CD3 antibody alone (Fig. 1). This effect has been observed in other cell types, and our team is the first to describe it in T lymphocytes. We also investigated the levels of Th1 cytokines (IL-2, IFN-γ, TNF-α) and Th2 cytokines (IL-4, IL-5, IL-10) in culture supernatants of lymphocytes stimulated with the anti-CD3 antibody with or without the addition of rhEPO. We did not observe any changes in the level of IL-2 in culture supernatants of lymphocytes stimulated with the anti-CD3 antibody in the presence of rhEPO. Since the phenotype of stimulated CD4+ lymphocytes from these patients and the level of IL-2 in culture supernatants are similar to those observed in healthy individuals, we suspect that the production of IL-2 depends on the level of the CD28 antigen on CD4+ lymphocytes, as confirmed by a positive correlation between these parameters (data not shown). This relationship has already been described by other authors [13]. In our study, the presence of rhEPO in cell culture increased the levels of IFN-γ and IL-10 (Fig. 2), whose production can also be enhanced by IL-2 [14,15]. Since IL-2 and EPO act through receptors that belong to the same family and share common signaling pathways [16], we believe that rhEPO acts like IL-2. The level of TNF-α was decreased in the presence of rhEPO in cell cultures from HD patients (Fig. 2). It is not clear whether the down-regulation of TNF-α was caused by rhEPO or rather by IL-10, whose level was increased in the presence of rhEPO. The levels of IL-4 and IL-5 remained unchanged (Fig. 2). This is not the first time we have demonstrated the impact of rhEPO on cytokine levels in HD patients. Our study shows that rhEPO affects the production of cytokines that are regulated through signaling pathways involving, for example, STAT5. The presence of rhEPO appears to regulate the levels of TNF-α and IL-10, depending on the initial level of these cytokines. A similar experiment was carried out in a group of healthy people.
Healthy controls had a lower percentage of apoptotic CD4+ cells after stimulation with the anti-CD3 antibody than HD patients, so the presence of rhEPO in cell culture did not influence it further. Healthy controls also did not show any changes in Th1 or Th2 cytokine levels when rhEPO was added to the cell culture; at the same time, their level of TNF-α was already decreased while their levels of IL-10 and IFN-γ were increased compared to HD patients (data not shown). Conclusions These results confirm once more the role of rhEPO signaling in T lymphocytes of HD patients. In our opinion, rhEPO protects CD4+ lymphocytes from apoptosis and restores the balance of cytokines by reducing the level of TNF-α and increasing the level of anti-inflammatory IL-10, though the mechanism of action of rhEPO on T lymphocytes is still unclear. At the same time, these results suggest that the improved production of IL-2 is not directly dependent on the presence of rhEPO but seems to be a consequence of long-term rhEPO treatment. These observations confirm that rhEPO administration not only has a beneficial effect on red blood cells but also regulates the functioning of immune cells.
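For readers who want to reproduce the style of the paired comparison reported above, here is a minimal sketch using the same test named in the methods (Wilcoxon signed-rank). The per-patient apoptosis percentages below are invented placeholders, matched only in cohort size (n = 13); they are not the study's data:

```python
# Paired, non-parametric comparison of % apoptotic CD4+ (annexin V+) cells
# per patient, with vs. without rhEPO in the culture. Values are invented.
from scipy.stats import wilcoxon

anti_cd3_only = [18.2, 22.5, 15.9, 30.1, 25.4, 19.8, 27.3,
                 21.0, 16.4, 24.7, 20.2, 28.9, 17.5]
anti_cd3_rhepo = [14.1, 19.0, 13.2, 26.5, 22.8, 15.6, 24.9,
                  18.3, 15.0, 20.1, 17.7, 25.2, 14.8]

stat, p = wilcoxon(anti_cd3_only, anti_cd3_rhepo)
print(f"Wilcoxon signed-rank: W={stat}, p={p:.4f}")  # significant if p <= 0.05
```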
2016-05-12T22:15:10.714Z
2012-11-20T00:00:00.000
{ "year": 2012, "sha1": "565574186c12c49c67557f47fdd828562c93cc3a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10875-012-9835-4.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "88e8d4072707b9ea7289db1127cc4bdfd5e2fd10", "s2fieldsofstudy": [ "Biology", "Medicine", "Engineering" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
235452971
pes2o/s2orc
v3-fos-license
Immune Cell Infiltration as Signatures for the Diagnosis and Prognosis of Malignant Gynecological Tumors Background Malignant gynecological tumors are the main cause of cancer-related deaths in women worldwide and include uterine carcinosarcomas, endometrial cancer, cervical cancer, ovarian cancer, and breast cancer. This study aims to determine the association between immune cell infiltration and malignant gynecological tumors and to construct signatures for diagnosis and prognosis. Methods We acquired malignant gynecological tumor RNA-seq transcriptome data from the TCGA database. Next, the "CIBERSORT" algorithm calculated the infiltration of 22 immune cell types in malignant gynecological tumors. To construct the diagnosis and prognosis signatures, step-wise regression and LASSO analyses were applied, and a nomogram and immune subtypes were further identified. Results Notably, immune cell infiltration plays a significant role in tumorigenesis and development. There are obvious differences in the distribution of immune cells between normal and tumor tissues. Resting NK cells, M0 Macrophages, and M1 Macrophages participated in the construction of the diagnostic model, with an AUC value of 0.898. LASSO analyses identified a risk signature including CD8 T cells, activated NK cells, Monocytes, M2 Macrophages, resting Mast cells, and Neutrophils, and the prognostic value of the risk signature was demonstrated. We identified five immune subtypes according to consensus clustering, of which immune subtype 3 presented the highest risk. Conclusion We identified diagnostic and prognostic signatures based on immune cell infiltration. Thus, this study provides a strong basis for the early diagnosis and effective treatment of malignant gynecological tumors. INTRODUCTION Malignant gynecological tumors are a main cause of cancer-related death in women worldwide. Common malignant gynecological tumors include uterine carcinosarcomas and endometrial, cervical, and ovarian cancer; breast cancer is also typically considered alongside them (Fahad Ullah, 2019). These cancers are closely related to reproductive factors and share common characteristics, suggesting similar etiological pathways or mechanisms (Kelsey et al., 1993; Bates and Bowling, 2013). Breast cancer has surpassed lung cancer to become the most frequently diagnosed cancer and a leading cause of cancer mortality, and the mortality of the other female reproductive cancers should not be underestimated (Sung et al., 2021). Thus, it is of great significance to identify effective biomarkers for improving the diagnosis and prognosis of patients with these cancers. The main treatments for malignant gynecological tumors include surgery, chemotherapy, and radiotherapy (Denschlag and Ulrich, 2018; Chandra et al., 2019; Koh et al., 2019; Rossi et al., 2019). Among them, radical surgery is usually the intervention of choice. Chemotherapy and radiotherapy are also performed as adjuncts to surgery, to reduce tumor size and the risk of recurrence (Wang et al., 2011; Bestvina and Fleming, 2016; Matei et al., 2019). Occasionally, local palliative treatments are necessary for alleviating the pain that patients experience (Davidson et al., 2018). Nevertheless, many needs remain unaddressed; advanced-stage disease is still incurable, and numerous patients die of gynecological tumors annually. As research on the immune system has deepened, immunotherapy has become a very promising treatment method that can be used after surgery and chemotherapy.
Different immunotherapy strategies are adopted for different categories of immunocompromised patients. However, complications such as specific antigen recognition and the management of adverse reactions remain unresolved (Tagliabue et al., 2018). Developing methods to improve toxicity against cancers, identify more specific targets, and improve efficacy and safety are the difficulties we must overcome (Pandolfi et al., 2018). Recently, the use of immunotherapies to treat cancer patients has become a reality (Gajewski et al., 2013). More studies are increasingly focused on the tumor microenvironment, which can provide potential biomarkers to increase the accuracy of diagnoses and prognoses and offer opportunities for new cancer therapy strategies (Masugi et al., 2019; Yang et al., 2019). Infiltrating immune cells are an essential part of the tumor microenvironment and may exhibit tumor-antagonizing or tumor-promoting effects (Wang et al., 2019; Lei et al., 2020). While the immune microenvironment has been analyzed in various cancer studies (Stanton and Disis, 2016; Karn et al., 2017; Zhang et al., 2020), few comprehensively analyze the role of immune cell infiltration in malignant gynecological tumors. CIBERSORT (Cell-Type Identification by Estimating Relative Subsets of RNA Transcripts) is a new algorithm for estimating the quantities of immune cells. It is based on 547 genes and covers 22 common types of human immune cells (Newman et al., 2015). Moreover, it can determine the immune cell landscape of various tumors and select related biomarkers for diagnosis and prognosis. Much research has been carried out with CIBERSORT to further study the tumor microenvironment (Blum et al., 2018). Our study estimated the proportions of 22 immune cell types in malignant gynecological tumors based on the CIBERSORT algorithm, using the sample expression data downloaded from TCGA. We further constructed diagnosis and prognosis models, which provide a strong basis for the early diagnosis and effective treatment of malignant gynecological tumors. Data Acquisition The data used in the study were all obtained from open-source databases. The female reproductive system cohort used to determine the immune signature consisted of endometrial, uterine, ovarian, and cervical cancer data. For more comprehensive results, female breast cancer data were also included. We retrieved all RNA-seq transcriptome cancer data from The Cancer Genome Atlas (TCGA) database (Blum et al., 2018). Due to the shortage of normal samples in the TCGA database, data from the GTEx database (mainly from autopsies) were selected to expand the subset of normal samples. The RNA-seq transcriptome data were then normalized as fragments per kilobase of exon model per million mapped fragments (FPKM). The exact sample numbers, data sources, and primary organs are listed in Table 1; in total, 2,562 data samples and 25,496 genes were obtained. Furthermore, we downloaded the patients' clinicopathological information, which consisted of age, gender, survival time, outcome, and TNM stage, from the TCGA database with the approval of the TCGA. Samples with missing or incorrect follow-up data or less than 30 days of follow-up were excluded from the prognostic analysis; however, they were included in the diagnostic analysis.
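The inclusion/exclusion step just described is easy to express programmatically. The following is a minimal sketch; the file and column names are assumptions, since TCGA clinical exports vary, and the toy table merely stands in for the real download:

```python
import pandas as pd

# Toy clinical table standing in for the TCGA clinicopathological download.
clinical = pd.DataFrame({
    "patient_id":    ["P1", "P2", "P3", "P4"],
    "followup_days": [420, 12, None, 95],
    "vital_status":  ["Alive", "Dead", "Alive", "Dead"],
})

# Diagnostic analysis keeps all samples.
diagnostic_cohort = clinical.copy()

# Prognostic analysis drops missing follow-up and < 30 days of follow-up.
prognostic_cohort = (clinical
                     .dropna(subset=["followup_days"])
                     .query("followup_days > 30"))
print(prognostic_cohort)
```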
Analysis of Infiltrating Immune Cell Components To estimate the immune cell components in each sample, CIBERSORT was used with the LM22 signature and 1,000 permutations (Newman et al., 2015). We used a panel of 22 immune cell types consisting of B cells, T cells, natural killer cells, macrophages, dendritic cells, and myeloid subsets. CIBERSORT computes a P-value for the deconvolution of each sample via Monte Carlo sampling, providing a measure of confidence in the results. In our analysis, P < 0.05 indicates that the results calculated by CIBERSORT are reliable; consequently, only 506 samples (P < 0.05) were used in the follow-up analysis. The final output estimates were normalized for each sample, so that the immune cell fractions of each sample summed to 1. Diagnostic Analysis The diagnostic analysis was carried out among the eligible samples, which were randomly split into training and validation cohorts at a 5:5 ratio using the R package "caret". Logistic regression was used to construct the diagnostic signature on the training group, and step-wise regression was used to screen the variables. Receiver operating characteristic (ROC) curves were used to analyze the predictive efficacy of the signatures, and the area under the curve (AUC) was calculated. This result was further tested and verified in the training cohort, the validation cohort, and the entire dataset. Prognostic Analysis Only the samples that met the inclusion criteria, with complete clinical and follow-up information, were included in the prognostic analysis. The eligible patients were separated into training and validation cohorts at a 7:3 ratio using the R package "caret", and a LASSO analysis was then conducted to obtain a predictive signature from the training cohort. The coefficients characterizing the risk score were obtained with the least absolute shrinkage and selection operator (LASSO) algorithm using the R package "glmnet" (https://cran.r-project.org/web/packages/glmnetcr/index.html). A risk score was calculated by applying the following formula: risk score = Σ_i Coef_i × x_i, where Coef_i is the coefficient and x_i is the relative abundance of each candidate immune cell type. The samples in the training and validation groups were divided into high- and low-risk groups, with the median risk score used as the cutoff point. A Kaplan-Meier analysis was conducted to assess the difference in overall survival between the risk groups in the training set, the validation set, and the entire dataset. Validation of the Diagnostic Signature and Prognostic Signature in GEO Datasets We constructed other cohorts from the Gene Expression Omnibus (GEO) to further demonstrate the effectiveness of the diagnostic and prognostic signatures. These cohorts were selected with a search scope limited to "Homo sapiens" and the chip platform limited to GPL57, GPL7759, and other common platforms. Furthermore, cohorts that met the following exclusion criteria were not selected: (i) datasets that used cell lines or animal samples; (ii) datasets in which the patients' survival information was not complete. After confirmation, CIBERSORT was again used to estimate the immune components, followed by verification of the reliability and validity of the diagnostic and prognostic signatures. Nomogram Construction Nomograms are simplified models for predicting cancer prognosis as a single numerical value. The length of each line represents the indicator's impact on the results, with a longer line representing a greater impact. The nomogram is applied by adding together the point scales of all the variables.
The total points projected on the bottom scales represent the probability of 3-year and 5-year overall survival. The R package "rms" was used to draw the nomogram, and the R package "survivalROC" was used to compile the ROC curves. Identification of Immune Subtypes We performed an unbiased grouping of all patients using consensus clustering analysis with the R package "ConsensusClusterPlus" to explore the correlation between different immune cell infiltration subtypes and the prognosis of patients. In addition, we conducted a survival analysis of the various immune subtypes. Statistical Analysis R software (version 4.0.3) was used for all statistical analyses, and the data are shown as mean ± standard deviation. The Wilcoxon test and one-way analysis of variance (ANOVA) were used to analyze the differences between two groups and among multiple groups, respectively. The overall differences in survival among groups were quantified via Kaplan-Meier analysis and a log-rank test. Results were regarded as statistically significant when P < 0.05. Patient Characteristics Immune cell infiltration is necessary for the initiation and progression of cancer. We developed selection criteria to assess the biological role of immune cell infiltration in malignant gynecological tumors and downloaded the data from the TCGA and GTEx databases. The samples with CIBERSORT P < 0.05 were used for further analysis. In total, 2,057 patients were diagnosed with female reproductive system tumors (181 UCEC samples, 306 CESC samples, 427 OV samples, and 1,099 BRCA samples), and 494 normal samples were selected. The detailed distribution of the patients in each group is summarized in Table 1, and the workflow of the study is illustrated in Figure 1. Composition of Immune Cells in Malignant Gynecological Tumors The distribution of the immune cells within and across clinical groups of the malignant gynecological tumors is shown in Figure 2A. We can deduce that the five most common immune cell fractions were follicular helper T cells, activated CD4 memory T cells, resting CD4 memory T cells, resting dendritic cells, and resting mast cells. The total proportion of these five immune cell types was more than 60% in all clinical subgroups (Supplementary Figure 1). In normal tissue, however, follicular helper T cells, resting dendritic cells, resting CD4 memory T cells, memory B cells, and gamma delta T cells were the five main immune cell types, and their total proportion surpassed 70%. In addition, we further examined the differences for each immune cell type between tumor and normal tissues. As shown in Figure 2B, follicular helper T cells, activated CD4 memory T cells, resting CD4 memory T cells, resting dendritic cells, and resting mast cells were all up-regulated in the cancer group, while M2 macrophages were down-regulated. Here, P < 0.05 was considered a statistically significant result (Supplementary Table 1). Diagnostic Signature Building All selected samples were split into a training cohort (1,007 samples) and a validation cohort (1,006 samples). A logistic regression model was built on the training set, and variables were screened using step-wise regression (see Supplementary Table 2). We observed that resting NK cells, M0 macrophages, and M1 macrophages all satisfied the condition P < 0.05; thus, they were chosen as variables for building the diagnostic signature.
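As an illustration of this step, the following is a minimal sketch of such a three-variable diagnostic model. It uses Python's scikit-learn rather than the R packages used in the study, and the toy data frame and its column names are assumptions standing in for the real CIBERSORT output:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in for CIBERSORT fractions per sample plus a 0/1 tumor label.
rng = np.random.default_rng(0)
fractions = pd.DataFrame({
    "NK_cells_resting": rng.uniform(0, 0.2, 200),
    "Macrophages_M0":   rng.uniform(0, 0.3, 200),
    "Macrophages_M1":   rng.uniform(0, 0.2, 200),
    "is_tumor":         rng.integers(0, 2, 200),
})

X = fractions[["NK_cells_resting", "Macrophages_M0", "Macrophages_M1"]]
y = fractions["is_tumor"]

# 5:5 split, mirroring the training/validation design described above.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))
```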
We also evaluated the classification of tumor and normal tissues in the training, validation, and entire cohorts to further verify the diagnostic value of our model. The ROC curves suggested that our model had high accuracy (AUC = 0.898, 0.769, and 0.914, respectively; Figures 3A-C). Prognostic Signature Building Based on our screening criteria, 1,731 patients with over 30 days of follow-up were first distributed randomly into a training cohort (1,127 samples) and a validation cohort (604 samples) at a 7:3 ratio. Next, the training cohort was used to construct the prognostic signature using LASSO-Cox analysis (Figures 4A,B). The training cohort's risk scores were then estimated using the LASSO algorithm coefficients. The formula was as follows: risk score = (−4.638 × expression level of naive B cells) + (−0.259 × expression level of CD8 T cells) + (11.463 × expression level of activated NK cells) + (22.048 × expression level of Monocytes) + (2.841 × expression level of M2 Macrophages) + (−4.073 × expression level of resting Mast cells) + (68.399 × expression level of Neutrophils). The training group samples were then split into high- and low-risk groups, with the median value used as the dividing line. The Kaplan-Meier curves were assessed to confirm that patients scoring as high-risk had a lower survival probability in the training cohort (Figure 4C). To ensure the prognostic model's consistency across different groups, we used the same formula to calculate risk scores for the validation and entire cohorts. Median risk scores were again treated as the cut-off value for distinguishing between the high- and low-risk groups, and the results were consistent with those in the training cohort. A higher risk score corresponded to a shorter survival probability in both the validation cohort (P = 0.046, Figure 4D) and the entire cohort (P < 0.0001, Figure 4E). Validation of the Diagnostic Signature and Prognostic Signature Using the GEO Datasets The datasets GSE21422+GSE42568 (BRCA), GSE54388 (OV), GSE54388+GSE14407 (OV), and GSE63514 (CESC) were downloaded from the GEO database to test the value of the diagnostic signature (Supplementary Table 4). In each group there was high diagnostic accuracy for the tumor samples; the AUCs were 0.8523, 0.83, 0.67, and 0.71, respectively. Furthermore, the GSE20685 (BRCA) and GSE53963+GSE32062 (OV) datasets were each treated as a group to verify the prognostic value of our signature (Supplementary Table 5). Consistent with our TCGA database results, higher risk scores represented a lower possibility of survival. However, the result showed a notable difference in BRCA; here, patients with a high-risk score experienced good survival. Both results were statistically significant. Multivariate Cox Regression Analyses To test the clinical indicators, a multivariate Cox model was constructed for the training, internal validation, and full data sets to estimate whether clinicopathological characteristics (including age, tumor stage, cancer status, residual tumor, and tumor grade) could be independent prognostic factors in malignant gynecological tumors (Table 2). In this multivariate analysis, the tumor stage and cancer status influenced all data sets (HR > 1, P < 0.05), so they were selected as effective clinical indicators for further analysis.
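Because the coefficients above fully specify the signature, the risk-score step can be reproduced directly. The sketch below is a Python rendering (the study used R); the toy data frame and the column spellings are illustrative assumptions, while the weights are the published values:

```python
import numpy as np
import pandas as pd

# Coefficients exactly as published in the formula above.
coefs = {
    "B_cells_naive":      -4.638,
    "T_cells_CD8":        -0.259,
    "NK_cells_activated": 11.463,
    "Monocytes":          22.048,
    "Macrophages_M2":      2.841,
    "Mast_cells_resting": -4.073,
    "Neutrophils":        68.399,
}

# Toy fractions in place of the real CIBERSORT matrix.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.dirichlet(np.ones(len(coefs)), size=6),
                  columns=list(coefs))

df["risk_score"] = sum(df[cell] * w for cell, w in coefs.items())
# Median split into high-/low-risk groups, as described in the text.
df["risk_group"] = np.where(df["risk_score"] > df["risk_score"].median(),
                            "high", "low")
print(df[["risk_score", "risk_group"]])
```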
Identification of the Nomogram A prognostic nomogram based on clinical information was constructed to provide a quantitative method for predicting the prognosis of patients with malignant gynecological tumors. The nomogram (Figure 5A) integrated risk factors such as the risk signature, age, and stage, and the results indicated that the tumor stage had the greatest impact on the model. A later tumor stage indicated a lower survival rate, while patients with "with tumor" status and a higher risk score had a higher risk of a poor prognosis. Moreover, the 3-year (Figure 5B) and 5-year (Figure 5C) ROC curves directly showed the value of the risk factors: the nomogram had the highest accuracy, with areas under the ROC curve (AUC) of 0.808 and 0.858. The decision curve analysis (Figures 5D,E) showed similar results, indicating that the nomogram has proper clinical applicability. Immune Subtypes We grouped all 1,731 malignant gynecological tumor cases in an unbiased way, using consensus clustering analysis, to discriminate clear types of immune infiltration. The stability of the clustering increased from k = 2 to 10 (Supplementary Figure 2), and k = 5 was considered the optimal choice, so five immune subtypes were determined. Furthermore, the relationship between the various cancers and the immune subtypes is exhibited in Table 3. BRCA patients were primarily distributed in immune subtypes 1 and 4, while UCEC patients were mostly distributed in immune subtype 5. Nearly half of the OV patients were distributed in immune subtype 3, while CESC patients were mainly distributed in immune subtypes 2 and 5. The specific distribution of each immune cell type in each immune subtype is exhibited in Figure 6A. Among them, immune subtype 1 was characterized by high levels of resting CD4 memory T cells, while immune subtypes 2, 3, and 5 were defined by resting dendritic cells and activated dendritic cells. Immune subtype 4 was defined by both resting and activated CD4 memory T cells. The calculated risk scores for the different subgroups (Figure 6B) indicate that immune subtypes 3 and 4 had significantly higher risk scores than the other subtypes. Combined with the risk score distribution and the Kaplan-Meier analysis (Figure 6C), immune subtype 3 was the highest-risk subtype. DISCUSSION Gynecological cancer is among the most common cancers in women and a leading cause of death in women. The treatment methods currently used, including surgery, radiotherapy, and chemotherapy, are gradually improving. In recent years, immunotherapy research has steadily expanded, and the research results are constantly being applied in clinical practice. However, due to untimely diagnoses and tumor invasiveness, the survival rate of advanced-stage patients is still exceptionally low. Therefore, it is necessary to construct new and effective diagnosis or prognosis signatures for early diagnosis and to improve treatment methods. Notably, recent developments in novel cancer treatment modalities have focused primarily on early intervention. Munoz and Plevritis (2018) presented a predictive model using estrogen receptor and human epidermal growth factor receptor 2 status to determine potential survival outcomes. Likewise, Chen et al. (2019) used data on five lncRNAs in the TCGA database to obtain a five-lncRNA signature for use as an independent risk factor for OC recurrence. Furthermore, research on tumor microenvironments in cancer has gradually become popular. Yang et al.
(2019) applied immune cell infiltration analysis to cancers of the digestive system to produce an effective diagnostic and prognostic model for these cancer types. Thus, there is a need for a greater mechanistic understanding of the varied roles of immune cell infiltration in tumor progression. We attempted to determine how it participates in the tumorigenesis, development, and prognosis of malignant gynecological tumors. First, the newly developed CIBERSORT algorithm was used to determine the composition of immune cells in each sample. We found notable differences in the proportions of immune cells between normal and tumor samples, between different tumors, between different age groups, and between different stage groups. Based on the differences between the tumor and normal groups, we selected the samples with P < 0.05 and then used a step-wise regression model with resting NK cells, M0 macrophages, and M1 macrophages to develop a structured diagnostic model. The AUC value of 0.8981 indicated that our model was highly accurate at diagnosing tumors. Moreover, it also demonstrated the immune system's involvement in the occurrence and development of cancer. In this article, the candidate cell types used to build the prognostic model were likewise taken from the high-throughput gene expression estimates generated by CIBERSORT. The LASSO-Cox analysis selected CD8 T cells, activated NK cells, Monocytes, M2 Macrophages, resting Mast cells, and Neutrophils as the key biomarkers. According to the abundance and coefficient of each of the above-mentioned cell types, we obtained the risk value of each sample and divided the samples into high-risk and low-risk groups. The Kaplan-Meier curves confirmed that the patients with high-risk scores had a lower probability of survival in the training cohort. The results of the internal and external verification sets were consistent with the above results. Furthermore, the multivariate Cox prognostic analysis confirmed that tumor stage and cancer status impacted all data sets and could be used as independent prognostic factors. To better understand the prognosis of the patients, we simplified the models predicting cancer prognosis into a single numerical value, the nomogram. It integrated tumor stage, cancer status, and risk score, together with the compiled 3-year and 5-year ROC curves. The results showed that the nomogram has good clinical applicability. Reports have demonstrated a connection between a tumor's immune microenvironment and the survival rate (Li et al., 2017; Anichini, 2019). Based on the abundance of immune cells, five immune subtypes were identified by consensus cluster analysis, and we further explored the distribution of patients among the different immune subtypes. Combined with the risk score distribution and the Kaplan-Meier analysis, immune subtype 3 was identified as the highest-risk subtype. Many studies have reported the impacts of the tumor microenvironment on the development and prognosis of tumors, including esophageal (Lin et al., 2016), pancreatic (Wei et al., 2019), colorectal (Roelands et al., 2017), and gastric cancers (Lazãr et al., 2018), as well as melanoma. However, this research provides comprehensive immune profiles of malignant gynecological tumors, and the resulting diagnostic and prognostic models could serve as biomarkers for early diagnosis, and therefore early initiation of treatment, and for predicting survival. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories.
The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
2021-06-17T13:16:18.932Z
2021-06-17T00:00:00.000
{ "year": 2021, "sha1": "d73ea77888f39d90c8d6fd2ceee4388dd5bd28dc", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2021.702451/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d73ea77888f39d90c8d6fd2ceee4388dd5bd28dc", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
55655501
pes2o/s2orc
v3-fos-license
Types of Sentences In EFL Students' Paragraph Assignments: A Quantitative Study on Teaching and Learning Writing At Higher Education Level This research investigates Indonesian EFL students' writing of four types of English sentences in paragraph writing assignments posted online in the Writing 1 course of English Education at STKIP PGRI Sumatera Barat. The analysed types of sentences are the Simple Sentence (code: S.S.), Compound Sentence (code: C.S.1), Complex Sentence (code: C.S.2), and Compound-Complex Sentence (code: C.C.S). The percentage of each type of sentence that appears in the students' writings within each of the five genres represents the students' syntactical composition. Moreover, this research focuses on quantitatively analysing the above types of sentences as they appear in students' assignments in each of the following genres: argumentative, descriptive, process, cause-effect, and comparison-contrast. Data are taken from a 10% sample of the population. The findings show that the simple sentence is a common type of sentence used by the students in their paragraphs. This indicates that guiding students in writing paragraphs with varied sentence types is important for the further development of the teaching of writing. Introduction Research on English writing and composition has received little attention among scholars of English in Indonesia. Although the substance of such research published by Indonesian scholars might be the same, the specificity of each research article is different. Two broad fields related to English studies that exist in Indonesia are English linguistics and literature. Research that concentrates on studying writing or composition in the Indonesian EFL context is limited. This condition creates a gap in the field of English language and literature in Indonesia. Therefore, this research provides a realistic opportunity to fill in the gap, namely to find new ways of improving the quality of the teaching and learning process in English writing classrooms in the Indonesian context. To that end, research on EFL students' writings is important to conduct. Regarding research gaps, Yusuf notes that "a problem is a gap between what should exist and what actually happens, or between what is expected to happen and what turns out to be reality" (2007, p. 106). In other words, the gap addressed by this research is the scarcity of studies on English writing in the Indonesian EFL context. Through this research, we investigate students' paragraph writings, focusing on the English sentences that the students use in their online submitted assignments. Furthermore, this research, as scientific research in the field of English writing or composition studies, employs systematic steps and controlled attention to data from purposively selected samples. Yusuf mentioned that scientific research has the following elements: systematic and controllable steps; careful and logical methods; and an objective, empirical orientation toward the target to be solved (2007, p. 28). This research was done systematically and in a controlled manner. A relevant method was properly selected for data analysis and representation. For data interpretation, we apply an objective approach: we interpret the data as what they show statistically.
The focus of this research is to quantitatively analyse Indonesian EFL students' syntactical composition. This composition is analysed through the way students use four types of English sentences in their submitted online paragraph writing assignments. In essence, the problem studied in this research is the variety of students' sentences, categorised into four types: simple sentence, compound sentence, complex sentence, and compound-complex sentence. Any other sentence is coded as an error sentence, i.e., a sentence that does not belong to the four types of sentences.

From a larger perspective, writing in English is complex. Teaching and learning English writing is complicated not only in Indonesia but also, as it is perceived, in other communities of speakers of English. For example, "English writing instruction is very difficult, but the task is even greater [in Thailand because it has] EFL context" (Tawachai, 2010, p. 181). Thailand considers English a foreign language, and Indonesia does as well. As a result, this condition has led many researchers in the field of English to conduct research about the teaching of English; however, research that focuses on online English writing remains limited in number for different reasons, especially research conducted in the context of Indonesian academia (Penrod, 2005, p. 132). This suggests that research on how students write their assignments online within an EFL learning context needs further attention from scholars in the field.

The scope of this research is to examine EFL students' syntactical composition quantitatively through their use of four kinds of English sentences in their online writing assignments. This research falls within English composition studies, or research on the teaching of English writing in the Indonesian EFL context. Linguistic terms might be used in the data analysis; however, the scope is concentrated on studying students' sentences in writing a paragraph in five kinds of genres. Meanwhile, the purpose of this research is geared toward Hyland's statement, which is "to help us [and all English writing teachers] understand writing more clearly or to teach writing more effectively and [therefore] this is an enormous field with many unresolved issues and potential areas of inquiry" (2009, p. 141). Besides, this research aims to shape "teachers' perceptions and understanding of genre pedagogy principles"; moreover, this is "crucial since teachers frame the overall process of the teaching in their particular classrooms" (Tawachai, 2010, p. 194). This research presents some perspectives on which kinds of English sentences students generally use in their writing.

In line with the above purposes, this research was formulated to answer the following question: "Which type of sentence is commonly found in EFL students' online paragraph assignments?" To define the key terms in this research, we use a dictionary approach. In other words, defining terms in this research follows the idea of constitutive definitions, i.e., the terms are derived from a dictionary approach (Fraenkel & Wallen, 2008, p. 30). Besides, we also incorporate terms that have been specified by researchers in the field of English composition.

Literature Review This research concentrates on finding numerical data on students' types of sentences in their paragraph writing assignments. This notion leads to an understanding of the process of writing in English. Two terms relate to writing as a process: "construction and discovery" exist in the process of writing and composing a text (Perl, 2011, p. 34). In line with this idea, EFL students who learn academic writing skills at the higher education level are encouraged to write well by closely paying attention to acceptable standards and conventions in English academic writing. Although the ability to write well in English is positioned at the highest level of the hierarchy of learning English as a foreign language, the first step toward that point is the ability to write a grammatically good sentence. Rowe & Levine define a sentence as "a string of words that is grammatically complete with at least two components, a subject and a predicate" (Rowe & Levine, 2009, p. 112). In the Writing 1 course of the college where we conducted this research, we taught students about using the parts of speech in English in order to enable them to write meaningful but grammatically correct sentences. Eventually, they will be able to write a paragraph with a good structure. A paragraph has a topic sentence, some supporting sentences, and a concluding sentence (Reid, 1988, p. 8). These three elements of an academic paragraph were taught in the course, along with an understanding of the types of sentences. Focusing on paragraph development would be too broad; therefore, we pay attention to the syntactical level, which is termed the sentence level of a paragraph.

Syntactical Composition in English Sentences In the Writing 1 course, we emphasize awareness of grammar and punctuation when the students write their paragraphs. In brief, "[t]he objectives in the sentence-level strand are grouped into two sections: 'grammatical awareness', and 'sentence construction and punctuation' awareness" (Wyse & Jones, 2005, p. 163). Having a good understanding of these types of awareness is good for the students; the problem, however, is that the students are EFL learners who might need more time to develop their own awareness of this matter. At the beginning of the course, we had informed them that academic writing is a genre of written language in English (Brown, 2004, p. 219). What we expected from the students after the first meeting of this course was that they would get a sense of what it means to write in an academic style. Thus, we find this research, with its emphasis on analysing the students' paragraph writings, interesting to conduct.

A major field in English studies that regularly publishes research on writing is known as composition studies. A specific topic within this field that is the centre of attention for us is the process of writing in English. The EFL students that we mean in this research are those who learn English as a foreign language. In much of the literature, ideas of English as a foreign language are understood almost similarly to English as a second language. However, the focus is on the process of composition. One of the focuses of "organising L2 writing teaching" is composing processes (Hyland, Second Language Writing, 2003, p.
Literature Review This research concentrates on finding numerical data of students' types of sentences in their paragraph writing assignments.This notion leads to the understanding of the process of writing in English.Two terms that relate to writing as a process."Construction and discovery" exist in the process of writing and composing a text (Perl, 2011, p. 34).In line with this idea, EFL students who learn academic writing skills in higher education level are encouraged to write well by closely paying attention to acceptable standards and conventions in English academic writing.Although reaching the ability to write well in English is positioned at the highest level of hierarchy on learning English as a foreign language; the first step of reaching that point is in the ability to write a good sentence grammatically.Rowe & Levine defines "[a] sentence is a string of words that is grammatically complete with at least two components, a subject and a predicate" (Rowe & Levine, 2009, p. 112).In Writing 1 course of the college where we conducted this research, we taught students about using parts of speech in English in order to enable them to write a meaningful but grammatically correct sentence.Eventually, they will be able to write a paragraph with a good structure.A paragraph has a topic sentence, some supporting sentences, and a concluding sentence (Reid, 1988, p. 8).These three elements of an academic paragraph were taught in the course, along with the understanding of types of sentences.Focusing on paragraph development is considered to be broad; therefore, we pay attention to the syntactical level, which is termed as a sentence level of a paragraph. Syntactical Composition in English Sentences In Writing 1 course, we emphasize on the idea of being conscious to the aspect of grammar and punctuation to the students when they write their paragraph.In brief, "[t]he objectives in the sentencelevel strand are grouped into two sections: 'grammatical awareness', and 'sentence construction and punctuation' awareness" (Wyse & Jones, 2005, p. 163).Having good understanding of these types of awareness is good for the students; however, the problem is that they students are EFL learners who might need more times to develop their own awareness over this matter.In the beginning of the course, we had informed them that academic writing is a genre of written language in English (Brown, 2004, p. 219).What we expected from the students after the first meeting of this course was that they were able to get a sense of what it means to write in academic style.Thus, we find that this research with emphasis on analysing the students' paragraph writings is interesting to be conducted. A major field in English studies that briefly publish research on writing is known as composition studies.A specific topic within this field that becomes centre of attention for us is the process of writing in English.EFL students that we mean in this research are those who learn English as a foreign language.In many literature reviews, ideas of English as a foreign language are understood almost similar with English as a second language.However, the focus is in the process of composition.One of the focuses on "organising L2 writing teaching" is composing processes (Hyland, Second Language Writing, 2003, p. 
Teaching writing in an EFL context means that the planning of teaching practice is geared toward building content for writing. Content "refers to the topic and its explanation or elaboration, discussion, evaluation, and conclusion" (Leo, 2007, p. 1). It needs to be clear, specific, and relevant to the assigned task in the writing course. The Writing 1 course was therefore designed to help students compose a paragraph with an academic tone, while the content of the paragraph remains essential. Clarity of content can be achieved only if the students are able to write a paragraph in grammatically, or syntactically, meaningful sentences.

Types of English Sentences

As researchers in the field of English, we understand that teaching English writing in an EFL context requires a proper understanding of how students acquire the language. Therefore, when we taught students how to write paragraphs well in an academic style, we drew on the notion of language acquisition, which involves specific strategies in teaching writing (Facella, Rampino, & Shea, 2005, p. 210). The sentences that construct a paragraph are taught systematically through learning how to write four types of sentences.

In brief, the English language has four types of sentences: the simple sentence, compound sentence, complex sentence, and compound-complex sentence. "A simple sentence is one independent clause. […] A compound sentence is two or more independent clauses joined together" (Oshima & Hogue, 2009, p. 162; Pardiyono, 2007, p. 9). Compound sentences use coordinators (coordinating conjunctions), conjunctive adverbs, and semicolons. "A complex sentence contains one independent clause and one (or more) dependent clause(s)" (Oshima & Hogue, 2009, p. 172; Pardiyono, 2007, p. 9). Adverb clauses, adjective clauses, and noun clauses are used in writing complex sentences in English. "A compound-complex sentence has at least three clauses, at least two of which are independent [clauses]" (Oshima & Hogue, 2009, p. 174; Pardiyono, 2007, p. 9). These four types of sentences are the centre of the research problem that we pursue in this research.

Moreover, writing a paragraph in English requires a good understanding of clause construction. Students who learn to write English essays academically need to learn that a text realizes meanings, which can take the form of information, messages, or ideas, through sentences constructed rhetorically and grammatically in an appropriate genre (Pardiyono, 2007, p. 8). When students wrote sentences that did not follow the grammatical or syntactical standards of English sentences, those sentences were categorised as error sentences. Traditionally, content, organization, expression, and mechanics are the major components measured in students' writings (Hindman, 2002, p. 416). In this research, however, our main focus is not the assessment of students' writings; instead, we analyse the types of sentences that the students dominantly or marginally use in their writing assignments.
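To make the four definitions above concrete, the following minimal Python sketch (our illustration, not part of the original study) maps the counts of independent and dependent clauses in a sentence to the type labels used in this research. The clause counts themselves are assumed to be supplied by a human coder, not by automatic parsing.

```python
def classify_sentence(independent: int, dependent: int) -> str:
    """Map clause counts to the sentence types defined by
    Oshima & Hogue (2009); clause counts are assumed to come
    from a human coder."""
    if independent == 1 and dependent == 0:
        return "S.S."    # simple: exactly one independent clause
    if independent >= 2 and dependent == 0:
        return "C.S.1"   # compound: two or more independent clauses
    if independent == 1 and dependent >= 1:
        return "C.S.2"   # complex: one independent + dependent clause(s)
    if independent >= 2 and dependent >= 1:
        return "C.C.S."  # compound-complex: >= 3 clauses, >= 2 independent
    return "Error"       # anything else is coded as an error sentence

# Example: "Although it rained, we played, and we won."
# -> 2 independent clauses, 1 dependent clause
print(classify_sentence(2, 1))  # C.C.S.
```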
To obtain the students' writing assignments, we used a blog on which students could post their paragraphs online. When the submission was due, we closed the link. This method of data collection relates to the use of technology in the teaching of English writing. In terms of technology in writing, the use of 'word access' simply means "how to adapt writing technologies for use in a variety of writing contexts" (Hart-Davidson, 1991, p. 550). Another reason we used a blog is that we intended to provide a medium through which students could see each other's written work.

Review of Related Findings

Research that focuses on writing and is published in indexed journals is numerous. For example, Baroudy discusses the idea of successful and unsuccessful writers. He mentions that "[t]o be a successful student-writer, one has no choice but to abide by the true obligations of being or becoming a good language learner, as well" (Baroudy, 2008, p. 48). We believe that being a good learner of English writing means that students pay attention at least to the sentence level, and then move up to the textual level.

Another study on writing in an EFL context was conducted by Jalilifar, whose article concerns Iranian students as EFL learners. Its focus is the discourse markers in Iranian students' writings. Jalilifar emphasizes that "[w]ith the status of English as an international language and the expansion in the use of English, an increasing number of second language learners are engaged in academic pursuits that require them to write compositions" (Jalilifar, 2008, p. 114). We likewise view EFL students as encouraged to write in English to meet international needs. This purpose may be too broad; however, its foundation remains a good understanding of, and ability in, English writing at the sentence level as a starting point before students reach the textual level. Therefore, we narrow this research down to analysing students' writing at the level of sentences, with the types of sentences as its focus.

An interesting study in the field of foreign language teaching that also relates to the topic of this research was done by Lekova, who studied language interference in the context of French language learning and how to overcome it using methods accepted in foreign language teaching. A provocative idea that we find intriguing is that "communication between the two language systems (native and foreign) is the reason for the interference which is the object of psycholinguistics and linguistics research" (Lekova, 2010, p. 320). Language interference is another pattern that exists in the students' writings: the extent to which the students' native language interferes with their acquisition of English is reflected in how well they write in English.

In brief, the research conducted by Baroudy, Jalilifar, and Lekova indicates that research on English writing can be approached from many angles, providing rich opportunities for scholars to construct similar studies. This research frames its purpose as viewing EFL students' sentences as core elements that scholars need to investigate before moving on to higher levels of textual analysis in English studies.
Research Method

An essential element reported in this research-method section is the way a scientific approach was applied to data collection, analysis, and interpretation. This research has followed the convention for conducting such an approach, which seeks truth through the following chronological steps: (1) identifying a need; (2) defining the research problems; (3) formulating a hypothesis or research question; (4) gathering related data; and (5) concluding the research findings (Yusuf, 2007, pp. 17-18). All five steps were completed in this research.

Furthermore, this research falls within the category of applied research in the field of English writing, or composition studies. The nature of this type of research is to seek data carefully and continuously about a problem emerging in a pedagogical context so that the findings are useful for academic purposes (Nazir, 2014, p. 17). The following subsections describe the research design, research object, population and sample, instruments and techniques for collecting and analysing data, and the data analysis method and formulas of this research.

Research Design: A Quantitative Research and a Descriptive-Empirical Study

This research was designed within the paradigm of quantitative research, with mixed methods incorporated in collecting and analysing the data. In other words, the data were collected and analysed with a quantitative method; after implementing this method, we approached the findings with qualitative interpretation to see the overall mapping of the students' syntactical composition. Intertwining quantitative and qualitative methods led us to frame this research as empirical research, since the use of quantitative analysis together with qualitative analysis makes a study empirical (Beach, 1992, p. 219). Quantitatively, the percentage of each type of English sentence reveals the EFL students' dominant type of syntactical composition. Since this research was also conducted within the qualitative paradigm, it incorporated the analysis of "social reality", "interactive processes", "authenticity", "values [in learning English]", "contextual situation", "few cases subjects", "thematic analysis", and the researchers' involvement in data interpretation (Gunawan, 2016, pp. 34-35). We viewed the use of sentence types in students' writings as a form of social reality within the interaction between students and their lecturer, and we examined the values of learning to write in English through a thematic analysis based on the contextual situation, with a concentrated group of purposively selected student samples.

The specificity of this research relates to the concept of descriptive research. As an Indonesian researcher and author, Yusuf states that "[p]enelitian deskriptif mencoba memberikan [gambaran] keadaan masa sekarang" (Metodologi Penelitian: Dasar-Dasar Penyelidikan Ilmiah, 2007, p. 82).
In English, this means that descriptive research attempts to depict present conditions as they relate to the researchers' field. In this case, this research quantitatively and qualitatively describes the types of sentences in the EFL students' paragraph assignments that they submitted online. Furthermore, "Penelitian [deskriptif] dimaksudkan untuk mengangkat fakta, keadaan, variable, dan fenomena-fenomena yang terjadi saat sekarang (ketika penelitian berlangsung) dan menyajikannya apa adanya" (Subana & Sudrajat, 2001, p. 26). In other words, the core of descriptive research is facts, conditions, variables, and phenomena happening at the present time, presented as they are. This research was designed to meet these criteria. We emphasize, however, that this research is termed descriptive-empirical research. The term differs from experimental research because we had no intention to "manipulate the effects of variables" (Beach, 1992, p. 221). All data and variables in this research are presented as they are.

We also frame this research as applied research, broadly defined by Yusuf as research that concentrates on the application or implementation of science and knowledge, or the use of science and knowledge, for specific purposes (Yusuf, 2007, p. 102). We directed this study toward the application of students' understanding of English sentences in their paragraph assignments. This brought us to the idea of focusing on the applicability of educational values and how they work in real-life settings. In essence, "[a]pplied research is interested in examining the effectiveness of particular educational practices" (Fraenkel & Wallen, 2008, p. 7). We analyse the sentences students use in their paragraph assignments so that we can gauge how effectively we taught them about English sentences in the Writing 1 course.

Selection of Research Object

This research codes the data into nominal variables, one for each sentence type written by the students, using the following coding system: Simple Sentence (code: S.S.), Compound Sentence (code: C.S.1), Complex Sentence (code: C.S.2), and Compound-Complex Sentence (code: C.C.S.). These four types of sentences are mutually exclusive and constitute a "categorical variable" (Yusuf, 2007, p. 130). Because the goal of this research is to represent the overall types of sentences used by students enrolled in the Writing 1 course, we applied a purposive sampling method so that the research could be parametric, that is, able to "generalize from samples to larger populations" (Beach, 1992, p. 219).

The students' similar background in the Writing 1 course is a constant variable in this research; it stays the same throughout the study. A constant is defined as "any characteristic or quality that is the same for all members of a particular group" (Fraenkel & Wallen, 2008, p. 49).
Students enrolling in the Writing 1 course are considered a single group in this research. What we gathered from this group were the paragraph writing assignments that they submitted online through a blog. Hyland, a prominent researcher in the field of second language writing, emphasizes that "[a] major source of data for writing research is writing itself: the use of texts as objects of study" (Hyland, Teaching and Researching Writing, 2009, p. 149). Moreover, he states that writing assignments posted online by students are "authentic examples of writing used in a natural context" (Hyland, Teaching and Researching Writing, 2009, p. 145). This research is therefore specifically valid in terms of the objects that we studied.

Population and Sample

The population in this research comprises all paragraph writing assignments that students posted online; consequently, the number of writing assignments is related to the number of students. A population is the totality of all analytical units relevant to the information a study needs to obtain (Yusuf, 2007, p. 182). The sampling method we applied was purposive sampling, meaning "pengambilan sampel didasarkan pada maksud yang telah ditetapkan sebelumnya" (Yusuf, 2007, p. 205), that is, sampling based on a purpose determined in advance. We specified several characteristics beforehand and then selected samples according to the fulfilment of these characteristics. The percentage of samples that we selected from the population was 10%.

The actual population in this research is the number of writings submitted online. For each genre, we recorded how many paragraph writing assignments were posted online; the total number of submitted assignments in each genre category constitutes the population of this research, and 10% of this population served as the samples. The samples for this research are therefore designed as follows:

Table 1. Sample Measurement
Genre — Sample (paragraphs)
Argumentative — 15
Descriptive — 15
Process — 16
Cause-effect — 12
Comparison-contrast — 13

The samples above are taken in the form of paragraphs: 15 paragraphs each for the argumentative and descriptive genres, 16 paragraphs for the process genre, 12 paragraphs for the cause-effect genre, and 13 paragraphs for the comparison-contrast genre. Each sample was selected purposively, in line with the sample selection criteria that we designed beforehand.

Instrument and Techniques for Collecting and Analysing Data

In collecting the data, the technique applied was direct observation, with observation guidelines and a document checklist as research instruments (Yusuf, 2007, p. 251). We observed the students' writing assignments online. Direct observation is appropriate for collecting relevant data, as it deals with collecting data by participating in the natural scenes where the data occur (Subana & Sudrajat, 2001, p. 143).

After all writings had been collected, we checked which paragraphs met our criteria for being a sample. Once the number of samples reached 10%, we stopped the selection process. In each paragraph, we focused on finding the relevant data, namely the four types of English sentences, collected under four codes: (S.S.) for simple sentence, (C.S.1) for compound sentence, (C.S.2) for complex sentence, and (C.C.S.) for compound-complex sentence. This coding technique follows the concept of an open coding system, which relates to the process of data categorization (Gunawan, 2016, p. 242).
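As a small illustration of the 10% rule described above, the sketch below computes per-genre sample sizes from population counts. The population numbers here are hypothetical stand-ins, since only the resulting sample sizes (15, 15, 16, 12, and 13 paragraphs) are reported in the study.

```python
# Hypothetical per-genre population counts (paragraphs posted online);
# only the resulting 10% sample sizes are reported in the study.
population = {
    "argumentative": 150,
    "descriptive": 150,
    "process": 160,
    "cause-effect": 120,
    "comparison-contrast": 130,
}

SAMPLE_RATE = 0.10  # 10% purposive sample

for genre, n in population.items():
    k = round(n * SAMPLE_RATE)
    print(f"{genre}: population {n} -> sample {k} paragraphs")
```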
To analyse the collected data, we applied a taxonomy analysis technique as the procedure for data analysis. This technique requires researchers to comprehend specific domains in line with the focus of the research or the research questions (Gunawan, 2016, p. 213). The subdomains are the four types of English sentences (simple sentence, compound sentence, complex sentence, and compound-complex sentence), while the top domain of these subdomains is English sentences in the students' writings.

Data Analysis Method

To analyse the collected data, we used a document checklist as the instrument of analysis, applied through acts of reading. Reading or immersion, as well as coding, indexing, and writing research memos, are tools for analysing data in research on English writing (Blakeslee & Fleischer, 2007, p. 172). We predominantly used a coding system as the tool for this research. Data triangulation in this research involved researcher triangulation, meaning that two researchers were involved in conducting the observation (Gunawan, 2016, p. 220). This helps keep the data triangulation free from bias.

Percentages of Types of Sentences in Cause-Effect Genre

The fourth genre of paragraph that we analysed was cause-effect. From the graph, we observe that 75 Simple Sentences occur in the students' writing assignments, the highest percentage of all sentence types in this paragraph genre (47.77%). The remaining types occur less frequently (11.46% for the Compound Sentence and 7.01% for the Compound-Complex Sentence).

Graph 4. Percentages of Sentences in Cause-Effect Genre

Percentages of Types of Sentences in Comparison-Contrast Genre

The last genre of paragraph that we analysed was comparison-contrast. The most frequent type is the Simple Sentence (34.23%), and the Error sentence has the lowest number of occurrences in the assignments (12.75%).

Overall Data Description

Notes:
∑1 is the number of sentences written under the Simple Sentence category within the genre
∑2 is the number of sentences written under the Compound Sentence category within the genre
∑3 is the number of sentences written under the Complex Sentence category within the genre
∑4 is the number of sentences written under the Compound-Complex Sentence category within the genre
∑5 is the number of sentences written under the Error Sentence category within the genre
∑6 is the number of all types of sentences written within the specified genre

From the table above, the total number of sentences in the students' paragraph writing assignments is 952, including the error sentence as an additional type. In terms of the number of sentences per genre, the descriptive genre has the highest count with 265 sentences (27.83%), and the comparison-contrast genre has the lowest with 149 sentences (15.65%). The Simple Sentence (code: S.S.) type occurs most in the descriptive genre, with 116 sentences (32.58% of the 356 sentences of this type across all genres), and least in the comparison-contrast genre, with only 51 sentences. Similarly, the Compound Sentence (code: C.S.1) occurs most in the descriptive genre and least in the process genre (10.56%).
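A minimal sketch of the percentage computation used throughout these findings: given coded sentences for one genre, it tallies each code and reports its share of all sentences in that genre, i.e., (∑i / ∑6) × 100%. The code list here is an invented example; real input would be a coder's labels for every sentence in the sampled paragraphs.

```python
from collections import Counter

# Invented example input: one code per sentence in a sampled genre.
codes = ["S.S.", "S.S.", "C.S.1", "C.S.2", "S.S.", "Error",
         "C.C.S.", "C.S.2", "S.S.", "C.S.1"]

counts = Counter(codes)
total = sum(counts.values())  # corresponds to sum_6 in the notes above

for code in ["S.S.", "C.S.1", "C.S.2", "C.C.S.", "Error"]:
    pct = counts.get(code, 0) / total * 100  # (sum_i / sum_6) x 100%
    print(f"{code}: {counts.get(code, 0)} sentences ({pct:.2f}%)")
```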
The argumentative genre is dominated by the Complex Sentence (code: C.S.2) type, with 53 sentences, or 34.42% of all sentences in that genre; this is the highest percentage of C.S.2 occurrences across all genres. The process genre has the lowest share of C.S.2 (12.34% of the 154 sentences of this type). The Compound-Complex Sentence (code: C.C.S.) has a notable number of occurrences in the comparison-contrast genre (30.26% of the 76 sentences of this type across all genres). The last type of sentence, the additional category in this research, is the error sentence, which occurs most in the process genre (32.79%) and least in the comparison-contrast genre (8.48%). In brief, the table shows that, of the 952 sentences the students wrote in their assignments, they dominantly wrote Simple Sentences, while sentences with more complex forms were fewer. Error sentences account for 23.52% of all sentences in the students' paragraph writings, indicating that the majority of the students' sentences follow correct English sentence types. However, students should be informed that they make the most errors in the process genre, and they should be encouraged to give further attention to writing process paragraphs.

Conclusions

Having conducted this research, we can say that writing in English is complex for EFL students. A good understanding of English grammar and syntax provides a sufficient foundation for writing; however, EFL students need more time to express their thoughts and ideas in English academically and concisely. From the findings, we conclude that English lecturers teaching writing and composition in an EFL context need to be aware that sentence construction remains a prominent problem for most EFL students. We argue that EFL students' error sentences do not necessarily reflect a deficiency in English, but rather relate to how much writing practice the students undertake. As Nazir mentioned, the empirical movement, like other movements in knowledge, looks at truth through facts that are understandable and approachable within human experience (2014, p. 3). Thus, this research has shown that by analysing the way students use the types of sentences in their paragraph assignments, we can find ways to help EFL students construct a better academic writing style within their own personalised views.

Regarding the use of a blog as our medium for teaching English writing in the college, we recognize that today is an information-technology era in which most students are already familiar with technological instruments that may aid their learning. By using a blog, we provided a medium through which students could see one another's work and evaluate it, so that they eventually improve along the way. In this circumstance, "…assessment and networked writing environments - offers writing teachers a richer, more varied understanding of how technology can be beneficial for composition pedagogy" (Penrod, 2005, p. 169).
From this research, it is imperative to state three important points regarding teaching writing to EFL students: (1) using an online medium is a useful way to align with students' experience of using digital tools for learning English writing; (2) writing a paragraph in a specific genre may give EFL students different experiences, because each genre has specific language features and functions; and (3) paying attention to which types of English sentences most of our students use in their paragraph assignments can help us accommodate important matters in our teaching purposes for the English writing classroom.

Formulas:
Simple Sentence (code: S.S.) Percentage = (∑ S.S. / ∑ all sentences) × 100%
Compound Sentence (code: C.S.1) Percentage = (∑ C.S.1 / ∑ all sentences) × 100%
Complex Sentence (code: C.S.2) Percentage = (∑ C.S.2 / ∑ all sentences) × 100%
Compound-Complex Sentence (code: C.C.S.) Percentage = (∑ C.C.S. / ∑ all sentences) × 100%

Graph 3. Percentages of Sentences in Process Genre
Graph 5. Percentages of Sentences in Comparison-Contrast Genre
2018-12-06T22:30:09.640Z
2017-08-14T00:00:00.000
{ "year": 2017, "sha1": "69fc8fa5ca3df5349862ca9d68a54d2199b59b1a", "oa_license": "CCBYSA", "oa_url": "https://ejournal.undiksha.ac.id/index.php/JERE/article/download/10322/7536", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "69fc8fa5ca3df5349862ca9d68a54d2199b59b1a", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
93470592
pes2o/s2orc
v3-fos-license
UV absorption by cerium oxide nanoparticles/epoxy composite thin films

Cerium oxide (CeO2) nanoparticles have been used to modify the properties of an epoxy matrix in order to improve the ultraviolet (UV) absorption of epoxy thin films. The interdependence of the mechanical properties, the UV absorption, and the dispersed concentration of CeO2 nanoparticles was investigated. The results showed that, on increasing the dispersed concentration of CeO2 nanoparticles up to 3 wt%, the tensile modulus increases while two other mechanical properties, the tensile strength and elongation, decrease. The UV absorption peak and the absorption edges of the studied thin films were observed in the UV-Vis absorption spectra. On incorporating CeO2 nanoparticles into the epoxy matrix, an absorption peak appears at around 318 nm in the UV-Vis spectra as the CeO2 concentration increases from 0.1 to 1.0 wt%. Scanning electron microscopy (SEM) images revealed that a good dispersion of nanoparticles in the epoxy matrix was achieved by an ultrasonic method.

Introduction

Polymer nanocomposites are a relatively new class of materials. The incorporation of inorganic nanoparticles into a polymer matrix can significantly influence the properties of the matrix; the resulting composite may exhibit improved thermal, mechanical, or optical properties. The properties of polymer composites depend on the type of incorporated nanoparticles, their size, shape, and concentration, and their interactions with the polymer matrix [1,2].

Epoxy resin has developed rapidly since it was invented and is widely used in practical applications. The most important and most industrialized epoxy is the bisphenol A type, which is derived from epichlorohydrin and carries glycidyl and hydroxyl groups with ether bonds on the main chain. It is widely used in the adhesive, electronics, and coatings industries owing to its small volume shrinkage during curing and outstanding electrical performance. In the aerospace industry, bisphenol A epoxy is commonly used as a primer matrix on aluminum alloy 2024-T3 for protecting aircraft surfaces [3]. Incorporating various reinforcing nanosized phases into the epoxy, such as graphite nanofibers, carbon nanotubes, nanoclays, cellulose nanofibers, and nanoalumina, is one of the most effective ways to improve its properties [3-5].

A wide range of organic and inorganic UV (λ < 365 nm) absorbers are used in the coatings industry to minimize the destructive effect of sunlight on coated items designed for outdoor applications. Organic absorbers include aromatic hydroxyl triazoles, hydroxyl triazines, and hydroxyl ketones, which function as pure UV light absorbers or light stabilizers [6]. Some inorganic oxides, such as zinc oxide, cerium oxide, titanium dioxide, and iron oxide, have also been found to absorb UV light noticeably. Owing to its optical, magnetic, and electronic properties, CeO2 is widely used in various applications, such as catalysis, optical materials, abrasives, and ultraviolet absorbents [6-8]. CeO2 possesses many attractive properties that make it highly promising for a wide range of applications such as UV blockers and filters. Previous investigations have shown that the efficiency of nano-additives for UV protection depends on a number of parameters, of which particle size is the most important. When inorganic oxides are incorporated into an epoxy matrix, the UV absorption of nanosized particles has been reported to be much stronger than that of microsized ones.
In addition, nanoparticles of smaller size lead to higher transparency and better UV protection, and a reduced refractive index is also beneficial to UV protection. Concentration and film thickness also affect the transparency; on this basis, the UV protection qualities of a number of nanosized additive absorbers have been compared [9]. The aim of the present work is to investigate the mechanical properties and UV absorption of CeO2/epoxy thin films. It is expected that the CeO2 nanoparticles will enhance the UV absorption of the resulting nanocomposite thin films. The thin film was coated on the surface of thin glass at a thickness of 100 µm using a spiral bar coater.

Epoxy and curing agent. The epoxy used is diglycidyl ether of bisphenol A (epoxy YD-128, Kukdo Chemical), a standard epoxy with an epoxide equivalent weight of 190 g eq−1. After curing with an appropriate curing agent, it demonstrates excellent mechanical, chemical, electrical, and adhesive properties in the cured state. CeO2 nanoparticles were applied to the epoxy system and processed with the anhydride curing agent MTHPA (methyl tetrahydrophthalic anhydride, KBH-1089, Kukdo Chemical). The mixing ratio of epoxy to curing agent was 100/90.

Cerium oxide nanoparticles. Cerium oxide (CeO2) nanoparticles were prepared by an auto-combustion process of a rare earth nitrate gel/polyvinyl alcohol at low temperature. They were synthesized from rare earth oxides using polyvinyl alcohol as a polymer matrix and obtained by heating the gel precursor from 250 to 800 °C in a muffle furnace. The morphology of the CeO2 nanoparticles (figure 1) was studied by transmission electron microscopy (TEM). They have spherical shapes with diameters of less than 100 nm. The specific surface area of the CeO2 nanoparticles was determined to be 35.0 m2 g−1.

Fabrication of the CeO2 nanoparticles/epoxy composite

To produce the CeO2 nanoparticles/anhydride-cured epoxy composite (up to 3 wt% of nanoparticles) by an ultrasonic method, CeO2 nanoparticles were sonicated in ethanol for 30 min using a Sonic Mater sonicator. The solution contained more than 100 ml of ethanol per 1 g of CeO2 nanoparticles. The epoxy was then added and mixed for an additional hour, after which the mixture separated into two layers (with an upper layer of ethanol) within 1 h. The ethanol was decanted and the remaining portion was removed by vacuum extraction at 80 °C for 9 h. The curing agent was blended into the mixture by mechanical stirring for 1 h. Air bubbles were removed under vacuum at 50 °C for 30 min. The 100 µm thick thin film was coated with a 24 × 50 mm spiral bar coater (figure 2). The film was kept in a freezer at −26 °C for 36 h (in the B-stage), and was then cured at 80 °C for 30 min and at 100 °C for 5 h. The specimens for the mechanical test were prepared from the CeO2 nanoparticles/epoxy panels: the epoxy matrix was injected into a metal mold using a vacuum pump, and the specimens were cured at 80 °C for 30 min and at 120 °C for 3 h.

Mechanical test

The specimens were prepared in accordance with the ASTM D638 standard. The specimen thickness, width, and total length were 3.0, 12.0 and 160.0 mm, respectively. An Instron 5567 was used for the tensile test, and the extensometer gauge length was 25.0 mm. The machine was operated in displacement control mode at a speed of 2.0 mm min−1, and all tests were performed at room temperature. Five specimens were prepared from each panel for each condition.
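To illustrate the formulation arithmetic implied above (epoxy : curing agent = 100 : 90 by weight, with CeO2 loadings quoted in wt%), here is a small sketch. It is our own convenience calculation, not a procedure from the paper, and it assumes the wt% is taken with respect to the total composite mass.

```python
def formulation(total_mass_g: float, ceo2_wtpct: float):
    """Split a target composite mass into CeO2, epoxy, and hardener,
    assuming wt% is relative to total mass and an epoxy:hardener
    weight ratio of 100:90 (as used in this study)."""
    ceo2 = total_mass_g * ceo2_wtpct / 100.0
    matrix = total_mass_g - ceo2          # epoxy + curing agent
    epoxy = matrix * 100.0 / 190.0        # 100 parts out of 190 total
    hardener = matrix * 90.0 / 190.0      # 90 parts out of 190 total
    return ceo2, epoxy, hardener

ceo2, epoxy, hardener = formulation(100.0, 3.0)  # 100 g batch, 3 wt% CeO2
print(f"CeO2: {ceo2:.2f} g, epoxy: {epoxy:.2f} g, hardener: {hardener:.2f} g")
```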
Transmission electron microscopy (TEM) and scanning electron microscopy (SEM)

The fracture surfaces of the CeO2 nanoparticles/epoxy were observed by SEM after the tensile test. An Au coating with a thickness of a few nanometers was applied to the fracture surface. The cerium oxide nanoparticles were observed by TEM; the samples were prepared by ultrasonic dispersion of CeO2 nanoparticles in ethanol before applying a few drops of the solution to a holey-carbon copper grid.

UV absorption

The thickness of the thin film was evaluated before the UV absorption test. UV-Vis spectra of the thin film were measured on a Cary UV-5000 spectrophotometer at room temperature. The optical absorption spectra were measured from 200 to 800 nm with a double-beam optical spectrum analyzer. The measurement error is within ±1%.

The effect of dispersion by the ultrasonic method

It is well established that the dispersion state of nanoparticles is a crucial factor in determining the final properties of nanocomposites. With conventional processing and mechanical mixing, it is usually difficult to achieve a very good distribution of nano-structured particles. Moreover, the mechanical properties of CeO2 nanoparticles/epoxy with excellent dispersion increase significantly compared with nanocomposites with poorer dispersion, so dispersion and nanocomposite processing deserve close attention. Dispersion was achieved primarily by sonication of the nanoparticles in a solvent, followed by dispersion of both the polymer and the nanoparticles in solution. Using ethanol as the dispersion medium, the cerium oxide nanoparticles were well dispersed by the ultrasonic method for 30 min. The uniformity of the dispersion of the CeO2 nanoparticles was investigated by SEM, which showed that the CeO2 nanoparticles are well dispersed in the epoxy matrix (figure 3). Furthermore, the ethanol was easily removed after mixing with the epoxy resin and drying.

Mechanical properties of CeO2 nanoparticles/epoxy composite

The tensile modulus is the slope of the stress-strain curve between 0.1% and 0.3% strain. The relationship between the tensile modulus of the CeO2 nanoparticles/epoxy composite and the nanoparticle content is shown graphically in figure 4. The tensile modulus of the composite increased with increasing CeO2 nanoparticle concentration up to 3.0 wt%: it increased by 2.1, 3.0, 4.3 and 6.3% with the addition of 0.5, 1.0, 2.0 and 3.0 wt% of CeO2 nanoparticles, respectively. The enhancement of the tensile modulus obtained with CeO2 nanoparticles in this study implies that they are among the most promising candidates for reinforcing epoxy nanocomposites. The stress-strain curves of the CeO2 nanoparticles/epoxy matrix containing 0.5, 1.0, 2.0 and 3.0 wt% of CeO2 are presented in figure 5. The tensile strength of the composite decreased with increasing CeO2 nanoparticle content up to 3.0 wt%.
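Since the tensile modulus is defined above as the slope of the stress-strain curve between 0.1% and 0.3% strain, a minimal sketch of that calculation is given below. The strain and stress arrays are synthetic placeholders for data read from the Instron, not the measured data of this study.

```python
import numpy as np

def tensile_modulus(strain: np.ndarray, stress_mpa: np.ndarray) -> float:
    """Least-squares slope of stress vs. strain over the
    0.1%-0.3% strain window, i.e. the tensile modulus in MPa."""
    window = (strain >= 0.001) & (strain <= 0.003)
    slope, _intercept = np.polyfit(strain[window], stress_mpa[window], 1)
    return slope

# Synthetic placeholder data: a ~3 GPa linear response plus noise.
strain = np.linspace(0.0, 0.01, 200)
stress = 3000.0 * strain + np.random.normal(0.0, 0.05, strain.size)
print(f"Tensile modulus ~ {tensile_modulus(strain, stress):.0f} MPa")
```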
Indeed, the nanoparticles provided no strengthening of the matrix.

The effect of CeO2 nanoparticle content on the UV absorption property of thin film

For evaluation of the UV protective quality of the samples, the main measured quantities were the reflection coefficient R = Ir/I0 and the transmission coefficient T = Iout/I0, where Ir, Iout and I0 are the intensities of the reflected, transmitted, and initial optical signals, respectively. Each photodetector in the unit produced an electric signal proportional to the intensity of either the reflected or the transmitted light at the adjusted wavelength. Calculations of all the main spectral and optical properties of the samples were based on the measured dependencies R(λ) and T(λ) [10].

The UV absorption coefficient of CeO2 nanoparticles/epoxy thin films containing 0.0, 0.1, 0.25, 0.5, 0.75 and 1.0 wt% of nanoparticles is shown in figure 6. A well-defined, sharp, and strong absorbance peak located at 318 nm, shifted toward short wavelengths, indicates the narrow and uniform particle size distribution obtained via this fabrication route. The UV absorption of the CeO2 nanoparticles/epoxy thin film increased with increasing CeO2 nanoparticle concentration up to 1.0 wt%. The maximum absorption corresponding to the larger nanoparticles is shifted towards the short-wavelength region. These results indicate that the nanoparticle content has a strong effect on the absorption of the CeO2 nanoparticles/epoxy thin film: with 0.5 and 1.0 wt% of CeO2 nanoparticles, the thin film absorbed more than 48 and 90% of UV light, respectively, with an absorbance peak at around 318 nm.

Conclusion

CeO2 nanoparticles with an average size of less than 100 nm were synthesized and incorporated into the epoxy matrix. The mechanical properties and UV absorption were determined for the thin film based on the CeO2 nanoparticles/epoxy. Enhancements of 2.1, 3.0, 4.3 and 6.3% in tensile modulus were obtained after adding 0.5, 1.0, 2.0 and 3.0 wt% CeO2 nanoparticles, respectively. The tensile strength and elongation decreased with increasing CeO2 nanoparticle content up to 3.0 wt%. It was shown that even a small CeO2 nanoparticle concentration noticeably improves the UV absorption of the thin film: the epoxy thin films with 1.0 wt% of CeO2 nanoparticles absorbed more than 90% of UV light, with an absorbance peak at around 318 nm. The data obtained in these experiments can be used for the fabrication of UV-blocking thin films.
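As a worked illustration of the optical bookkeeping described above (R = Ir/I0, T = Iout/I0), the sketch below estimates the absorbed fraction at each wavelength as A(λ) = 1 − R(λ) − T(λ), neglecting scattering losses. The spectra are invented placeholders, not the measured data of figure 6.

```python
import numpy as np

# Invented placeholder spectra; real data would come from the
# Cary UV-5000 measurements of R(lambda) and T(lambda).
wavelength_nm = np.array([300.0, 318.0, 350.0, 400.0, 500.0])
R = np.array([0.06, 0.05, 0.05, 0.05, 0.04])  # reflected fraction Ir/I0
T = np.array([0.20, 0.05, 0.40, 0.80, 0.92])  # transmitted fraction Iout/I0

A = 1.0 - R - T            # absorbed fraction, neglecting scattering
absorbance = -np.log10(T)  # decadic absorbance from transmittance

for wl, a, od in zip(wavelength_nm, A, absorbance):
    print(f"{wl:5.0f} nm: absorbed {a*100:5.1f}%, absorbance {od:.2f}")
```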
2019-04-04T13:15:10.095Z
2011-12-01T00:00:00.000
{ "year": 2011, "sha1": "2f40134cb4706a71a8c81caea4c16994e1af0a22", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.1088/2043-6262/2/4/045013", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "20c32f8aa07320ec75d521b2f606f68dec8ce0b2", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
103799273
pes2o/s2orc
v3-fos-license
Chitosan composite scaffolds for articular cartilage defect repair: a review

Articular cartilage (AC) defects lack the ability to self-repair due to their avascular nature and the declined mitotic ability of mature chondrocytes. To date, cartilage tissue engineering using implanted scaffolds containing cells or growth factors is the most promising defect repair method. Scaffolds for cartilage tissue engineering have been comprehensively researched. As a promising scaffold biomaterial for AC defect repair, the properties of chitosan are summarized in this review. Strategies to composite chitosan with other materials, such as polymers (including collagen, gelatin, alginate, silk fibroin, poly-caprolactone, and poly-lactic acid) and bioceramics (including calcium phosphate, calcium polyphosphate, and hydroxyapatite), are presented. Methods to manufacture three-dimensional porous structures that support cell attachment and nutrient exchange are also included.

Introduction

Cartilage tissues exist in all movable (synovial) joints in the body and provide protective, wear-resistant surfaces at the ends of movable bones.1 Cartilage tissues play a significant role in human health and normal life. Nevertheless, many risk factors can injure these tissues, such as natural degradation and traumatic, post-traumatic, and degenerative disease. The lack of blood supply to cartilage and the low mitotic ability of chondrocytes result in the poor self-repair ability of articular cartilage (AC).2 Therefore, a method to assist the repair of damaged cartilage tissues with few side effects is urgently needed.

Due to the special structure and properties of cartilage tissues, effective methods for their defect repair have yet to be found. Existing autologous or allograft implantation methods have drawbacks, such as a lack of material sources, long recovery times,3 immunological rejection, and cross infection with donors. To address these issues, cartilage tissue engineering has been developed and its advantages demonstrated. In the repair process, scaffolds guide the migration of target cells, such as chondrocytes, to the injury site and stimulate their growth and differentiation, finally degrading in response to matrix remodeling enzymes released by cells as tissue repair progresses.4 Cartilage tissue engineering can create a more durable and functional replacement of the degenerated tissue in vivo. Furthermore, the regenerated tissue is more likely to survive the mechanical conditions in the joint.5

As a promising biomaterial, chitosan (CS) has outstanding bioactivities, antimicrobial properties,6 non-toxicity,7 biocompatibility,8 biodegradability,9 and superior physical properties.10 The application of CS has been studied in many areas, such as wound healing,11 drug delivery, tissue engineering,12 and water treatment. CS can be processed into diverse forms, including gels, films, fibers, and sponges.13,14 In particular, the hydroxyl groups of chitosan make it easy to composite with other materials.

The tissue engineering approach to repair and regeneration is based on the use of polymer scaffolds, which serve to support, reinforce, and, in some cases, organize the regenerating tissue.15 To date, various materials for cartilage tissue engineering have been developed, including polymers, such as chitosan16 and collagen,17 and bioceramics, such as calcium phosphate ceramic and hydroxyapatite.18 Natural or nature-derived polymers are themselves constituents of the living body.
The main advantages of natural polymers are their biocompatibility and the non-toxicity of their degradation products when implanted. Some natural polymers also possess the ability to activate gene expression. However, sources of natural polymers are limited and usually require complex post-processing. The main advantages of synthetic polymers are their superior mechanical properties, high porosity for chondrocyte and cartilage tissue growth, and adjustable properties; however, biological compatibility is their weakness when applied to cartilage tissue engineering. Bioceramics, meanwhile, exhibit excellent physical properties, but their fragility and degradation properties limit their applications in AC defect repair.

Much effort has been made to develop scaffolds with porous, interconnected network structures and the mechanical strength necessary to support cell attachment, proliferation, and differentiation toward repairing AC defects.19 Successful cartilage tissue engineering requires a biologically compatible and degradable scaffold with the desired structural features for cell attachment.20 This paper summarizes the CS composite scaffolds that have been investigated recently.

2 Articular cartilage defect repair

AC is a biphasic, viscoelastic, porous, and permeable material with unique biomechanical properties.21 AC is made up of chondrocytes distributed in the extracellular matrix (ECM). The ECM is composed of different types of collagens (mainly type II), proteoglycan, and non-collagenous proteins.22 The ECM occupies the vast majority of the tissue (more than 95%) and acts as the functional element,23 as shown in Fig. 1. Chondrocytes play a key role in building AC during development.24 However, the ability of chondrocytes in AC to repair and regenerate25 is gradually reduced with increasing age, while the cell density of chondrocytes in AC also decreases progressively. For instance, chondrocytes account for only about 1% of the volume of hyaline cartilage.26

AC can be divided into four zones based on the changing mechanical environment. From the surface to the inner tissue connecting with the subchondral bone, the mechanical stress caused by fluid flow and compressive strains becomes gradually weaker, while the fluid pressure increases, making the density, porosity, and form of the chondrocytes differ significantly across zones. These zones are referred to as the superficial zone, middle zone, deep zone, and calcified zone, as shown in Fig. 2.24,27

The main types of cartilage injury can be classified into partial-thickness and full-thickness defects. Self-healing after a defect involves the tissue itself and adjacent tissues.24 Mature chondrocytes, the only kind of cells found in AC, lack the ability to repair and regenerate. In partial-thickness defects, the calcified cartilage prevents adjacent cells from permeating into the void. In full-thickness defects, the damage runs through the AC and penetrates into the subchondral bone marrow,28 so that cells and fibrin from the marrow, which can form repair tissue similar to native cartilage, overflow into the lesion. However, this regenerated tissue always degrades within 6-12 months,28 meaning that recovery is only temporary. Therefore, in general, AC tissue possesses little self-repair ability. Cartilage tissue engineering seeks to repair voids or lesions by implanting biomaterial scaffolds containing cells and stimulating factors that can act as a substitute for the native AC ECM24 and simulate the chondrocytes' survival environment.29,30
3 Chitosan as a biomaterial

Chitosan (poly-β-1,4-linked glucosamine, CS) is a cationic polysaccharide made by the alkaline N-deacetylation of chitin.31 Sources of chitin are abundant in nature, such as crustacean shells, insect cuticles (mainly from shrimps and crabs), and fungal cell walls.32 CS is a linear polysaccharide consisting of β-linked D-glucosamine residues with a variable number of randomly located N-acetylglucosamine groups.33 The molecular structure of chitosan is similar to that of some polysaccharide repeating units in AC, as shown in Fig. 3,13,34 making the characteristics of chitosan similar to those of hyaluronic acid and glycosaminoglycan (GAG) in the ECM.30 Therefore, CS can simulate the articular cartilage ECM to some extent and could promote the formation of cartilage tissue. As a result, chitosan is a promising natural biomaterial scaffold for cartilage defect repair.35 Furthermore, the positive charge on chitosan could facilitate scaffold binding to negatively charged tissue cells or body fluids.

Bioactivities of chitosan

The bioactivities of chitosan are mainly reflected in its antimicrobial, antitumor, and antioxidant activities; the mechanisms are shown in Table 1.

Table 1. Bioactivity mechanisms of chitosan
Antimicrobial activity — Cationic nature — (1) Low molecular weight (MW) chitosan penetrates bacterial walls, binds with DNA, and prevents gene expression.37 (2) High MW chitosan binds with negatively charged components on the bacterial cell wall, forming a polymer membrane that prevents nutrients from entering the cell.38
Antitumor activity — Stimulating hormone secretion and cell maturation — (1) Increased lymphokine production resulting in proliferation of cytolytic T-lymphocytes.39 (2) Causes maturation or infiltration of macrophages and cytolytic T-lymphocytes and suppresses tumor growth.40
Antioxidant activity — Hydrogen donors — Scavenges unstable oxygen radicals, such as hydroxyl, superoxide, and alkyl radicals, and highly stable DPPH radicals.39

Park et al.36 examined a chitosan series against different types of Gram-negative and Gram-positive bacteria; chitosan showed better antimicrobial activity against Gram-negative bacteria, including E. coli, P. aeruginosa, and S. typhimurium, owing to its cationic nature. Sudarshan et al.37 explored the antibacterial action of three kinds of chitosan, as shown in Fig. 4: the CS solution significantly inactivated Enterobacter aerogenes and Staphylococcus aureus, with the degree of inactivation related to the type or concentration of the chitosan solution. Zivanovic et al.41 tested the antibacterial properties of chitosan composite film against Escherichia coli K-12, using phosphate buffer without film and phosphate buffer/film without inoculum as controls. The results showed that chitosan was endowed with antibacterial properties.

The bioactivities of chitosan help to reduce inflammation when chitosan scaffolds are used for AC defect repair. The research of Oprenyeszk et al.42 confirmed that the production of the inflammatory mediators IL-6 (p = 0.0012), IL-8 (p < 0.0001), and PGE2 (p = 0.0056) by chondrocytes was significantly reduced by the addition of chitosan after a 3 day culture; this significant inhibitory effect was sustained over a culture period of 17-21 days. Moreover, oxidative stress in cells and tissues leads to cell death and tissue injury and maximizes inflammation,43 which can hinder tissue wound healing. Therefore, the inherent antioxidant activity of chitosan is important for tissue engineering.
Biocompatibility of chitosan

When a biomaterial is implanted in the body, adverse reactions, such as immune rejection and cross infection, may occur owing to the body's own defense system. The degree of cell bonding and spreading on the implanted biomaterial affects cell proliferation and differentiation,44 which further affects the repair process. As a biomaterial, chitosan has been shown to have excellent biocompatibility.45 Zhang et al.46 indicated that the body's first reaction to an implant depends on its surface properties and explored the biocompatibility of chitosan-based materials via protein adsorption and cell culture. The results showed that the addition of polyethylene glycol (PEG) could significantly improve protein adsorption and cell adhesion, growth, and proliferation. Chitosan supports both osteoblastic and osteoclastic cell growth.47 Chen et al.14 described human skin fibroblast growth on chitosan microcarriers: after a 24 hour incubation, some cells had attached closely to the surface of the microcarriers (Fig. 5a), and after 120 hours, the cells had almost covered the microcarrier surface (Fig. 5b). These results support the biocompatibility of CS towards human cells. Chatelet et al.48 found that the degree of acetylation (DA) of chitosan films is inversely related to cell adhesion and proliferation, meaning that a lower DA results in higher cell adhesion: the charge density of chitosan increases with decreasing DA, which reinforces cell adhesion. Thorat et al.49 incubated magnetic nanoparticles (MNPs) and chitosan-MNPs with L929, HeLa, and MCF7 cells at different concentrations and incubation times in a simulated in vivo environment to explore whether chitosan-encapsulated MNPs affect cellular activity. Cell growth and death were measured to determine cell viability, and the results (Fig. 6) showed that cell viability increased after chitosan coating, indicating that the coating increased the biocompatibility. Malafaya et al.50 comprehensively explored the in vivo biocompatibility of chitosan particles implanted in animals. After two weeks of implantation, connective tissue was growing and neovascularization had increased between the particles of the scaffold. After three months of implantation, the general expression of von Willebrand Factor (vWF), observed one week after implantation due to the initially formed clot, had decreased, which supports the finding that the implanted chitosan particle scaffold stimulated neovascularization. Remarkably, no angiogenic growth factor or previously seeded angiogenic cells were used in this study. This research showed that chitosan is a promising biomaterial for cartilage defect repair, considering the factors mentioned above that prevent cartilage from self-repairing.

Biodegradability

Biomaterials pass through three phases in tissue: tissue removal, tissue replacement, and tissue regeneration. The main purpose of the last phase is to assist or enhance the body's own reparative capacity.51 Therefore, the biodegradability of a biomaterial is vital for regeneration after implantation in the human body and for inducing tissue self-regeneration. An appropriate biodegradation rate is required to match the rate of tissue regeneration:52 rapid degradation of the scaffold affects the adhesion of chondrocytes, while slow degradation hinders cell proliferation and matrix secretion.53
Furthermore, the appropriate use of degradable biomaterials is an effective way to reduce infection risk.54 Chitosan is a polysaccharide, which means it contains cleavable glycosidic bonds.55 Chitosan is known to be degraded in vertebrates predominantly by lysozyme and by certain bacterial enzymes in the colon.56 The degradation rate of chitosan in living organisms mainly depends on the degree of deacetylation (DD) and the average molecular weight (MW),9 which affect the affinity between enzyme and substrate.57 Increasing the DD decreases the degradation rate58 because the crystallinity of chitosan, a semi-crystalline polymer, increases with increasing DD, and polymer crystallinity is inversely proportional to biodegradation activity.55 The degradation rate reflects the extent of degradation within a certain time period. The molecular weight of chitosan affects its chain flexibility in solution, which in turn affects its affinity for enzymes in hydrolysis reactions.57 Jin Li prepared chitosans with different MWs and the same DD through ultrasonic degradation to explore the connection between MW and degradation, showing that a lower MW results in a faster degradation rate. To summarize, chitosans with lower DDs and MWs are more susceptible to enzymatic degradation,57 so the chitosan degradation rate can be altered by adjusting the DD or MW.

Chitosan is mainly degraded by lysozyme hydrolysis, and its by-products exhibit no pyrogenic activity, toxicity, or mutagenic effect on implanted cells.59 Li et al.60 studied the metabolism of chitosan after muscle implantation in rats through fluorescence labeling. The results showed that chitosan is gradually distributed to the serum and major organs (liver, spleen, kidney, heart, and brain) after implantation. Although the MWs of the chitosan degradation products found in different organs vary, all were less than 10 kDa, as determined from the dominant fraction in the organs at a given metabolic time, using dextran as a comparison.60 Furthermore, chitosan and its degradation products can induce gene expression in cells for AC components, such as type II collagen.61

4 CS composite scaffolds for articular cartilage defect repair

Scaffolds designed for AC tissue engineering should exhibit physical structures and chemical compositions similar to those of the cartilage matrix. The molecular structure of chitosan is similar to that of glycosaminoglycan (GAG), which constitutes the AC ECM; this GAG analog is therefore a good choice for fabricating scaffolds for AC defect repair. When substituting for native AC ECM, scaffolds should be able to: (i) match the native tissue and possess sufficient strength to support cell growth;62 (ii) generate a highly hydrated environment that allows the appropriate exchange of nutrients, electrolytes, oxygen, metabolic waste, and small-molecule mediators of cell viability, differentiation, and function; (iii) establish intimate cell-matrix contact to ensure optimal cell viability, differentiation, and function; and (iv) seamlessly integrate with the adjacent cartilage tissue and adhere to the subchondral bone.24 Although CS has superior bioproperties as a biomaterial, it has drawbacks when applied alone to cartilage tissue engineering. The required properties can be tuned by cross-linking or compositing with other polymers and bioceramics.63 In this section, we review recent studies on materials composited with CS.
4.1 Composites with polymers

Polymers can be classified into naturally derived polymers and synthetic polymers according to their source. When applied to AC defect repair, natural polymers, such as collagen and gelatin, offer good cytocompatibility but have poorer mechanical stability or are easily degraded, while synthetic polymers, including poly-lactic acid (PLA) and poly-caprolactone, have the opposite properties.64

4.1.1 Natural and nature-derived polymers

4.1.1.1 CS/collagen. As mentioned above, collagen is a major component of the AC-specific ECM. Collagen, typically type II, is reported to form a framework that supports the structure and normal function of AC.65 Furthermore, collagen is a factor that induces chondrogenesis, both in vitro and in vivo,66 and has persistent water-uptake ability. However, collagen has a rapid degradation rate and low mechanical strength.67 Tangsadthakun et al.68 composited chitosan of different viscosity grades with collagen and studied the degradation behavior of the composite scaffolds in vivo and in vitro. The results showed that low-MW chitosan could improve the compressive modulus and degradation behavior. Furthermore, a collagen/chitosan dosage ratio of 7 : 3 significantly favored cell adhesion and proliferation on the scaffold.

4.1.1.2 CS/gelatin. Gelatin is a collagen derivative: denaturation reduces the antigenicity of animal collagen, and gelatin is cheaper than collagen. Suh et al.69 obtained CS-gelatin scaffolds by freeze-drying the mixed solution and cross-linking with a water-soluble carbodiimide. Sechriest et al.70 reported that chondrocytes grow slowly on chitosan due to its positive charge; as gelatin is negatively charged, the ionic interaction between these two materials could improve the properties of both for AC defect repair. In a study of cell growth in vitro, cell attachment on C1G1WSC (CS/gelatin = 1 : 1, water-soluble crosslinked) and its films was similar to that on TCPS (blank well) after 24 h, and the advantage of the gelatin composite was clear after 48 h.71 Xia et al.72 prepared CS-gelatin (1 : 1) porous scaffolds by freezing and lyophilizing a chitosan-gelatin solution in precooled glass dishes. Pore sizes ranging from 60 to 200 µm in diameter were obtained, and the seeded chondrocytes could adhere, spread, multiply, and secrete their matrices on the porous chitosan-gelatin scaffolds.72 However, chitosan alone did not promote chondrocyte adhesion, differentiation, proliferation, or matrix secretion; therefore, the authors could not conclude that gelatin blending optimized chitosan for articular defect repair.

4.1.1.3 CS/alginate. Alginate is a family of polyanionic copolymers derived from brown sea algae. Takahashi et al.73 have performed much work on CS/alginate composite scaffolds. The ionic interaction between the negatively charged carboxyl groups (-COOH) of alginate and the positively charged amino groups (-NH2) of chitosan forms the CS/alginate complex, leading to a significant increase in the Young's modulus and yield strength of the materials.74 Li et al. also studied and compared chondrocyte attachment and proliferation on porous chitosan and CS-alginate scaffolds. SEM images of chondrocytes grown on chitosan-alginate and chitosan scaffolds after 14 days of culture are shown in Fig. 7; the blend scaffold showed significantly improved cellular compatibility.75 Furthermore, Tıglı et al.
Furthermore, Tıglı et al. 76 prepared composite scaffolds by introducing chitosan into an alginate matrix in semi-IPN form. This type of CS/alginate scaffold showed improved cell functionality and phenotype retention. Li et al. 77 also studied CS/alginate hybrid scaffolds, comparing the swelling behavior of the composite and chitosan scaffolds. Research 78 has indicated that initial swelling can promote cell attachment and growth, with subsequent stabilization helping to maintain the mechanical properties. Chitosan swells readily in biological fluids, while the alginate composite stabilized the overall size after the initial swelling throughout the study period (6 weeks), owing to the interaction of the amine groups on chitosan with the carboxyl groups on alginate. 77

4.1.1.4 CS/silk fibroin. Bhardwaj et al. 79 prepared silk fibroin (SF) scaffolds using a freeze-drying method and explored the in vitro degradation of SF/CS blend scaffolds of different concentrations. The chitosan degradation rate depends on the DD; 80 the DD of the chitosan in this study was 85%. The scaffold weight loss and change in pH in 0.05 M PBS containing 1.6 mg mL^-1 of lysozyme versus pure PBS are shown in Fig. 8. The results indicated that the addition of silk fibroin slowed scaffold hydrolysis in lysozyme and hindered degradation in PBS (pH 7.4). Therefore, the biodegradation time of the scaffolds was increased. 79 In another study, evaluating the chondrogenic proliferation and differentiation of rat mesenchymal stem cells on SF/CS blended porous scaffolds, a significant difference was found between the proliferation rates on SF/CS blended scaffolds and on silk fibroin scaffolds. 81

4.1.2 Synthetic polymers

4.1.2.1 CS/poly(ε-caprolactone) (PCL). PCL is obtained from the ring-opening polymerization of the seven-membered lactone ε-caprolactone (ε-CL). PCL hydrolysis yields 6-hydroxycaproic acid, an intermediate of ω-oxidation, which enters the citric acid cycle and is completely metabolized. 82 Woodruff et al. 83 obtained CS/poly(ε-caprolactone) blended 3D fiber-mesh scaffolds and studied their properties for AC application. PCL was found to alter the physicochemical and mechanical characteristics of the scaffold, which could compensate for the brittleness of CS scaffolds in the wet state (40-50% strain at break) 84 in AC regeneration applications. Conversely, the positive charge, biocompatibility, and biodegradability of chitosan would compensate for the drawbacks of PCL, such as its hydrophobicity. The results showed that blending improved cell spreading and did not affect cell survival or impair metabolic activity. The CS blend showed the best physicochemical and biological properties. 85

4.1.2.2 CS/poly-lactic acid (PLA). PLA is a thermoplastic, biodegradable polymer produced synthetically by the polymerization of lactic acid monomers or cyclic lactide dimers. Lactic acid is produced by the fermentation of natural carbohydrates, such as maize or wheat. 82 PLA is biodegradable, recyclable, and compostable, and it hydrolyzes to its constituents when implanted in the human body. The degradation products of PLA are non-toxic and are excreted. 86 There is evidence that PLA alone is not a promising choice for an AC repair scaffold, with the PLA matrix shown to be too hard, resulting in interference with the repair process. 87 PLA appears, however, to be a valid choice for blending with chitosan scaffolds, whose compression resistance is low, to fabricate a more stable skeleton. 88
Scaffolds that possess a higher porosity or a relatively larger pore size favor chondrocyte adhesion, proliferation, and nutrient exchange. Lou et al. 89 added short chitosan fibers into PLA to obtain hierarchical porous scaffolds. The porosity of the prepared PLLA/CS composite scaffolds was over 94%. Furthermore, the addition of chitosan improved the compressive modulus and protein adsorption while stabilizing the pH of the microenvironment during degradation. Other synthetic polymers have been combined with chitosan to form polyelectrolyte complex scaffolds studied for cartilage repair tissue engineering. Examples include poly(glutamic acid), 90 poly(ethylene oxide), 91 and copolymers. When composited with chitosan, synthetic polymers provide mechanical support to the scaffolds, facilitating the formation of hybrid scaffolds in compatible shapes and affording hybrid sponges with good mechanical properties. 92

4.2 Composites with bioceramics

4.2.1 CS/calcium phosphate (CPC). CPC is highly promising for use in human body tissue repair. Composite CS-calcium phosphate scaffolds are typically fabricated using co-precipitation methods. 93 Combining chitosan with calcium phosphate enhances its strength and osteoconductivity. Coating the scaffold with type I collagen makes the surface hydrophilic, which improves cell adhesion. 94 Xu et al. 95 mixed calcium phosphate powder with chitosan at different powder/liquid ratios and altered the macropore volume of the scaffold using different mannitol templates. The incorporation of chitosan enhanced the flexural strength and elastic modulus compared with the pure CPC scaffold, reducing the natural fragility of the CPC bioceramics.

4.2.2 CS/calcium polyphosphate (CPP). CPP is biodegradable and can eventually be replaced by the repaired tissue. The degradation products of CPP include calcium and phosphate, which are elements essential to organisms. Furthermore, calcium and phosphate do not induce inflammatory reactions. 96 Kandela et al. 97 explored a biphasic construct containing a CS/CPP scaffold and its ability to repair a full-thickness osteochondral defect in a sheep knee (stifle), studying the change in the cartilage tissue after scaffold implantation. 97 The results showed that the implant steadily integrated with the adjacent cartilage.

4.3 Other chitosan composite scaffolds

Other types of chitosan composite scaffold have also been studied for AC defect repair. Man et al. 101 studied allogenic chondrocytes in a chitosan hydrogel (CS)-demineralized bone matrix (DBM) hybrid scaffold (CS/DBM) to repair rabbit cartilage injury. The scaffold was prepared by inserting DBM into CS hydrogel solution followed by incubation at 37 °C for 10-15 min. 101 The CS, DBM, and CS/DBM scaffolds were used in vivo to repair rabbit cartilage injury, and the mechanical properties of the repaired tissues were compared, as shown in Fig. 10. Microfracture (MF), a common treatment for cartilage defects, 102 served as the control group. The elastic modulus and hardness of the repair tissue were enhanced for CS/DBM compared with CS and DBM alone, 101 resulting in a repair tissue with improved mechanical stability. Furthermore, Shivaprasad et al. 43 successfully fabricated biocompatible pristine and melanin-composite silk fibroin biomaterial scaffolds with antioxidant and electroactive properties. The dual antioxidant and electrical-conductivity functions of the composite scaffolds supported proliferation and induced better differentiation of cells.
It would be worth studying the effects of melanin on the chitosan or chitosan/silk fibroin composite scaffolds discussed above in the field of cartilage tissue engineering. Scaffolds formed of three or more phases have also been studied. Huaping Tan et al. 64 have prepared gelatin/chitosan/hyaluronan scaffolds for cartilage tissue engineering in several studies.

Importance of porous structure

As the scaffold acts as an ECM substitute, it should provide channels for the diffusion of gases and nutrients to the cells, the migration of the cells themselves, 103 and the elimination of metabolic waste. Therefore, an implantable scaffold should be porous. Furthermore, scaffolds bearing pore structures have improved biocompatibility. 104 Several methods have been developed to prepare porous chitosan-based hybrid scaffolds, including freezing-based, 99 porogen leaching, 103 and gas-forming methods. 105,106 Scanning topographies of interconnected porous chitosan scaffolds of different pore sizes are shown in Fig. 11. 107

Freeze-drying is a method in which the solution is frozen at a certain temperature and the scaffold is then lyophilized at a lower temperature. The solvent is sublimated during lyophilization, resulting in the formation of interconnected pores in the matrix. Factors such as the shape and size of the ice crystals, the type of solvent and polymers, the polymer solution concentration, the freezing temperature, and the speed of crystallization are known to affect the morphology and porous architecture of the scaffolds. 108 Furthermore, O'Brien et al. 109 developed a novel freeze-drying method that used continuous cooling instead of a constant freezing temperature. This process produced a more homogeneous and uniform cellular structure. Freeze-gelation is a method in which a chitosan-acetic acid solution is immersed in a gelation solution and then frozen 110 to create a porous structure. The resulting pore size is affected by the freezing temperature. 107,111 Hsieh et al. 111 produced frozen chitosan scaffolds at -80 °C, -60 °C, -40 °C, and -20 °C to investigate the relationship between the freezing temperature and the tensile properties of the porous chitosan scaffold, with the tensile stress and strain found to increase at higher freezing temperatures. Freeze-drying is a widely applied method in tissue engineering. 112,113 However, it is difficult to maintain the shape and topography using this method due to solution shrinkage, resulting in irregular pore sizes.

The porogen leaching method 92 involves mixing the porogen with a matrix, drying the mixture, and then removing the porogen by leaching it out with certain solvents. This method has been shown to allow easy manipulation of the pore structure, with the pore size controlled by altering the particle diameter. Commonly used porogens include salts, polyethylene glycol, dibutyl phthalate, and stearic acid. The salt-leaching method is a good way to create porosity, because it uses salt particles instead of organic solvents as porogens. 114 Commonly used salts are NaCl 115 and CaCl_2. 116 However, the porogen leaching method cannot produce completely interconnected pores, and porogens can be left behind.

3D printing is more advantageous for obtaining open-pore structures. 3D-printed scaffolds allow for rapid and favorable architecture design to optimize cell growth and matrix production, and therefore provide valuable information towards better scaffold design for cartilage repair. 117
Fig. 11 Morphology of porous chitosan scaffolds with pore sizes of (a) ≤10 μm, (b) 10-50 μm, and (c) 70-120 μm in diameter. 107 Scale bar = 100 μm.

Furthermore, the pore structure can be designed for 3D printing. Senatov et al. 113 obtained porous PLA-based scaffolds through 3D printing in which all pores were interconnected. 3D printing can be performed using indirect and direct methods. In the indirect method, molds are printed using commercially available plaster powder before biodegradable polymers are cast into the printed mold, 118 while direct 3D printing eliminates the demolding process. However, the applicable polymer materials are limited owing to the organic solvents used in most printheads. Furthermore, 3D printing remains an expensive technique for now.

The gas-forming method is mainly used in polymeric matrices to create pores by dissolving gas into a liquid matrix and then venting the gas at reduced temperature and atmospheric pressure. The most commonly used gas is carbon dioxide (CO_2), due to its low cost, high stability, and low toxicity. The temperature and pressure affect the solubility of the gas, and the rate of the depressurization process for gas venting influences the uniformity of the pore structure. 105

Scaffold pore structures, including pore size, porosity, and homogeneity, have been shown to affect cell adhesion and mechanical properties. Although pores smaller than 50 μm have been recommended to improve the biomechanical strength of engineered constructs, 107 increasing the pore size creates more space for chondrocytes and nutrients. A large pore size or porosity favors the exchange of matter; however, it also leads to lower cell attachment and intracellular signaling. 119 The optimum pore size should allow maximal cell adhesion, growth, and differentiation; for chondrocyte ingrowth it has been identified as 70-120 μm. 107 The methods for preparing porous structures discussed above each have their pros and cons. In practice, two or more methods are integrated to optimize scaffolds for AC tissue engineering. Commonly used combinations include porogen leaching/freeze-drying, 120 3D printing/salt leaching, 118 and cross-linking/freeze-drying. 121
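Porosity figures such as those quoted in this section are commonly estimated gravimetrically, as one minus the ratio of apparent to skeletal density. The short sketch below illustrates this calculation; the skeletal density of 1.4 g cm^-3 for chitosan is an assumed placeholder, not a value taken from the reviewed studies.

def porosity(mass_g, length_cm, width_cm, height_cm, skeletal_density=1.4):
    # Gravimetric porosity estimate: 1 - rho_apparent / rho_skeletal.
    # skeletal_density is the assumed density of pore-free material (g/cm^3);
    # replace it with a measured value for the actual composite.
    apparent_density = mass_g / (length_cm * width_cm * height_cm)
    return 1.0 - apparent_density / skeletal_density

# e.g. a 1 cm^3 scaffold weighing 80 mg comes out at roughly 94% porosity
print(f"porosity = {porosity(0.08, 1.0, 1.0, 1.0):.1%}")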
Implant example

Biomaterials have a long development-to-application cycle. During the course of a study, cell culture and animal implantation experiments in vitro or in vivo are the typical methods for assessing the effect of biomaterial application. Zhao et al. 122 implanted a chitosan scaffold seeded with rabbit chondrocytes as the experimental group and a pure chitosan scaffold as the blank group into defective rabbit AC. Compared with the control group, which received no implantation, the repair effect was quite significant. The gross morphology and safranin-O staining of the repaired cartilage of the control, blank, and experimental groups at 4, 8, and 12 weeks are shown in Fig. 12.

Conclusion

AC is avascular and has almost no self-repair ability. The fluid pressure and compressive-tensile properties of the in vivo surroundings, together with its complex structure, are the main challenges facing defect repair. Blend scaffolds that combine chitosan with polymers, ceramics, or other materials have become important in the field of AC defect repair. Chitosan and its composites for AC defect repair have been reviewed here; these materials were notably improved when composited with chitosan whose degree of deacetylation, average molecular weight, and form were tailored. Scaffolds are usually porous in three dimensions, similar to the AC tissue. Furthermore, as AC is organized into four zones from top to bottom, the development of 3D porous, gradient chitosan blend scaffolds that more closely mimic the ECM is an important direction for further research. The 3D porous scaffold is also beneficial for chondrocyte proliferation and differentiation. Porous chitosan composite scaffolds thus show good development potential for AC defect repair.

Conflicts of interest

There are no conflicts to declare.
The impact of rural health insurance and the family physician program on hospitalizations: a before-after study at the county level conducted in Tehran province, Iran

Background: The rural health insurance and family physician reforms in Iran were implemented in 2005. This study was conducted to assess the effect of these reforms on avoidable hospitalizations among the rural population of Eslam-shahr County, Iran. Methods: We conducted a before-after study in Eslam-shahr County's single existing hospital. This county is part of the Tehran Province of Iran. The demographic characteristics and diagnostic codes of the rural population hospitalized during the 2 years before and the 2 years after the reforms were extracted from the hospital's electronic information system. A list of 61 three-character and 131 four-character avoidable hospitalization (AH) codes was developed based on a literature review. We estimated a logistic regression model, which included gender and age as independent variables, to assess changes in the probability of avoidable hospitalizations following reform implementation. Analyses were carried out using STATA version 13. Results: We recorded 817 rural hospitalizations before and 967 hospitalizations after reform implementation, corresponding to hospitalization growth of 18.4% after the reforms. The logistic regression results show that the probability of avoidable hospitalizations decreased after the interventions compared to before they were put into place (OR: 0.46; 95% CI: 0.24-0.88). Also, the probability of AHs in the 60+ age group was considerably higher than in other age groups. No statistical relationship was found between avoidable hospitalizations and gender. Conclusion: The reforms may have had a mixed effect on hospitalization. They may increase hospitalizations by responding to the unmet needs of the population, and simultaneously they may decrease avoidable hospitalizations and eliminate the costs these impose upon the health system.

Introduction

Universal health coverage (UHC) aims to ensure that all people, whether rich or poor, have access to effective health services that meet most of their health needs, without being exposed to the risk of financial hardship (1). Actions towards universal health coverage can have mixed effects on hospitalization rates. The development of health insurance services removes financial barriers that prevent access to secondary health care services, and therefore increases hospitalization rates (2)(3)(4)(5). Expansion of primary health care (PHC), an approach that has been endorsed by the WHO as a global strategy to reduce costs and improve health care quality (2)(3)(4), leads to an increase in referral-sensitive hospitalizations by identifying unmet needs (5)(6)(7) and also decreases avoidable hospitalizations by providing timely preventive care (8,9). Avoidable or preventable hospitalizations are those for which timely and effective primary care could have prevented the need for hospital admission (9,10). Avoidable hospitalization rates are used as indicators of the accessibility or quality of primary health care services (11,12). Iran has enjoyed an extensive network of publicly funded primary health-care services in rural areas since the mid-1980s. A "health house" is a part of the district PHC network; it is staffed by one or two Behvarzes (community health workers), who serve an average population of around 1,500 (13).
PHC networks have achieved great results in promoting the health of the general population, but problems such as financial barriers preventing the rural population from receiving secondary care, poor referral systems, and epidemiological changes such as the increasing rates of chronic diseases led to the development of the reforms of 2005 (14). Under these reforms, all rural areas and towns with populations of fewer than 20,000 people were covered by a rural health insurance program funded by the Iran Health Insurance Organization (IHIO). The family physician program was carried out simultaneously in these areas, and around 6,000 physicians and 4,000 midwives were added to the PHC network (15). To assess the effect of these reforms on hospitalizations, Rashidian et al. conducted a study in an underserved province of Iran. The results showed that hospitalization rates increased up until one year after the interventions. However, that study did not present any findings on the effect of the reforms on avoidable hospitalizations (AHs) (6). In this study, we explored the effects of the reforms on total and avoidable hospitalizations in Eslam-shahr, a county in the capital province of Iran. This county has 131 villages, and its rural population was approximately 49,237 based on the 2006 census, conducted one year after the implementation of the reforms. Eslam-shahr has only one hospital, which is financed by the Social Security Insurance Organization (SSIO). The population covered by the SSIO receives hospital services free of charge. This hospital is the main place where the rural residents of Eslam-shahr County receive secondary health care services, owing to its proximity to the rural areas. The rural population covered by the SSIO therefore faces no financial barriers in using these hospital services. After the actions taken for the development of UHC in 2005, all rural residents had access to a rural family physician, from whom they would receive timely primary care and, if necessary, referral to the county hospital. Also, all of the rural population was covered by the rural health insurance system; therefore, those rural residents who had previously lacked SSIO coverage no longer faced financial barriers to using hospital services. It should be noted that, as of 2005, part of the rural population was covered by two health insurance systems, both the rural health insurance and the social security health insurance. Although it was not possible to identify those with dual insurance coverage in this study, we were nevertheless able to distinguish the effect of the family physician program on both avoidable and referral-sensitive hospitalization rates.

Methods

We conducted a before-after study, a type of quasi-experimental study, set in Eslam-shahr County. We selected this county because its hospital was the only one with a functioning electronic hospital information system during the time period of our study, whereas other hospitals lacked an electronic record system and their paper medical records had been eliminated after expiration of the legal archival period. We extracted from the hospital's electronic information system the demographic characteristics of all members of the rural population hospitalized in the 2 years before and the 2 years after the intervention, between April 2003 and April 2008. These characteristics included age, gender, and diagnostic codes.
The diagnostic codes were recorded based on ICD-10. Some countries have established lists of avoidable diagnostic codes; however, in Iran there is no research-based, predetermined AH diagnostic code list, and we therefore drew on studies conducted in other countries. We identified a list of 61 three-character and 131 four-character AH diagnostic codes based on a literature review. These diagnostic codes are used as indicators for assessing the accessibility or effectiveness of primary health care services in Germany, England, and Canada (16)(17)(18). Results are presented using descriptive statistics. A logistic regression model was used to assess whether the probability of AHs changed following implementation of the reforms. We included gender and age as independent variables in the model and estimated odds ratios with 95% confidence intervals. We conducted several diagnostic tests after developing the model. Linktest results showed no specification errors in the logit model, Hosmer-Lemeshow statistics showed an appropriate goodness of fit, and there was no collinearity among the independent variables. All analyses were carried out using STATA version 13 and SPSS version 18.

Results

There were 817 rural hospitalizations before the reforms and 967 after the reforms, a growth in total hospitalizations of 18.4%. The ratio of avoidable to total hospitalizations decreased by 1.3 percentage points following the reforms (Table 1). Total hospitalizations before the reforms were 242 among men and 560 among women. Following the reforms, the number of total hospitalizations among men (264 cases) remained lower than among women (682 cases). In both genders, hospitalizations were mostly referral-sensitive: referral-sensitive hospitalizations before and after the reforms numbered 230 and 252, respectively, among men, and 546 and 674, respectively, among women. Before the reforms, the percentage of avoidable hospitalizations was 5.8% among men and 2.5% among women. Following the reforms, this percentage dropped to 4.5% among men and 1.2% among women. Based on these results, the percentage of avoidable hospitalizations among men was higher than among women both before and after the reforms, and this percentage decreased after the reforms in both genders. Total hospitalizations were highest among the 20-40-year-old population, both before and after the interventions, while AHs in this age group were the lowest throughout the study. Before the reforms, the highest AHs occurred among those aged 1-20; following the interventions, the highest AHs were seen among the 60+ population (Table 2). The logistic regression results showed that, after demographic characteristics were taken into account, the probability of AHs after the interventions was lower than before the interventions (OR: 0.46; 95% CI: 0.24-0.88). Also, the probability of AHs in the 60+ age group was considerably higher than in other age groups (p < 0.05). No statistical relationship was found between AHs and gender (Table 3).

Discussion

The results showed an increase in total rural hospitalizations compared to before the interventions. Although the ratio of avoidable to total hospitalizations was low both before and after the interventions, we observed a significant decrease in AHs following the reforms.
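For readers who want to reproduce this kind of model outside STATA, a minimal sketch in Python with statsmodels is given below. The data frame is synthetic, with effect sizes invented only to mimic the reported directions; the variable names are placeholders, not the study's actual coding.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1784  # total hospitalizations before + after, as in the study
df = pd.DataFrame({
    "after":  rng.integers(0, 2, n),   # 1 = hospitalized after the 2005 reforms
    "female": rng.integers(0, 2, n),
    "age60p": rng.integers(0, 2, n),   # 1 = aged 60 or over
})
# Synthetic outcome, roughly mimicking the reported effect directions.
lin = -3.0 + np.log(0.46) * df["after"] + 1.0 * df["age60p"]
df["avoidable"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

fit = smf.logit("avoidable ~ after + female + age60p", data=df).fit(disp=0)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals

Exponentiating the fitted coefficients and their confidence bounds yields odds ratios of the form reported above (e.g., OR 0.46 for the post-reform period).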
Since some rural residents were covered by the social security health insurance before reform implementation and could therefore access hospital services without financial barriers, this might explain the low rates of AHs witnessed both before and after the interventions. However, we also saw an increase in referral-sensitive (unavoidable) hospitalizations following implementation of the reforms, which may result from the reforms' response to the unmet needs of the rural population. A previous study conducted in an underserved province of Iran showed similar findings, with a significant increase in hospitalization rates one year after implementation of the reforms (6). In confirmation of our results, a systematic study showed that a strong primary health system (with a greater supply of physicians and continuity of care) resulted in a decrease in AHs (12). In the US, increased insurance coverage and programs for increasing children's access to primary care resulted in the elimination of AHs (19). In London, providing specialist primary diabetes services resulted in a decrease in AHs; however, this was not seen for asthma-related hospitalizations (20). Expansion of family health programs prevented AHs among children under 5 in Brazil (21). Studies of specific aspects and characteristics of PHC have shown results that are at times contradictory to ours. The supply and distribution of physicians had little effect on avoidable and total hospitalizations among Medicaid beneficiaries (22), and in a study carried out in the US, the supply of physicians had no relation to AHs in rural areas (23). However, some studies exploring aspects of PHC in detail showed results similar to ours; for example, two studies in Canada showed decreased referrals to emergency departments (24) and decreased avoidable hospitalizations (25) among elderly people who had continuous contact with PHC physicians. It seems that the UHC interventions in the rural areas of Eslam-shahr affected both health care access, by increasing referral-sensitive hospitalizations, and efficiency, by decreasing AHs. Therefore, we can state that UHC interventions may increase long-term health care costs by adding referral-sensitive hospital expenditures while, on the other hand, decreasing these costs by preventing AHs. In the US, it has been estimated that the development of UHC resulted in increased ambulatory-care-sensitive hospitalizations and, as a result, increased health expenditure (26). Also, expansion of the basic insurance system in China resulted in hospitalization growth (27). In our study, the effect of socioeconomic factors on hospitalization was not explored, and only the demographic characteristics of the rural population were considered. The results showed no relation between gender and AHs. In contrast to our results, in Brazil a community-based primary care program decreased AHs for diabetes and pulmonary diseases, with a greater effect among women (28). In our study, AHs were statistically related to age, with rates higher among the 60+ age group. This could be due to the higher rates of chronic diseases in this age group, since the diagnoses of these diseases fall under the AH category. A strength of our study is the use of a before-after design to measure the program effects. We also determined the reform effects on both avoidable and referral-sensitive hospitalizations.
This gives policymakers a comprehensive picture of the effect of the reforms on both access and efficiency. However, our study had some limitations. One was the lack of a control group to ensure that other interventions had not affected hospitalization in the intervention group or rural population. Another was that the study was conducted at the county level: the characteristics of the rural population, especially the access of some residents to another insurance scheme, mean that our results cannot be generalized to other counties in Iran. Also, we were unable to measure the effect of factors such as the socio-economic characteristics of the rural population using hospital services, or their initial health status, since we had no access to these data. Despite these limitations, our results provide a good basis for future studies aiming to assess reform effects in relation to the contextual characteristics of the rural areas of each county.

Conclusion

The family physician program and the expansion of health insurance may have a mixed effect on hospitalization. On the one hand, they could increase hospitalizations through the identification of, and response to, the unmet needs of the population; on the other hand, they could decrease AHs, eliminating the costs these impose upon the health system.
Theory of rapid force spectroscopy

In dynamic force spectroscopy, single (bio-)molecular bonds are actively broken to assess their range and strength. At low loading rates, the experimentally measured statistical distributions of rupture forces can be analysed using Kramers' theory of spontaneous unbinding. The essentially deterministic unbinding events induced by the extreme forces employed to speed up full-scale molecular simulations have instead been interpreted in mechanical terms. Here we start from a rigorous probabilistic model of bond dynamics to develop a unified systematic theory that provides exact closed-form expressions for the rupture force distributions and mean unbinding forces, for slow and fast loading protocols. Comparing them with Brownian dynamics simulations, we find them to work well also at intermediate pulling forces. This renders them an ideal companion to Bayesian methods of data analysis, yielding an accurate tool for analysing and comparing force spectroscopy data from a wide range of experiments and simulations.

Caption of Supplementary Fig. 1 (the horizontal axes span loading rates from 10^0 to 10^11 pN s^-1): To evaluate how well our theory fares in practice, we have performed Brownian dynamics simulations of particles bound either via a cusp-shaped (a, b) or linear-cubic potential (c, d) and acted upon by an external field/soft spring (a, c) or by a stiff spring (b, d). The rupture force histograms thus obtained were analyzed "locally" (performing a different fit for each loading rate) and "globally" (performing a single fit for several decades in Ḟ), using both our own theory and the best available steady-state ("slow") theories (specifically, "slow cusp/cubic" denotes the DHS [14] (a, c)/Maitra-Arya [17] model (b, d), evaluated either for a cusp-shaped or for a linear-cubic binding potential; see Supplementary Note 1 for further details). We have furthermore followed standard operating procedures by alternatively fitting the mean rupture force ⟨F⟩ as a function of loading rate, using the analytical results of the DHS [15] (a, c)/Maitra-Arya model (b, d), again specialized either to a cusp-shaped or linear-cubic binding potential, and using the numerical interpolation derived by Hummer & Szabo [19] ("HS") that extends to arbitrarily high loading rates. Each single fit is represented by a box split in half along its vertical axis, with the box width indicating the range of external loading rates taken into account and the two halves containing the best-fit value x_b^fit for the attraction range, measured in multiples of its true value x_b = 1 nm, and the best-fit value E^fit for the energy barrier, also measured in multiples of its true value E = 10 k_B T. As an example, we have highlighted in green in (a) the results of a 6-decade fit to simulations of a cusp-shaped binding potential under a linearly increasing external force field. The entry spans 6 decades, from 1 to 10^5 pN s^-1, and belongs to the category "slow/cusp", indicating that a maximum-likelihood analysis of rupture force data obtained under 6 different loading rates has been performed using the "cusp" (ν = 1/2) DHS [14] model, yielding the results shown in the figure.

Caption of a further supplementary figure: We have generated rupture force histograms analogous to those shown in fig. 3 of the main text. Solid lines show a global fit of our theory with µ as a free parameter (see Table 1).
The best-fit parameters thus obtained are E = 11.2 k_B T, x_b = 1.13 nm, D = 774 nm^2 s^-1, and µ = 5.75.

Caption of Supplementary Fig. 4 (the current experimental limit of pulling speeds is marked in the figure): At low pulling speeds (red curve), the bound state remains stable up to the time of bond rupture; on an experimental timescale, the subsequent decrease in pulling force appears instantaneous. Within the ballistic regime, the bound state is no longer stabilized by a free energy barrier; still, a pronounced change in intermolecular binding forces −U′(x ≈ x_b) manifests itself in a force maximum at x_b for intermediate pulling speeds (γv − κ[y(t_rupture) − x_b] − F_R < 0, green curve), whereas at even larger pulling speeds the force continues to rise towards its ultimate limit F(t → ∞) = vγ − F_R, but exhibits a kink at x = x_b (blue curve). Hence, the characteristic signature of bond rupture is found to vanish gradually beyond the critical pulling force, disappearing completely only in the ultimate limit v = ∞.

Supplementary Note 1: Benchmarking.

To assess the practical performance of our theory in comparison to established models of dynamic force spectroscopy, we generate synthetic rupture force histograms over a wide range of loading rates, from 1 to 10^11 pN s^-1 (using a direct integration of the underlying Langevin equation, as explained in the Methods section of the main text). In addition to the cusp-shaped binding potential described in the main text, we also consider a linear-cubic binding potential, U(x) = (3E/2)(x/x_b) − 2E(x/x_b)^3. In contrast to the cusp potential, the linear-cubic potential lacks an absorbing boundary, so that rebinding must be prevented by strong repulsive forces within the unbound state. We thus only consider the bond broken once x has passed 1.5 x_b. For each of the four different experimental setups (cusp/field, linear-cubic/field, cusp/spring, linear-cubic/spring), we obtain a set of 12 different rupture force histograms, each of which we first analyze separately. Apart from our own theory, we also apply the corresponding results by DHS [14] and Maitra & Arya [17] (and eq. (4-S)), depending on which best applies to the experimental setup at hand. Furthermore, we generate "global" fits by subsuming histograms obtained under a range of different loading rates and analyzing them all at once using the maximum-likelihood method proposed in [25]. Since we do not want to bias the results by our choice of starting value, we use a global optimization method (the NMinimize optimizer integrated into Mathematica), only restricting the range of allowed parameters to a rather large region (3 k_B T < E < 30 k_B T, 0.3 nm < x_b < 3 nm, 300 nm^2 s^-1 < D < 3000 nm^2 s^-1, 0.3 < µ < 9). For each fit, "global" and "local", 1600 measured rupture forces per loading rate were used in the analysis. Conventionally, global fits are often obtained not by fitting the rupture force distributions themselves, but by analyzing the mean rupture force ⟨F⟩ [39] (or the most probable rupture force [13]) as a function of the external loading rate. We have thus included this method as well, but only using the conventional (quasistatic) expressions for ⟨F⟩ [15,17] and the extrapolation by Hummer & Szabo [19], though one may derive analogous results from our theory, if so desired (see the Methods section of the article). Supplementary Fig. 1 provides an overview of the fit parameters thus obtained and their relative deviations from the true model parameters.

Supplementary Note 2: Microscopic Kramers rate for a cusp potential in the stiff-spring limit.
In [17], the work of DHS [14] and Friddle [15] is extended to explicitly account for a harmonic force transducer, thus relaxing the commonly made assumption that the pulling device is much softer than the intramolecular bond. Only a linear-cubic binding potential (i.e. the ν = 2/3 case in [14]) is considered in [17], but not the ν = 1/2 "cusp" scenario. To compare our results to the optimized version of the conventional steady-state approximation, we provide here a short derivation of a "cusp-optimized" counterpart to the results given in [17]. The thermal escape of a particle moving in an energy landscape over an effective energy barrier of height χβE ≫ 1, with χ = 1 + κx_b^2/2E, can be described using Kramers theory [14,26], with the escape rate depending exponentially on the effective barrier χE[1 − F(t)/F_c]^2; here F(t) = κy(t)/χ, F_c = 2E/x_b, and k_0 is the associated Kramers rate in the absence of the force transducer. The rate expression can be used to compute the accompanying RFD via eq. (1). For a force ramp, i.e. y(t) = ẏt and thus F(t) = Ḟt with Ḟ = const, the resulting RFD follows in closed form, and the mean and the variance of the distribution can be deduced along the lines of [15], where q, X and E_1(z) are defined as in the Methods section of the main text.

Supplementary Note 3: Experimental detection of "ballistic rupture events".

During the peer-review process the question was raised as to how one might measure, or even define, a "rupture" event when there is no longer an effective free energy barrier to stabilize the bound state. A step-like transition between two discrete, stable equilibrium positions (i.e., the time-dependent energy minimum x within the bound state and the location y(t) of the external force actuator) can arise only from the metastable free energy landscape pertaining to subcritical pulling forces F < F_c. Yet, even as the effective free energy barrier vanishes, the intramolecular binding potential U still underlies the combined effective potential U + V, and with it the strong variation in intramolecular forces F(x) = −U′(x) characteristic of a well-defined transition region around x_b. The question of how best to relate experimentally measured force traces to the first-passage time distribution at x_b is an (albeit important) implementation detail to be decided by the practitioner. Still, to illustrate the matter and to provide at least one possible working definition of "ballistic bond rupture", we consider a non-singular but sharp transition region characterized by finite slopes −U′(x ↑ x_b) = F_L and −U′(x ↓ x_b) = F_R. Evaluating the athermal dynamics of x(t) beyond the barrier, governed by γẋ(t) = F_R + κ[y(t) − x(t)] (8b-S), we find that, as we enter the ballistic regime (i.e., the pulling force at the time of rupture exceeds F_c = F_L), a friction-limited dynamic equilibrium ensues, with the steady-state force exerted by the external actuator saturating at F = γv − F_R. Depending on F_R, this implies either a force maximum at x = x_b or an ongoing increase in the pulling force that is, however, preceded by a detectable kink as x crosses the barrier position. Hence, the characteristic "rupture signature" seen in the time-resolved pulling force F(t) does not disappear immediately as loading rates increase beyond the critical loading rate, but instead vanishes gradually (as sketched in Supplementary Fig. 4), thus in principle allowing for experimental detection at all but infinite pulling speeds, v = ∞.
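As a companion to the direct Langevin integration invoked in Supplementary Note 1, the following is a minimal Euler-Maruyama sketch of rupture under a force ramp for the cusp potential U(x) = E(x/x_b)^2 with an absorbing boundary at x_b. All parameter values are placeholders chosen only so the script runs quickly; this is an illustration of the general scheme, not the authors' production code.

import numpy as np

rng = np.random.default_rng(1)

kBT   = 4.1            # pN nm, room temperature
E_b   = 10.0 * kBT     # barrier height E, pN nm
x_b   = 1.0            # barrier position, nm
D     = 770.0          # diffusivity, nm^2/s (placeholder)
gamma = kBT / D        # friction coefficient, pN s/nm
Fdot  = 1.0e6          # loading rate, pN/s (placeholder)
dt    = 1.0e-8         # time step, s

def rupture_force():
    # Overdamped dynamics gamma*dx = [-U'(x) + Fdot*t] dt + sqrt(2 kBT gamma) dW,
    # with U'(x) = 2 E_b x / x_b^2; rupture = first passage of x_b.
    x, t = 0.0, 0.0
    while x < x_b:
        drift = (-2.0 * E_b * x / x_b**2 + Fdot * t) / gamma
        x += drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        t += dt
    return Fdot * t

forces = [rupture_force() for _ in range(200)]
print(f"mean rupture force: {np.mean(forces):.1f} pN "
      f"(critical force F_c = {2.0 * E_b / x_b:.1f} pN)")

Histogramming many such first-passage forces across several decades of Fdot yields rupture force distributions of the kind analyzed in Supplementary Note 1.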
Melvin Twists of global AdS_5 \times S_5 and their Non-Commutative Field Theory Dual

We consider the Melvin twist of AdS_5 × S^5 under the U(1) × U(1) isometry of the boundary S^3 of the global AdS_5 geometry and identify its field theory dual. We also study the thermodynamics of the Melvin-deformed theory.

The Melvin twist, also known as the T-s-T transformation, is a powerful solution-generating technique in supergravity and string theories [1-6]. The procedure relies on having a U(1) × U(1) compact isometry along which one performs a sequence of T-duality, twist, and T-duality. The twist is an SL(2, R) transformation on the complex structure of the T-dual torus. As such, the Melvin twist can simply be thought of as an SL(2, R) transformation acting on the Kähler structure of the torus parameterized by the U(1) × U(1). Interesting closed string backgrounds, such as Melvin universes, null branes, pp-waves, and Gödel universes, can be constructed by applying the Melvin twist procedure to the Minkowski background. The construction reveals the hidden simplicity of these closed string backgrounds: they are dual to flat space. As a result, world-sheet sigma models for strings in these backgrounds are exactly solvable and have been studied extensively [7-12]. The same procedure can be applied to black p-brane backgrounds to construct various asymptotically non-trivial space-time geometries [13]. The Melvin twist applied to the Dp-brane background, followed by the near-horizon limit, gives rise to supergravity duals for a variety of decoupled field theories, 1 depending on the orientation of the brane and the Melvin twist. If both of the U(1) isometries are along the brane, one generally obtains a non-commutative field theory, typically with a non-constant non-commutativity parameter [14-19]. If one of the U(1)'s is transverse to the brane, one obtains a dipole field theory [20-22]. Taking both of the U(1)'s to be transverse to the brane gives rise to the construction of Lunin and Maldacena [23]. The list of models constructed along these lines is summarized in Table 1. These theories are S-dual to NCOS theories [24,25]. They are also closely related to the "Puff Field Theory" studied recently in [26,27]. The hidden simplicity of Melvin twists in the context of gauge theory duals manifests itself as the preservation of integrability. The fact that q/β-deformed N = 4 SYM remains integrable was pointed out in [28,29], and a broader class of integrable twists was studied in [30,31]. In this article, we consider the effect of twisting along the U(1) × U(1) ⊂ SO(4) isometry of the S^3. More specifically, we consider the AdS_5 × S^5 solution of type IIB theory, whose radius is set by the 't Hooft coupling λ.

Table 1: Catalog of non-commutative gauge theories viewed as world-volume theories of D-branes in an "X" Melvin "Y" twist background. This table originally appeared in [18].

Precisely a deformation of this type was studied in [30] and, as those authors suggested, it is quite natural to interpret this background as being dual to a non-commutative deformation of N = 4 SYM on R × S^3, with a Moyal *-product in the (φ_1, φ_2) angles. This interpretation fits naturally with the established patterns seen in other non-commutative field theories [14-19]. The naturalness of this interpretation is also echoed in [32]. There is, however, a problem in making this identification more precise.
The gauge/gravity dualities are motivated by the complementarity of the black D3-branes of string theory in various regimes of the 't Hooft coupling λ [33]. This allowed for an explicit analysis of the physics of the open string degrees of freedom, which gave rise to a concrete realization of non-commutative dynamics in the appropriate decoupling limit. The U(1) × U(1) isometry which we exploited in constructing the χ deformation is an isometry of the near-horizon AdS_5 × S^5 geometry but not of the full D3-brane geometry. This makes a direct analysis of the open string dynamics from the world-sheet point of view, along the lines of [34], impossible. We will show in this article that embedding into the full D3 geometry is still possible, by exploiting the underlying SL(2, Z) T-duality structure of the (φ_1, φ_2) torus. This is the string-theoretical manifestation of the Morita equivalence of non-commutative field theories. To take advantage of this duality, it is useful to restrict to the case where χ is a rational number. Then there exists an SL(2, Z) transformation which removes the non-locality. Since this SL(2, Z) dual is a local theory, it is the description most suitable for exploring the deep UV behavior [35]. The SL(2, Z) structure in fact gives rise to a self-similar phase diagram similar to the fundamental domain of the moduli space of a torus. Similar structures have been shown to arise in NCOS [36] and PFT [27] theories as well. Since the rational numbers are dense, this will suffice for the purpose of identifying the field theory dual of (5); in other words, we can use the fact that the effective theory in the IR region of the phase diagram depends smoothly on χ. Let us suppose, for the sake of concreteness, that χ = s/p for relatively prime integers p and s. Then one can find integers r and q such that qs − pr = 1. Acting on the Kähler structure ρ′ of the background (5) with this SL(2, Z) transformation brings the supergravity background to a form in which φ_1 and φ_2 are periodic with respect to 2π. After a change of variables, this solution is recognizable as a Z_p × Z_p orbifold of AdS_5 × S^5 with pN units of RR flux threading the S^5. This type of orbifold, acting on the AdS_5 sector of the geometry, was first considered in [37]. Now, this solution is no easier to embed in the full D3 solution, so that its dynamics can be interpreted from the open string point of view, than (5), because of the orbifolding with respect to the Killing vectors (14). However, its covering space is simply AdS_5 × S^5 with some exact B-field, which is easier to embed into the D3 geometry. In order to explore the embedding into the full D3 geometry, it is convenient to first go to the Poincaré coordinates of the AdS_5 × S^5 geometry. This can be accomplished by recalling the two different ways of parameterizing the embedding hyperboloid, which implies a map (16) between the two coordinate systems. In terms of the Poincaré coordinates, the supergravity background takes on a simple form, with the B-field taking the form (18); the fact that dB = 0 ensures that the AdS_5 × S^5 solution is unperturbed. After a suitable rescaling of the coordinates, it is possible to extend this solution to the full D3 geometry while continuing to let the B-field have the form (18), which continues not to back-react. The large-r limit of B then suggests that the covering space of (12) is interpretable as N = 4 gauge theory with a background field in the decoupling limit.
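The integers q and r above are just Bézout coefficients of the coprime pair (s, p), so they can be produced with the extended Euclidean algorithm. The sketch below is purely illustrative of that arithmetic (the normalization qs − pr = 1 is the reconstruction used above); it is not code from the paper.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def twist_integers(p, s):
    # For coprime p, s (deformation parameter chi = s/p), find q, r
    # with q*s - p*r = 1, as needed for the SL(2, Z) transformation.
    g, a, b = egcd(s, p)
    assert g == 1, "p and s must be relatively prime"
    return a, -b  # q = a, r = -b, since a*s + b*p = 1

p, s = 5, 3
q, r = twist_integers(p, s)
print(q * s - p * r)   # 1
print((q * s) % p)     # 1, i.e. q*s = 1 (mod p)

Note that q is precisely the inverse of s modulo p, which is exactly the condition qs ≡ 1 (mod p) that enters the twisted boundary conditions quoted at the end of this article.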
It is straightforward to verify that the equations of motion and the Bianchi identity for the gauge fields are satisfied. However, since the flux is fractional, it must be interpreted as giving rise to a 't Hooft flux [38]. Our remaining task, in addressing our original motivation, is to work out the implication of (24) for identifying the field theory dual of (5). To facilitate this, it is useful to first work out the map which relates the coordinates on the boundary of global AdS_5 to those on the boundary of Poincaré AdS_5. This is achieved by taking the large-u limit of (16). Since we will ultimately compactify along the isometry vectors (14), it is instructive to see how these vectors are oriented in the Poincaré coordinates. We illustrate in figure 1 the contours of fixed τ and fixed φ̃_2 in the θ = 0 hypersurface. It is also useful to specify the metric of the space on which the field theory is defined. Starting with the round metric on R × S^3, ds^2 = R^2 [dτ^2 + dθ^2 + sin^2 θ dφ_1^2 + cos^2 θ dφ_2^2] (27), and applying (26) maps this to a conformally flat metric. Therefore, in order to interpret (12) as a field theory on S^3 with a round metric, we should start with (24) on the flat Minkowski metric, apply a conformal transformation, and follow it with a diffeomorphism with respect to the map (26). Luckily, gauge fields have conformal scaling dimension zero [39], so F is invariant under the conformal transformation. We therefore conclude that (12) is dual to N = 4 theory with a background 't Hooft flux, with the coordinates φ̃_i periodic under shifts by 2π/p. The fact that φ̃_1 and φ̃_2 are periodic with respect to shifts by 2π/p implies that the flux quanta n_1 and n_2 must be integer multiples of p. However, in the presence of a fractional flux [40,41], the p × p degrees of freedom in the adjoint of SU(pN) split into adjoints of SU(N) in a box whose size is larger by a factor of p [42,43]. The non-commutative algebra of the p × p adjoint degrees of freedom is precisely isomorphic to the Moyal algebra with rational dimensionless non-commutativity parameter, as was shown, e.g., in [44,45]. These arguments are also reviewed in more detail in the appendix.
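The rational-θ Moyal algebra mentioned here has a concrete finite-dimensional realization in terms of 't Hooft's clock and shift matrices, which generate the p × p matrix algebra and obey the Weyl commutation relation. The numerical check below is an illustration of this standard construction, not a computation from the paper's appendix.

import numpy as np

p = 5
omega = np.exp(2j * np.pi / p)

# Clock matrix U and shift matrix V: together they generate all p x p
# matrices and satisfy the Weyl relation U V = omega V U.
U = np.diag([omega**k for k in range(p)])
V = np.roll(np.eye(p), 1, axis=0)  # cyclic shift of the basis vectors

print(np.allclose(U @ V, omega * (V @ U)))  # True

Identifying U and V with exponentials of the two non-commuting coordinates reproduces a Moyal torus whose dimensionless non-commutativity is proportional to 1/p, mirroring how the p × p adjoint blocks encode the rational deformation.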
What this means is that one expects the supergravity background to be unstable to decay, and for the field theory side to suffer from runaway vacua. However, the fact that the supergravity background considered in this article does satisfy the classical equation of motion implies, as was the case for various non-supersymmetric orbifolds [47], that the effects of instability are subleading in 1/N expansion. One could also imagine our analysis for ξ 1 and ξ 2 in AdS 5 × S 5 which preserves some fraction of supersymmetry, such as choosing the ξ 1 to be along the Hopf fiber of S 3 , and ξ 2 to be along the Hopf fiber of the S 3 of SO(4) ∈ SO(6). More specifically, parameterize the metric of AdS 5 × S 5 by coordinates where dΩ 2 5 = dα 2 + cos 2 αdβ 2 + sin 2 αdΩ 2 with and set ξ i = ∂ φ i . Performing a Melvin twist by the amount χ will give rise to a geometry which is to be interpreted as an example of a dipole field theory [20,21]. If the deformation parameter takes on a rational value χ = s/p, this geometry can be mapped, via an SL(2, Z) transformation, to (AdS 5 /Z p ) × (S 5 /Z p ) geometry with torsion preserving 1/4 of the original supersymmetry and should be stable. Other possible Killing vectors along which one can compactify and or twist preserving some fraction of supersymmetries can be found, e.g., in [48][49][50][51]. Along lines similar to [16], many of these constructions would constitute a laboratory for exploring issues of string theory in time dependent backgrounds. The period of t coordinate is given by and the boundary is conformal to S 1 × S 3 with periods β and R = b, respectively. One can then perform the χ deformation on this background, giving rise to a new background where we have changed coordinates to match the asymptotic behavior of (1) and dΣ 2 = dθ 2 + sin 2 θdφ 2 1 + cos 2 θdφ 2 2 1 + λχ 2 cos 2 θ sin 2 θ sinh 4 ρ (43) Just as in the undeformed case, the use of Schwarzschild black hole solution suffers from the Hawking-Page transition at low temperatures, but for T > 1/R, it follows from the standard reasoning that the entropy being proportional to the area of the horizon in the Einstein frame, is unaffected by χ. Adjoint scalars in such a background will satisfy the boundary condition Treating the action to the quadratic order, the plane wave solution with this boundary condition is where and where ω = e 2πim 1 s for qs ≡ 1 (mod p). The energy and momentum carried by these modes (see (15)- (17) of [43]) are which in light of (48) is identical to that of a single degree of freedom in a box of size pL, rather than p 2 degrees of freedom in a box of size L.
The effect of preferred music on mental workload and laparoscopic surgical performance in a simulated setting (OPTIMISE): a randomized controlled crossover study

Background: Worldwide, music is commonly played in the operating room. Reported effects of music on surgical performance vary, while its effect on mental workload and key surgical stressor domains has only sparingly been investigated. Therefore, the aim was to assess the effect of recorded preferred music versus operating room noise on laparoscopic task performance and mental workload in a simulated setting. Methods: A four-sequence, four-period, two-treatment, randomized controlled crossover study design was used. Medical students, novices to laparoscopy, were eligible for inclusion. Participants were randomly allocated to one of four sequences, which determined the order of exposure to music and operating room noise during the four periods. Laparoscopic task performance was assessed through motion analysis with a laparoscopic box simulator. Each period consisted of ten alternating peg transfer tasks. To account for the learning curve, a preparation phase was employed. Mental workload was assessed using the Surgery Task Load Index. This study was registered with the Netherlands Trial Register (NL7961). Results: From October 29, 2019 until March 12, 2020, 107 participants completed the study, with 97 included in the analysis. Laparoscopic task performance increased significantly during the preparation phase. No significant beneficial effect of music versus operating room noise was observed on time to task completion, path length, speed, or motion smoothness. Music significantly decreased mental workload, reflected by a lower total weighted Surgery Task Load Index score and lower scores in all but one of the six workload dimensions. Conclusion: Music significantly reduced mental workload overall and in several previously identified key surgical stressor domains, and its use in the operating room is reportedly viewed favorably. Music did not significantly improve the laparoscopic task performance of novice laparoscopists in a simulated setting. Although varying results have been reported previously, it seems that surgical experience and task demand are more determinative. Electronic supplementary material: The online version of this article (10.1007/s00464-020-07987-6) contains supplementary material, which is available to authorized users.

Worldwide, music is commonly played during surgery [1]. Perioperative music has been extensively investigated in adult surgical patients, with several beneficial effects [2-4]. However, no definitive conclusion on the effect of music on surgical task performance can currently be drawn due to conflicting study results, inconsistent data-reporting methods, and varying study designs in previously published studies [5]. To date, all these studies have been conducted in a simulated setting [5], as surgical performance in a simulated setting correlates with performance during actual real-world surgery and influences postoperative patient outcomes [6-9]. It is unclear whether the reported beneficial effects of music on surgical performance are due to an auditory stimulus as such rather than music per se, as all but one [10] of the previous studies used silence as a control [11-18].
Given that high noise levels are commonly prevalent in the operation room (OR) [19], it could be argued that silence is not an appropriate control when evaluating the effect of music on surgical performance. Some surveys have shown that music is well liked by surgical personnel and can improve focus during surgery [1], while others mentioned that it can be distracting and reduce vigilance [20,21]. Music during surgery could therefore influence mental workload, which can be defined as the balance between the attentional resources a surgical task demands and those that remain available when needed. Increased mental workload is associated with decreased surgical task performance [22]. While perioperative music has a significant beneficial attenuating effect on the physiological stress response in adult surgical patients [3], its effect on mental workload and stress while performing a surgical task has only sparingly been investigated [23]. Laparoscopic surgery requires different skills compared to conventional open surgery due to the use of long instruments and the fulcrum effect, two-dimensional screen visualization which can impair depth perception, and limited tactile feedback [24]. Therefore, simulation using either a box trainer or virtual reality is increasingly used to provide a safe environment for the early learning curve phase. Successfully completing the Fundamentals of Laparoscopic Surgery program is required to become board certified as a general surgeon in the United States [25]. The competencies acquired in a simulated setting seem to be transferable to the real-world setting, with favorable effects on skill, knowledge, and patient outcome [6,9,26]. The purpose of this randomized crossover study is to investigate the effect of participant-selected recorded music versus recorded OR noise on laparoscopic task performance, mental workload, and heart rate variability (HRV) in a simulated setting. Materials and methods This study was approved in September 2019 by the Medical Ethics Committee Erasmus MC (MEC-2019-0537) and prospectively registered with the Netherlands Trial Register (Trial NL7961). The study was performed in accordance with the ethical standards of the Helsinki Declaration of 1975. No study protocol amendments were required. Reporting adhered to the 2010 Consolidated Standard of Reporting Trials (CONSORT) extension for randomized crossover trials [27]. Study design A study procedure timeline overview is presented in Fig. 1. (Fig. 1 caption: Study procedure overview timeline. Depending on the sequence, consisting of 4 periods, participants were exposed to either music or operation room noise. Preparation phase = 30 alternating peg transfer tasks; period = 10 alternating peg transfer tasks; M = exposure to recorded, participant-selected music; C = exposure to recorded operation room noise; SURG-TLX = Surgery Task Load Index; HRV = heart rate variability, measured continuously throughout the experiment.) A four-sequence, four-period, two-treatment, randomized controlled crossover study design was used to investigate the effects of recorded, participant-selected music versus recorded OR noise on laparoscopic task performance, mental workload, and HRV. Medical students who were novices to laparoscopy and provided written informed consent were eligible for study participation. Severe hearing impairment, visual impairment, a physical handicap that impairs laparoscopic task performance, or use of cardiac medication were exclusion criteria. Participants were instructed to bring music they would like to listen to while performing a laparoscopic task and to abstain from alcohol for 12 h prior to the experiment. The 10 min OR noise recording was selected from a list by three authors (VF, PO and JJ) with prior surgical experience in the OR to represent noise during a routine laparoscopic surgical procedure (i.e., no orthopedic drilling noise).
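To make the crossover design concrete, the sketch below simulates an envelope-based allocation to four sequences with equal numbers per sequence, as described later in the Methods. The four specific period orders are not spelled out in the paper, so the counterbalanced orders used here (and the envelope count of 26 per sequence, derived from the total sample size of 104) are illustrative assumptions.

```python
import random

# Hypothetical counterbalanced period orders over four periods
# (M = music, C = OR noise); the paper does not list the actual four.
SEQUENCES = ["MCMC", "CMCM", "MCCM", "CMMC"]

def prepare_envelopes(total: int = 104, seed: int = 1) -> list[str]:
    """Build and shuffle a stack of sealed 'envelopes', capping the
    number per sequence so that allocation stays 1:1:1:1."""
    per_sequence = total // len(SEQUENCES)  # 26 envelopes per sequence
    envelopes = [seq for seq in SEQUENCES for _ in range(per_sequence)]
    random.Random(seed).shuffle(envelopes)
    return envelopes

envelopes = prepare_envelopes()
print(envelopes[:5])  # period orders drawn by the first five participants
```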
Laparoscopic task performance was assessed with a validated, custom-made laparoscopic box simulator using the peg transfer task [23], during which a blue and a red peg are moved with a grasper forceps to a predefined location shown on a monitor. This task is part of the Fundamentals of Laparoscopic Surgery program for surgical residents in the United States [28]. Motion data to assess laparoscopic task performance were captured using a Leap Motion Device (LMC, Leap Motion Inc., LM-010), a compact sensor modified and customized for motion analysis, connected to a computer with monitor, and a webcam (Gemini Gembird) functioning as the camera with a frame rate of 60 Hz. Motion data were processed using a custom-made software program (OCRAM technologies) combined with Python version 2.7. After signing the informed consent form, a chest band was fitted to continuously measure HRV throughout the entire experimental session [29]. A custom demographic questionnaire evaluating music importance and preferences, listening to music while studying, and whether a musical instrument is or was played, was filled out. Participants were randomly allocated using the sealed envelope method and a 1:1:1:1 allocation ratio to one of four sequences. Each sequence consisted of a preparation phase followed by two periods of recorded, participant-selected music and two periods of recorded OR noise, with the order of exposure determined by the aforementioned randomization. To account for the learning curve, all participants completed a preparation phase consisting of 30 peg transfer tasks, alternating between the right and left hand (i.e., the first peg transfer was performed using the right hand, the second using the left hand, the third using the right hand again, and so on), as it was previously observed that the learning curve flattened after 20 repetitions [23]. During each period, 10 alternating peg transfer tasks were performed while listening to either music or OR noise through noise-canceling headphones (Bose Quietcomfort 35ii). Volume was adjusted by the participant at the start and then kept constant for the entire experiment. The Surgery Task Load Index (SURG-TLX) questionnaire evaluating mental workload was filled out after the preparation phase and after each period, five times in total, which also provided a washout period of at least several minutes between periods. Outcome parameters The primary outcome measure was time to task completion, defined as the time in seconds (s) required to complete a 10 peg transfer task period, consisting of alternating peg transfers with the dominant and non-dominant hand. Time to task completion of the peg transfer task is the main score attribute in the Fundamentals of Laparoscopic Surgery program [28]. Furthermore, path length, the total distance traveled in millimeters (mm) by the instrument tip; speed, the ratio of path length and time to task completion (mm/s); and motion smoothness, the normalized jerk or the rate of instrument tip acceleration change (mm/s³), were measured. To additionally assess the benefit of the preparation phase, motion analysis of its first 10 peg transfers was compared with its last 10 peg transfers. Mental workload was assessed using the SURG-TLX, an adapted version of the National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire validated in laparoscopic surgery [30].
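These motion parameters can all be derived from the sampled instrument-tip trajectory. The sketch below is a minimal reconstruction assuming an array of tip positions in millimeters sampled at the 60 Hz frame rate reported above; it is not the study's actual OCRAM pipeline, and the exact jerk normalization used is not stated in the paper, so a common dimensionless form is shown.

```python
import numpy as np

FS = 60.0  # sampling rate in Hz (the webcam frame rate reported above)

def motion_metrics(tip_xyz: np.ndarray) -> dict:
    """Derive the four motion outcomes from an (n_samples, 3) array of
    instrument-tip positions in mm, sampled at FS Hz."""
    dt = 1.0 / FS
    duration = (len(tip_xyz) - 1) * dt                        # time to task completion (s)
    steps = np.diff(tip_xyz, axis=0)                          # frame-to-frame displacements (mm)
    path_length = float(np.linalg.norm(steps, axis=1).sum())  # total distance traveled (mm)
    speed = path_length / duration                            # path length / time (mm/s)

    # Motion smoothness via jerk (third derivative of position).
    velocity = np.gradient(tip_xyz, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    integrated_sq_jerk = float((np.linalg.norm(jerk, axis=1) ** 2).sum() * dt)
    # One common dimensionless normalization of integrated squared jerk.
    normalized_jerk = np.sqrt(0.5 * integrated_sq_jerk * duration**5 / path_length**2)

    return {"time_s": duration, "path_mm": path_length,
            "speed_mm_s": speed, "normalized_jerk": normalized_jerk}
```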
The SURG-TLX is a weighted questionnaire assessing six dimensions of workload (mental demands, physical demands, temporal demands, task complexity, situational stress and distractions) on a visual analog scale (VAS); it was filled out by all participants after the preparation phase and after each period. Heart rate and HRV, defined as the variation in time between successive heartbeats (NN intervals), were measured continuously from the start of the preparation phase until the end of the experiment using the commercially available, validated BM-CS5EU wireless chest band (BM innovations, Acentas GmbH) [29]. Short-term HRV measurements [31], lasting approximately five minutes during each of the four periods as well as during the first five and last five minutes of the preparation phase, were analyzed (ATS 2.4.6., BM Innovations). HRV can represent the physiological state of autonomic nervous system activity and has been used to assess mental strain in surgeons during laparoscopic task performance [32]. A lower HRV indicates dominance of the sympathetic nervous system and has been interpreted as higher mental strain. HRV was quantified using the time-domain variable standard deviation of all NN intervals (SDNN) in milliseconds (ms). Blinding and data analysis Participants in this experiment obviously could not be blinded. Headphones were employed partly to blind the research assistant overseeing the experiment. However, as participants brought their preferred music on different devices, transferring the music to the laptop that contained the OR noise recording was impractical; the music intervention was therefore played directly from the participant's phone or music player connected to the headphones. Although the research assistant was separated from the participant by an opaque screen during the experiment in order to reduce any influence as far as possible, the assistant was not considered to be blinded. All questionnaires were completed by the participants themselves via a secure, computerized form and were therefore not administered by the research assistant. Heart rate and HRV data were processed through a validated software program. Motion data analysis was computerized using a software script validated in previous studies [23], and the person responsible for data retrieval and preparation for analysis was blinded to the allocation sequence. All data were analyzed only after the last participant had completed the experiment. Data were statistically analyzed using the IBM Statistical Package for the Social Sciences (SPSS) version 24.0. Data were presented as mean and standard deviation (SD) if normally distributed, and as median and interquartile range (IQR) if not. Normality was assessed using the Kolmogorov-Smirnov test and visually in Q-Q plots. Continuous variables were compared using a paired-samples t test or Wilcoxon signed rank test, as appropriate. Within-subject differences were presented by subtracting the control (OR noise) value from the intervention (music) value.
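A minimal sketch of that paired, within-subject comparison, assuming one aggregated outcome value per participant per condition (for example, mean time to task completion over the two music periods and over the two OR noise periods); the normality-based choice between the paired t test and the Wilcoxon test mirrors the description above, while the function and variable names are illustrative.

```python
import numpy as np
from scipy import stats

def compare_conditions(music_vals, noise_vals, alpha: float = 0.05) -> dict:
    """Paired within-subject comparison of music vs OR noise.

    music_vals / noise_vals: one aggregated outcome value per
    participant per condition, in the same participant order."""
    music = np.asarray(music_vals, dtype=float)
    noise = np.asarray(noise_vals, dtype=float)
    diffs = music - noise  # within-subject differences (music minus control)

    # Choose the test based on normality of the paired differences,
    # a rough stand-in for the Kolmogorov-Smirnov / Q-Q check above.
    normal = stats.kstest(stats.zscore(diffs), "norm").pvalue > alpha
    if normal:
        test_name, result = "paired t test", stats.ttest_rel(music, noise)
    else:
        test_name, result = "Wilcoxon signed rank", stats.wilcoxon(music, noise)
    return {"mean_within_subject_diff": float(diffs.mean()),
            "test": test_name, "p_value": float(result.pvalue)}
```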
Categorical variables were presented as absolute number and percentage. Two-sided testing was used with statistical significance inferred at p < 0.05. Sample size calculation Based on our previous study using the same laparoscopic box simulator [23], an effect size of 0.3 was deemed clinically relevant. With alpha set at 0.05, power of 0.80 and two-sided dependent testing, 90 participants would be required. Given that there were four randomization sequences, we chose to set the sample size at 92 participants to allow for equal distribution among the sequences. Taking into account a 10% exclusion rate, the total sample size was set at 104 participants. Results From October 29, 2019 until March 12, 2020, 107 participants were recruited. Ten participants were excluded because of equipment failure at the start of the study. Motion analysis and mental workload assessment using the SURG-TLX were performed for all 97 participants who completed the study. Due to missing data, heart rate and HRV analysis was performed for 93 participants (Fig. 2). Demographic characteristics An overview of demographic characteristics of the full cohort (n = 97) can be found in Table 1. Median age was 20 years (IQR 18 to 21), with the majority of the medical students being in their first three years of study (77%), right-handed (85%), and female (57%). A little over half of the participants (54%) had experience with a musical instrument, with 31 (32%) currently playing and 21 (22%) having previously played an instrument. Music was deemed important in daily life, with a median numeric rating scale (NRS) score of 8 (IQR 7 to 8), and 68 (70%) participants listened to music while studying. Favorite genres while studying were classical (20%) and pop (16%), while 18% specified music that could not be classified under commonly described genres. Top music genres chosen for this experiment were pop (47%), classical (21%), and hip hop (9.3%) (Online Appendix A). Laparoscopic task performance Laparoscopic task performance improved significantly during the preparation phase. No significant difference between music and OR noise was observed for time to task completion, path length, speed, or motion smoothness. Additionally, there was no significant difference between music and OR noise in laparoscopic task performance parameters of the dominant and non-dominant hand when these were assessed separately. When assessing the participants who preferred to listen to music when studying (n = 68) as a separate group, no significant difference was observed. No difference was observed when taking experience with playing a musical instrument, or gender, into account (Table 4). Mental workload, heart rate, and HRV A significant beneficial effect of music was observed on mental workload, as the total weighted SURG-TLX score was significantly lower during music periods than during OR noise periods. In four participants (4.1%), heart rate and HRV data were not registered, and these data were therefore analyzed for 93 participants. None of the included participants had known cardiac diseases or arrhythmias or used any cardiac medication. Median duration of HRV measurement over the experiment as a whole was 4.25 min (IQR 3.59 to 5.11; 93 measurements per period). Of the 372 total heart rate and HRV measurements, 173 (47%) lasted at least 4.5 min, 166 (45%) between 3.5 and 4.5 min, and 33 (8.9%) below 3.5 min. Heart rate was slightly higher, and HRV slightly lower, during music periods (Table 3). Discussion This randomized controlled crossover study with the largest sample size to date assessed the effect of participant-selected recorded music on laparoscopic task performance and mental workload in a simulated setting.
No statistically significant beneficial effect of participant-selected music on laparoscopic task performance was observed when compared with OR noise in novice laparoscopists. Previous studies, all performed in a simulated setting, reported varying results [5]. Two studies with a similar study design and comparable tasks by the same lead author evaluated the effect of music on laparoscopic task performance. A beneficial effect on task accuracy was observed in expert surgeons [11], but not in junior residents with no previous laparoscopic experience [12]. No beneficial effects were observed in junior novice surgeons asked to perform part of a laparoscopic cholecystectomy [15], nor in 12 surgeons with varying experience placing laparoscopic knots [10]. Although considered a basic skill, laparoscopic knot tying is reportedly the most difficult laparoscopic skill to master [33,34]. In the aforementioned studies, music preselected by the research team was used. A positive trend between likability of the music and a beneficial effect was noted [15]. In practice, it seems unlikely that surgeons would listen to music they do not prefer; indeed, surgeons chose the music played in the OR in a majority of cases [35][36][37]. Hence, participant-selected preferred music, which we believe to be more clinically relevant, was used. In our previous study, we observed a significant beneficial effect of participant-selected music versus silence on time to task completion (4.68%, p = 0.037) and path length (6.35%, p = 0.019) in 60 medical students. Surgical experience level was comparable, as they were also novices to laparoscopy, and a similar study setup was employed, although the modified peg transfer task was performed only 5 times and solely with the dominant hand [23]. It could be argued that the different task performance results compared to this study can partly be attributed to a more demanding task, reflected in a higher SURG-TLX score and heart rate in the present study [23]. Therefore, given the previously mentioned studies, it might be that, depending on experience and task complexity, music is beneficial when the surgical task is relatively easy and manageable, but that this effect disappears when the motor task is more difficult and increasingly demanding on mental workload. An important component during laparoscopic surgery, besides motor task execution and performance, is the cognitive decision making that determines which motor steps should be executed. Reducing mental workload, often reported as stress by the surgeon, will leave more mental resource capacity for both components [38]. Indeed, laparoscopic task performance has been correlated with stress experienced by the surgeon [39], with identified key stressors such as time pressure, noise, and distractions impairing dexterity and increasing error rates [40]. Mental workload assessed using the SURG-TLX questionnaire was significantly reduced by music, which was especially profound in the domains mental demands (within-subject difference -5.0, p < 0.001) and distractions (within-subject difference -10.0, p < 0.001), and reflected to a slightly lower but still significant degree in temporal demands (within-subject difference -2.5, p = 0.010). While secondary outcome measure results should always be interpreted with caution, these findings mirror our previous study, which also observed a beneficial effect of music on mental workload during laparoscopic task performance [23].
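For reference, the total weighted SURG-TLX behind these domain scores can be computed as below. This is a sketch following the pairwise-weighting convention of the NASA-TLX from which the SURG-TLX is adapted; the exact rating scale and scaling used in the study are not given in the text, so the 0-20 VAS range mentioned here is an assumption.

```python
from itertools import combinations

DIMENSIONS = ("mental demands", "physical demands", "temporal demands",
              "task complexity", "situational stress", "distractions")

def weighted_surg_tlx(vas: dict, pair_winners: dict) -> float:
    """Total weighted SURG-TLX score.

    vas: maps each dimension to its VAS rating (assumed 0-20 here).
    pair_winners: maps each of the 15 dimension pairs (in the order
    produced by combinations(DIMENSIONS, 2)) to the dimension judged
    more important; each win adds 1 to that dimension's weight (0-5),
    as in the NASA-TLX weighting scheme."""
    weights = {d: 0 for d in DIMENSIONS}
    for pair in combinations(DIMENSIONS, 2):
        weights[pair_winners[pair]] += 1
    total_weight = sum(weights.values())  # always 15
    return sum(weights[d] * vas[d] for d in DIMENSIONS) / total_weight
```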
Previous surveys also observed generally favorable responses towards the use of music by surgeons, especially in regard to stress [36,41,42]. Although reporting bias cannot be entirely ruled out, the SURG-TLX follows the trend of objective parameters like salivary cortisol levels [43]. HRV also seems to be an adequate method to assess mental surgical stress [44]. While heart rate was statistically significantly higher (within-subject difference 1.0 bpm, p = 0.046) and HRV lower (within-subject difference -2.5 ms, p = 0.015) in the music group, the absolute differences observed cannot be considered clinically relevant. It was expected that each period in this experiment would allow for short-term HRV analysis (nominal 5 min duration), but 54% of HRV measurements lasted 4.5 min or less. Given that the validity of ultra-short-term HRV analysis has been questioned [31], as well as the lack of correlation with mental workload in this study, interpretation of these results as a reflection of mental strain should be done with caution. Major strengths of this study were the largest sample size to date and the rigorous study design, which reduces a potential carryover effect [45]. While computerized randomization would have been preferable, we considered the risk of non-random allocation to be minimal. All participants acted as their own control. The research assistant overseeing the execution of the experiment had no incentive to influence allocation, as they had no information on the participant during the experiment, given that all questionnaires were filled out using a secure computerized form. These data were revealed and analyzed only after all inclusions had been completed. The envelope deciding the allocation sequence was drawn before the preparation phase, preventing any potential influence of this phase on the allocation sequence. A maximum envelope number per sequence, based on the sample size calculation, assured equal allocation. A previously validated, custom-made laparoscopic box trainer was used [23], with real surgical instruments and pegs allowing for realistic tactile sense and haptic feedback that is not provided by all virtual reality simulators. To get participants acquainted with the box trainer and eliminate the learning curve as the foremost potential biasing factor, a preparation phase was incorporated. The number of peg transfer tasks necessary for this was based on a previously conducted study with the same box trainer [23]. Its success is evident in the fact that time to task completion decreased rapidly during the preparation phase and remained almost constant during the experiment (under either treatment). Since previous studies did not employ a preparation phase, it is difficult to ascertain whether the previously reported effects partly reflect the learning curve. Moreover, participant-selected instead of researcher-selected recorded music was used, with the volume adjusted by the participants themselves, to more accurately represent the real-world setting, while recorded OR noise acted as a control instead of silence in order to account for auditory stimulation as a factor. Nonetheless, several limitations can still be observed. The peg transfer task was chosen because it does not require surgical knowledge that could potentially influence task performance. However, this task, with an average observed duration of approximately 3.5 min per period, is considerably shorter than any surgical procedure.
Still, earlier studies reported that even relatively simple, short-lasting tasks and drills like these can improve performance on relevant laparoscopic surgical tasks and should therefore not be disregarded [46,47]. We chose to perform the study in medical students who were inexperienced with laparoscopy in order to prevent previous experience from influencing laparoscopic task performance. Studies evaluating noise in the OR found higher subjective distraction levels in assisting surgeons with less experience compared to the main, more experienced surgeons [48], while the negative impact on clinical reasoning was lower when anesthesiological residents were more experienced [49]. It has been theorized that more experienced surgeons can block out noise and music more effectively [10], theoretically decreasing the potential effect size and increasing the required number of participants. It would therefore have been impractical to investigate the effects of music in more experienced residents or surgeons in such large numbers without the present data and our recently published study [23]. Finally, a major factor affecting teamwork in the OR is communication, with a considerable percentage of surgical errors involving communication between surgical personnel [50]; this factor could not be evaluated. These limitations make extrapolation of the observed results to the real-world setting less appropriate, limiting conclusions to a simulated setting. Although varying results regarding the effects of music on laparoscopic task performance have been reported, it seems that surgical experience and task demand can be more determinative. Future studies should take these factors into account and evaluate surgeons with different experience levels in a more lifelike setting. While several studies have evaluated the effect of music on laparoscopic task performance through short-lasting laparoscopic and surgical tasks [5], important elements like simulated surgical procedures, communication, and performance of the entire OR team have only sparingly been investigated [51,52]. The auditory intervention should preferably consist of music combined with OR noise versus OR noise alone through speakers, with the music chosen by both the surgeon and the OR team. Music did significantly reduce mental workload and several previously identified key stressors of surgery, and its use in the operating theater is reportedly viewed favorably. Higher perceived stress is associated with a decreased HRV even throughout the night, indicative of a protracted recovery time [44]. As music can attenuate the stress response in patients undergoing surgery, future research should also examine its effect on mental workload via HRV, with attention to recovery after surgical task performance. Conclusion In this four-sequence, four-period, two-treatment, randomized controlled crossover study of 97 laparoscopy novices, recorded preferred music significantly reduced mental workload overall and in key surgical stressor domains during laparoscopic task performance in a simulated setting when compared to OR noise, but no beneficial effect on task performance itself was observed.
Perioperative outcomes and type of anesthesia in hip surgical patients: An evidence based review Over the last decades the demand for hip surgery, be it elective or in a traumatic setting, has greatly increased and is projected to expand even further. Concurrent with demographic changes, the affected population is burdened by an increase in average comorbidity and serious complications. It has been suggested that the choice of anesthesia not only affects the surgery setting but also the perioperative outcome as a whole. Therefore, different approaches and anesthetic techniques have been developed to offer individual anesthetic and analgesic care to hip surgery patients. Recent comparative effectiveness studies utilizing population based data have given us novel insight into anesthetic practice and outcomes, showing favorable results for the use of regional versus general anesthesia. In this review we aim to give an overview of anesthetic techniques in use for hip surgery and their impact on perioperative outcome. While there remains a scarcity of data investigating perioperative outcomes and anesthesia, most studies concur on a positive outcome in overall mortality, thromboembolic events, blood loss and transfusion requirements when comparing regional to general anesthesia. Much of the currently available evidence suggests that a comprehensive medical approach with emphasis on regional anesthesia can prove beneficial to patients and the health care system. INTRODUCTION The increasing demand for hip arthroplasties over the last decades has sparked the creation of new and innovative anesthetic techniques and analgesic pathways with the goal of supporting the best possible outcomes among this frequently elderly patient population. As a result, different perioperative treatment pathways are available today to physicians and their patients. In this context, the focus has shifted to techniques based on regional anesthetic and analgesic approaches.
This trajectory has been fueled by a number of advantages, including effective, long-lasting and focused pain control, a decreased need for systemic analgesics, and earlier mobilization [1]. While traditional views of anesthetic interventions have seen them in a more supportive role, allowing for surgery to take place and alleviating pain postoperatively, an increasing body of literature has highlighted numerous beneficial effects of regional anesthesia beyond analgesia. In this context, the use of regional compared with general anesthesia has been associated with benefits such as lower mortality, reduced blood loss, fewer thromboembolic events, cardiopulmonary complications and infections, and favorable economic outcomes. However, evidence remains scarce and few publications comparatively review perioperative outcomes among different types of anesthesia in hip surgical patients. In this manuscript we focus on the available types of anesthesia for hip surgical patients and discuss their epidemiologic distribution. We aim to present and discuss common perioperative complications and to evaluate the literature with respect to different anesthetic and analgesic techniques and their impact on these outcomes, including medical and economic factors. ANESTHESIA TYPES Surgeries involving the hip joint have dramatically increased over time and are expected to continue to rise in incidence within the coming decades. Fueling these trends, among other factors, are their high success rate both in elective and in traumatic settings and the fact that the target population, including the elderly, is rapidly expanding. It is estimated that by 2030 the demand for primary total hip arthroplasties in the United States will grow by 174% to 572,000 procedures. Equally, the need for total hip revisions is projected to grow by 137%, more than doubling, in the same time frame [2]. Based on demographic changes and trends in the decades to come, the annual number of hip fractures worldwide has been projected to increase from 1.66 million in 1990 to 6.26 million in the year 2050 [3]. This is of special concern as, of all osteoporotic fractures, hip fractures have been identified as the most expensive fracture type as measured by hospitalization costs [4]. In addition, compounding the associated burden exerted on the health care system by sheer volume alone, recent trend data suggest an increase in the average comorbidity burden and in the incidence of many serious complications among hip surgical patients [5]. Therefore, any intervention that may impact perioperative outcomes is bound to profoundly affect the public health of entire countries. Epidemiologic information on the utilization of various types of anesthesia in hip surgical patients is more difficult to come by, as such information is not easily retrievable from data collection constructs. However, newer and more detailed databases have afforded researchers a rare glimpse of current anesthetic practice on a national level. A recent analysis of population based data, which included 382,236 patient records of primary hip or knee arthroplasty in the United States, showed that approximately 11% were performed solely under neuraxial, 14.2% under combined neuraxial-general and 74.8% under general anesthesia [6].
This shows that even today, despite a trend toward regional anesthesia, the majority of operations in the United States are carried out under general anesthesia alone. These percentages differ greatly between hospitals and likely among countries. While reasons for these findings have to remain speculative at this time, the choice of anesthesia might be based in part on historic developments and local or personal preferences. Similar disparities have been reported for the anesthetic care of hip fracture patients [7]. Information on the use of peripheral nerve blocks in hip arthroplasty patients is even scarcer, but it is likely that the proportion of patients receiving such interventions remains low. BENEFITS AND PITFALLS Despite some conflicting reports, a growing number of studies indicate that neuraxial anesthesia may prove beneficial to patients undergoing major joint replacement [8][9][10][11][12][13]. However, and as mentioned previously, neuraxial anesthetic techniques remain widely underutilized on a national level. Reasons for this underutilization remain speculative but include a number of practical and perception-based variables. As with all medical interventions, risks and benefits have to be taken into account when applying anesthetic and analgesic techniques in the context of one's practice. Below follows a brief discussion of commonly utilized approaches. Surveys of orthopedic surgeons primarily cited the perceived delay in achieving surgical readiness and a lack of reliability as barriers to wider acceptance of regional anesthesia. Still, most surgeons queried stated that they understand the benefits and support the use of these methods [14]. One reason many patients and physicians might be reluctant to use neuraxial anesthesia is the fear of urinary retention and bladder catheterization. Contrary to prior belief, it has been shown that patients undergoing hip arthroplasty have a low risk of urinary retention after neuraxial anesthesia, with no significant difference compared to general anesthesia [15]. The risk of epidural/spinal hematoma formation is also frequently quoted as a concern, although this event is arguably rare. In a series of over 100,000 patients undergoing orthopedic surgery with neuraxial anesthesia, only 8 of 97 patients reporting neurologic deficits were found to have an epidural blood or gas collection. All of these affected individuals were using at least one potentially coagulation-impairing medication, but only one took an antiplatelet drug. In this series no patient sustained lasting nerve damage. These data suggest a slightly higher risk of complication than in an obstetric surgery setting, where patients are younger and healthier [16]. Furthermore, it has been shown that peripheral nerve blocks are safe to use even in patients requiring thromboprophylaxis after joint arthroplasty. In approximately 7000 procedures among patients receiving warfarin, aspirin, fondaparinux, dalteparin and enoxaparin, no perineural hematomas were recorded for continuous lumbar plexus, femoral, and continuous or single-shot sciatic blocks [17]. The general neurological complication risk of a central nerve blockade has been reported to lie below 0.04% and the rate of neuropathy after peripheral nerve block below 3%, with even fewer cases leading to permanent nerve damage.
In fact, only one such case was reported in a review of 16 studies after peripheral nerve block, with sample sizes ranging from 20 to 10,309 blocks [18]. A number of specific regional anesthetic procedures have been described, all with advantages and pitfalls. The psoas compartment block has been described as analgesically potent as an epidural technique during hip surgery, but reports caution regarding the possibility of severe complications, with the main risk being intrathecal or intravasal application of cardiotoxic doses of local anesthetics. With the advanced use of ultrasound, however, these deep blocks may become even safer, and their role in an intraoperative setting during hip surgery will have to be further evaluated [19]. Due to the perceived risk involved in epidural, spinal or lumbar plexus blocks under anticoagulants, the femoral block has been developed as a possible alternative and has shown promising results for postoperative analgesia, but has been criticized as an impediment to early postoperative ambulation [20]. Some data suggest that a 4-d continuous lumbar plexus block may be compatible with successful postoperative ambulation. Recent studies, though, lacked the power to show statistically significant superiority compared with overnight use [21]. Further, it has to be noted that under peripheral nerve block, for example a continuous lumbar plexus block, the risk of postoperative falls seems to be increased compared to non-continuous or no block in patients undergoing major lower extremity orthopedic surgery. However, the attributable risk of 1.7% seems to be within the expected range after major orthopedic surgery [22]. In keeping with the trend of delivering anesthetic potency as close to the source of pain as possible, investigators have studied whether pain could be reduced in minimally invasive hip arthroplasty patients receiving spinal anesthesia and an epicapsular catheter delivering ropivacaine to the wound. This approach showed a statistically significant reduction in postoperative morphine intake compared to administration of a placebo agent [23]. To date, only a few trials have shown corresponding results, either by one-time local injection or continuous application, highlighting the fact that near-wound infiltration techniques warrant further studies for optimization. Due to the early stage of these techniques, no standard approaches or guidelines have been defined to date [24]. However, many different approaches to regional anesthesia have shown promising results in postoperative pain reduction [25]. In trying to provide guidance on best practices for hip surgical patients, the PROSPECT workgroup, focusing on procedure-specific postoperative pain management, has recommended the use of peripheral nerve blocks as the primary choice for postoperative pain management in patients undergoing total hip arthroplasty, followed by spinal or epidural anesthesia depending on risk factors and comorbidities. The newer local infiltration techniques still warranted a grade A recommendation where applicable, even though the choice of the proper intraoperative anesthetic method centers on the comorbidities of the individual patient, with postoperative analgesia considered a secondary concern [26].
OUTCOMES While traditionally viewed as a means to provide surgical conditions, increasing evidence suggests that the choice of anesthesia significantly impacts perioperative outcomes and may thus be viewed as a major component of any attempt to optimize patient care. Below follows a brief summary of the available evidence with respect to a number of important endpoints. In-hospital mortality and 30-d mortality Utilizing data from the UK collected between the years 2003 and 2011, a retrospective analysis of 90-d mortality in total hip replacements for osteoarthritis identified 4 major modifiable clinical factors for an improved outcome: a posterior surgical approach, mechanical and chemical thromboembolic prophylaxis, and spinal anesthesia. Positive changes in the management of these procedures are reflected in a steady decrease in 90-d mortality from 0.56% in 2003 to 0.29% in 2011 [27]. On the contrary, preexisting factors such as advanced age, male gender and a history of cardiorespiratory disease were associated with an increased risk of mortality within thirty days after elective hip arthroplasty [28,29]. Interestingly, a new study evaluating the impact of the type of anesthesia on joint arthroplasty patients in the US identified beneficial effects on major complications, including 30-d mortality, among all age groups of patients irrespective of comorbidity status, thus supporting the use of neuraxial anesthesia in all patient groups. Arguably though, the positive effect size was larger among older, sicker patients with cardiopulmonary diseases compared with younger, healthier patients [30]. Further, a population based comparative effectiveness study has shown a trend toward a reduction in 30-d mortality in hip arthroplasty with neuraxial compared to general anesthesia alone, with respective mortality rates of 0.2% and 0.3% [6]. This positive effect of neuraxial anesthesia could also be shown in patients after hip fracture, a condition typically affecting an elderly population [7]. Meta-analyses have shown that spinal anesthesia is associated with significantly reduced early mortality, fewer incidents of deep vein thrombosis, less acute postoperative confusion, a tendency to fewer myocardial infarctions, fewer cases of pneumonia, fatal pulmonary embolism and postoperative hypoxia. In this population, general anesthesia and respiratory diseases were identified as significant predictors of morbidity [31]. Partially because patients after traumatic injury struggle with a number of contributing complications, this patient population suffers from a significantly higher mortality risk. In recent studies, 30-d mortality has been reported as high as 13.3% and 3-6 mo mortality at around 15.8% in geriatric patients after hip fracture surgery. Indicators for this included advanced age, male gender, nursing home or facility residence, poor preoperative walking capacity, poor activities of daily living, higher ASA grading, poor mental state, multiple comorbidities, dementia or cognitive impairment, diabetes, cancer and cardiac disease. This extensive comorbidity burden helps to explain an overall mortality within 2 years of up to 34.5% [32]. In hip fracture patients, trials noted a beneficial outcome in patients receiving regional anesthesia, with the main benefit lying in reduced 1-mo mortality and incidence of deep vein thrombosis [13]. A recent comparative effectiveness trial of general versus regional anesthesia in hip fracture patients documented an in-hospital mortality rate of 2.4%, with lower adjusted odds of mortality and pulmonary complications in patients receiving regional anesthesia. The rate of patients operated on under regional anesthesia was, however, noted to be only 29%. In the subgroup analysis, regional (i.e., neuraxial) anesthesia proved to be especially beneficial in patients with intertrochanteric fractures, but no significant benefit could be shown in patients with femoral neck fractures [7]. Of interest may be that among elderly patients undergoing hip or knee surgery, neither general nor regional anesthesia seems to contribute to impairment of cognitive and functional competence [33]. Blood loss and transfusion need For many years it has been repeatedly noted that the type of anesthesia significantly impacts intra- and perioperative blood loss. These effects have primarily been attributed to hemodynamic differences, with lower and more stable blood pressures achieved through regional anesthesia resulting in less blood loss [34]. Others have suggested a negative effect of general anesthesia utilizing nitrous oxide in the anesthetic gas mix, hindering erythropoiesis during endogenous recovery of red blood cells, as a contributing factor [35]. Studies comparing spinal anesthesia with general anesthesia showed favorable results, noting a reduction of blood loss and transfusion requirement as well as higher postoperative hemoglobin levels on days 1 and 2 [36]. Since these differences have been reported to occur even at similar systemic blood pressures under either anesthesia, some authors have suggested differences in the distribution of blood flow caused by spontaneous versus positive pressure ventilation [37]. Especially in patients undergoing total hip replacement, the use of neuraxial anesthesia has shown a reduction in blood loss as well as transfusion rates [38]. The posterior lumbar plexus block has also been shown to be associated with reduced perioperative blood loss, perhaps in part due to its hemodynamically stabilizing pain control benefits and the related decrease in sympathetic discharge [39]. Researchers have speculated that hypothermia in patients might contribute to coagulopathies and might have an impact on perioperative blood loss. While some studies seem to affirm these effects, others have failed to show significant differences between normothermic and hypothermic patients. Until further studies have been conducted, it seems safe to strive for normothermic surgical patients [40]. Anesthesia generally affects body temperature, though neuraxial anesthesia seems to impair thermoregulatory control less than general anesthesia [41]. All in all, the reduction in blood loss and transfusion requirement associated with neuraxial anesthesia is one of the best established concepts. A previously discussed comparative effectiveness analysis showed a significant difference in blood product transfusion, with a 14% reduction for neuraxial versus general anesthesia. Neuraxial anesthesia showed beneficial outcomes even in combination with general anesthesia, with an increased risk for transfusion (odds ratio 1.4) after total hip arthroplasty for general anesthesia alone when compared to combined neuraxial/general anesthesia [6]. Thromboembolic events A number of pre-existing risk factors have been shown to be associated with the development of thromboembolic events after hip surgery, including a history of prior venous thromboembolism, obesity, delayed ambulation and female sex. Factors associated with lower risk were identified in Asian/Pacific Islander ethnicity, the use of pneumatic compression among non-obese patients after surgery, and extended thromboprophylaxis after hospital discharge. With these predisposing factors in mind, some chemical markers have helped to identify high-risk patients, including elevated plasma D-dimer and hyperlipidemia [42,43]. Many studies have shown differences in thromboembolic risks comparing the use of general versus neuraxial anesthesia [44]. Some authors suggest that the systemic effect of local anesthetics, as is seen during epidural anesthesia, might also lower surgery-induced hypercoagulation in patients, leading to the aforementioned favorable difference in thromboembolic events. In patients undergoing epidural anesthesia after major orthopedic surgery, coagulation parameters were reported as not significantly altered from baseline [45]. Observational studies have failed to this day to show differences in homeostatic markers under general or neuraxial anesthesia, leaving the reasons for the observed clinical differences to be discussed and studied [46]. Cardiopulmonary complications The most frequent causes of death in modern joint replacement surgery are related to cardiopulmonary complications, even when excluding pulmonary embolism [47]. From a cardiovascular perspective, it has been shown that the use of general anesthesia in combination with an epidural block increased the probability of patients experiencing clinically significant hypotension during anesthetic induction, as compared to patients receiving either anesthesia alone. Still, no differences in heart rate or frequency of bradycardia have been observed [35]. Recent population based data have failed to show differences in the risk for myocardial infarction in patients receiving general or neuraxial anesthesia. However, a 13% reduction in risk for non-ischemic cardiac events such as arrhythmias was noted [6]. From a pulmonary perspective, regional anesthesia has been shown to be the preferable type of anesthesia in hip fracture patients with COPD, and it also seems to be associated with fewer pulmonary complications in all hip fracture patients [7,48]. In patients undergoing total hip arthroplasty, neuraxial anesthesia showed a favorable outcome with respect to pulmonary complication risk, with an adjusted odds ratio of 3.34 for general anesthesia. Since this significant beneficial effect could not be shown when a combination of neuraxial and general anesthesia was used, the reduced need for airway instrumentation and mechanical ventilation, leading to less risk for aspiration, pneumonia or atelectasis, might be a possible underlying factor. Additionally, the reduction in postoperative opioid use might be a further reason for reduced pulmonary compromise and reduced utilization of critical care services [6,49]. Infections Surgical site infections are feared complications associated with significant morbidity and mortality [50]. After adjustment for influencing factors, the odds of surgical site infections have been reported to be 2.21 times higher in patients receiving general anesthesia when compared to epidural or spinal anesthesia [51].
The overall rate of infections (including surgical site and systemic) in elective hip surgery has been shown to be significantly increased, with an adjusted odds ratio of 1.45, when comparing general anesthesia with neuraxial anesthesia alone [6]. One explanation for the aforementioned effects may be that in vitro and in vivo experiments showed local anesthetics to modulate the inflammatory response. Since epidural administration of local anesthetics leads to blood levels close to those of intravenous application, a systemic effect of these local anesthetics has to be considered. There have been beneficial reports of the systemic use of local anesthetics in sterile inflammation. However, it has been hypothesized that with bacterial contamination this might lead to an increased risk of infection [52]. It has therefore been questioned whether neuraxial anesthesia is safe in patients with pre-existing infections such as an infected prosthesis. Studies showed that in these settings there was only a minimal risk of central nervous system infection based on clinical criteria [53]. Furthermore, no difference has been noted in cell-mediated or humoral immune response comparing spinal and general anesthesia [54]. Economic outcomes The international trend to reduce length of stay in surgical patients also applies to hip surgery. With multi-modal anesthesia, minimally invasive surgery and home rehabilitation, it has been shown that up to 44.4% of patients following total hip arthroplasty can be discharged within 24 h. Many patients can be discharged with indwelling peripheral nerve catheters, and up to three-quarters of these patients do not require outpatient or home nursing care. Negative predictive factors for early discharge seem to be female gender, increasing age, increasing estimated blood loss and ASA III or IV [55]. Concerns that complicated procedures may raise operating costs can be addressed by strategic and structural changes in the perioperative process, as has been shown with the use of an induction room adjacent to the operating room in which pre-operative neuraxial anesthesia is performed [56]. In contrast to perceived delays, operating times for total hip replacement were significantly reduced in patients receiving regional anesthesia [12]. Some studies argue that spinal anesthesia is associated with a benefit reflected in significant cost reduction, both in anesthesia times and in recovery, compared to general anesthesia in total hip or knee replacement operations [57]. When studying population data, results suggest a lower incidence of increased cost in neuraxial patients combined with a lower risk for prolonged length of stay [6]. Together with lower complication rates and decreased resource utilization (as expressed in lower intensive care unit utilization and need for mechanical ventilation), the economic benefits achieved with neuraxial anesthesia make a sound economic argument [49]. CONCLUSION Randomized controlled trials on the differential impact of the type of anesthesia on outcomes are rare, underpowered and often present single-institutional data from specialized institutions. Meta-analyses and population based comparative effectiveness studies, however, have shown that regional anesthesia seemingly improves perioperative outcomes in hip surgical patients. Most studies concur on a positive outcome in overall mortality, thromboembolic events, blood loss and transfusion requirements. Despite some criticisms of the retrospective nature of such population-based analyses, much of the currently available evidence suggests that a comprehensive medical approach with emphasis on regional anesthesia can prove beneficial to patients and the health care system.
Serum hyaluronan and collagen IV as non-invasive markers of liver fibrosis in patients from an endemic area for schistosomiasis mansoni: a field-based study in Brazil Schistosoma mansoni eggs are continuously trapped in the liver. They are surrounded by granulomatous inflammatory reactions that eventually lead to portal branch obstruction. The consequence is chronic deposition of collagen in periportal spaces, seen as fibrous plaques, while liver architecture is considerably preserved (Bogliolo 1957, Lambertucci 1993, Andrade et al. 1997, Lambertucci et al. 2001, Andrade 2004, Gryseels et al. 2006). The corresponding clinical manifestation, hepatosplenic schistosomiasis, includes portal hypertension and its complications: splenomegaly, collateral circulation, oesophageal varices and, ultimately, death from variceal bleeding. Abdominal ultrasound has been compared to liver biopsy and proved useful in the diagnosis of liver injury in schistosomiasis (Homeida et al. 1988, Abdel-Wahab et al. 1989). Considered a simple, safe and low-cost method, especially after the advent of portable equipment, it has been of invaluable help in the screening of populations living in endemic areas and in field-based studies of the disease. Taken as a nearly ideal tool in the diagnosis and classification of schistosomiasis, it is now used as a surrogate for the gold standard in the diagnosis of schistosomiasis-related liver fibrosis (Doehring-Schwerdtfeger et al. 1989, Abdel-Wahab et al. 1992, Richter et al. 1992, Abdel-Wahab & Strickland 1993, Pinto-Silva et al. 1994, Richter 2000, Kariuki et al. 2001, Cota et al. 2006, Marinho et al. 2006, Ruiz-Guevara et al. 2007). Fibrogenesis in the liver arises from the activation of hepatic stellate cells. The activated cell proliferates and undergoes phenotypic transformation into a myofibroblast, acquiring its migratory and secretory properties. In addition to collagenous and non-collagenous matrix components, myofibroblasts produce and secrete matrix degradation enzymes and pro-fibrotic cytokines, thus promoting environment modification, amplification and perpetuation of fibrogenesis (Gressner & Bachem 1990, Friedman 1993, Gressner 1998, Friedman 1999, 2000, Kisseleva & Brenner 2007). Stellate cell activation is a common response to various mechanisms of injury, like necrosis, inflammation or peroxidation. It is started and mediated by cytokines and growth factors produced by most cell types present in the liver: inflammatory cells, endothelial cells, platelets, injured liver parenchymal cells and previously activated stellate cells (Nakatsukasa et al. 1990, Friedman 1993, Blazejewski et al. 1995, Friedman 1999).
Evolving knowledge of liver fibrosis pathophysiology (Grimaud 1987, Friedman 1993), allied to advances in laboratory methods for quantification in body fluids, led to the identification of useful markers of liver fibrosis activity. Examples span different chemical classes, such as hyaluronan (HA), a glycosaminoglycan; the glycoproteins laminin and YKL-40; collagenous molecules such as collagen (C) types I, III, IV and V; the metalloproteinase enzymes and their tissue inhibitors; and the cytokine TGF-β. Many of them have been tried and have proved useful in the diagnosis and grading of liver fibrosis secondary to several conditions, including chronic viral hepatitis, alcoholic cirrhosis, non-alcoholic steatohepatitis and schistosomiasis (Stone 2000, Afdhal & Nunes 2004, Grigorescu 2006). However, their ability to identify liver fibrosis in schistosomiasis, when applied to population-based studies in endemic areas, deserves further investigation. In this paper, we compare the serum levels of two non-invasive markers of fibrosis, HA and C-IV, to the ultrasound diagnosis of liver fibrosis in a highly endemic area for schistosomiasis in the state of Bahia (BA) in Brazil. The main objective was to determine the value of those markers as a screening procedure to identify patients with liver fibrosis in areas endemic for schistosomiasis mansoni. PATIENTS, MATERIALS AND METHODS Patients - A total of 3,766 subjects from Brejo do Espírito Santo, a rural community of Santa Maria da Vitória, BA, Brazil, were examined every four years since 1976 by one of the authors of this paper (AP). The prevalence of schistosomiasis, determined by parasitological stool examinations (Katz et al. 1972), was 75% in 1976. After the implementation of disease control measures, a significant improvement occurred and the prevalence of schistosomiasis dropped to 1.8% by 2004 (Ruiz-Guevara 2005). In October 2004, 79 individuals were selected and invited to participate in this cross-sectional study. Subjects were submitted to clinical and ultrasound examination. Serum samples were obtained from all participants and stored at -20ºC until transportation to Belo Horizonte, Minas Gerais (MG), where processing took place. Besides HA and C-IV levels in serum, serological tests for viral hepatitis - HBsAg, anti-HBc (HBS and anti-HBc EIA, Medical Biological Service, Milano, Italy) and anti-HCV (Detect-HCV 3.0, Adaltis, Montreal, Canada) - were performed. Liver cirrhosis was excluded by ultrasound examination. Liver fibrosis was diagnosed and graded according to ultrasound classification following the World Health Organization (WHO) patterns for liver fibrosis in schistosomiasis mansoni (Niamey Working Group 2000). The information was stored in a databank especially elaborated for this study, using appropriate software (EpiData, http://www.epidata.dk). Physical examination - Physical examination was conducted by two of the authors of this paper (AP, JRL). Patients with discordant findings were re-examined and a consensus was reached in all cases. Abdominal palpation was performed with patients in dorsal decubitus, during deep inspiration. The liver and spleen were sought below the costal margins and, when palpable, their lengths were measured.
Ultrasound - Sonographic examination was conducted by a radiologist trained in the application of the Niamey protocol (Niamey Working Group 2000). One portable GE Logic Book (GE Healthcare, Chalfont St. Giles, UK) was used with a 2.5-5 MHz multifrequency convex transducer, which allows storage of raw data in Dicom format for future re-evaluation. Subjects were allocated to four groups according to ultrasound classification: (i) no fibrosis, including those classified as WHO pattern A; (ii) light fibrosis, composed of those classified as C, D or Dc; (iii) moderate fibrosis, with those classified as E or Ec; and (iv) intense fibrosis, with patients classified as F. HA and C-IV - The serum fibrosis markers were both tested using commercially available ELISA kits, in accordance with the manufacturer's recommendations (HA-ELISA and Collagen IV ELISA, Echelon Biosciences Inc, Salt Lake City, USA). Statistical analysis - Data analysis was performed using SPSS 12.0 for Windows (SPSS Inc, Chicago, USA, 2005). A significance level of 0.05 was adopted throughout the analysis. Continuous variables were described as mean (± standard deviation) or median (25-75%) and compared using Student's t or Kruskal-Wallis tests, as appropriate. Chi-square was used for comparison of categorical variables. Comparison between groups was accomplished using the ANOVA test after natural logarithm transformation of variables with non-normal distributions. Differences between groups were sought with the Tamhane test for non-parametric data. Global test accuracy and the cut-off value were determined through the analysis of the area under the ROC curve (AUC); a sketch of this kind of ROC-based cut-off analysis is given after the opening results below. Pearson and Spearman coefficients were employed for correlations, as appropriate. Linear regression modelling was employed for multivariate analyses. Clinical and ultrasound variables with significance levels up to 0.20 were included. The clinical variables included were as follows: liver to right costal margin, liver to xiphoid process and spleen to left costal margin distances. The ultrasound variables included were as follows: left and right liver lobe length, spleen length, portal, splenic and superior mesenteric vein diameters, periportal fibrosis (subjective evaluation), WHO patterns of fibrosis, gallbladder wall thickness, and periportal thickness in the hilum and on first and second order branches. Data were also adjusted for age and body mass index (BMI), based on literature evidence of HA variations with ageing and non-alcoholic steatohepatitis (Fraser et al. 1997, Sakugawa et al. 2005, Suzuki et al. 2005). Ethics - This study was approved by the Ethical Board of the Universidade Federal de Minas Gerais. All participants had given written authorisation at the time of inclusion in the study, following the recommendations contained in the Helsinki protocol (WMA 1964). RESULTS Of those included, 38 of the 79 individuals were male (47.5%). Ages ranged from 21-82 years (49 ± 13.4) and the mean BMI was 22.1 (± 3.0). Twelve (15%) were white (skin colour), as judged by the observer. Digestive bleeding and blood transfusions were more frequently reported in patients with more intense fibrosis (Table I).
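To make the ROC step referenced above concrete, the short sketch below shows how a global accuracy (AUC) and a diagnostic cut-off for a serum marker can be derived in Python. This is an illustration only, not the authors' code: the study used SPSS, the HA values and fibrosis labels here are invented placeholders, and the Youden index is assumed as the cut-off criterion, which the paper does not specify.

# Minimal sketch of ROC-based cut-off selection for a serum fibrosis marker.
# Illustrative only: the data below are invented placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical serum HA levels (ng/mL) and ultrasound labels
# (1 = any fibrosis, 0 = no fibrosis).
ha = np.array([80, 95, 110, 130, 150, 190, 240, 310, 420, 600])
fibrosis = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

auc = roc_auc_score(fibrosis, ha)              # global test accuracy
fpr, tpr, thresholds = roc_curve(fibrosis, ha)

# Youden index J = sensitivity + specificity - 1; the threshold
# maximising J is one common choice of diagnostic cut-off.
best = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {auc:.2f}, suggested cut-off = {best:.1f} ng/mL")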
Ultrasound - A comparison between groups of fibrosis showed statistically significant differences for the presence of collateral circulation and for spleen, portal, splenic and mesenteric vein diameters. In sum, six patients (2 with light and 4 with moderate fibrosis) had previously undergone a splenectomy, two patients (1 with moderate and 1 with intense fibrosis) had portal vein thrombosis, three individuals without fibrosis had steatosis and five (1 light and 4 moderate fibrosis) had a heterogeneous liver. In addition, one patient with moderate fibrosis had abdominal lymph node swelling. Clinical, epidemiological and ultrasound data are summarised in Table I. HA - Serum HA levels ranged from 72.3-1,074.9 ng/mL [median 152.6 (117.0-230.1) ng/mL]. A comparison of medians revealed statistically significant differences between individuals without fibrosis and all groups of fibrosis (p < 0.001), as well as between light and intense fibrosis (p = 0.029) (Fig. 1, Table II). HA had a positive and significant correlation with an ultrasonographic diagnosis of portal hypertension (Table III). Independent predictors of an increase in the HA level were periportal fibrosis (subjective radiologist evaluation), age and collateral circulation. The constant, variables, regression coefficients and significance levels are depicted in Table IV. C-IV - Serum levels of C-IV ranged from 234.8-1,549.4 ng/mL (798.5 ± 300.9 ng/mL). A comparison of means between groups did not show a statistically significant difference (p = 0.692). DISCUSSION In this study, patients with liver fibrosis presented high serum levels of HA. No correlation was found between C-IV levels and the presence or grade of fibrosis in our patients. In principle, serum levels of HA might not express fibrosis activity and extracellular matrix production in the liver. This point has been addressed by Torre et al. (2008), who tested serum samples collected simultaneously from the peripheral and hepatic veins during hemodynamic investigation of the portal venous system in 15 patients. The results showed an excellent correlation between hepatic and peripheral levels for HA (r = 0.971; p < 0.00001) and other markers. This information, together with the consistently good correlation of HA levels and fibrosis in imaging and histological studies, led us to accept serum levels as a marker of liver fibrosis in pertinent contexts. The serum HA level has been compared to the ultrasound diagnosis of liver fibrosis in schistosomiasis mansoni with inconsistent and even contradictory results. Ricard-Blum et al. (1999) found a good correlation between HA and ultrasound scores of fibrosis in an endemic area for schistosomiasis in Madagascar. Burchard et al. (1998) found no correlation and suggested that the increase of serum HA levels was actually related to inflammatory activity or liver function alterations that are absent in schistosomiasis, rather than to fibrosis itself. Until now, however, the WHO patterns for ultrasound in schistosomiasis (Niamey Working Group 2000) had not been employed in the investigation of non-invasive markers of fibrosis. An increase in HA levels during liver parenchyma inflammation has been investigated in viral hepatitis patients with or without schistosomiasis. The results have demonstrated that serum HA concentrations preferentially correlate with fibrosis, even when concurrent inflammatory activity exists (Pascal et al. 2000, Tao et al. 2003, Eboumbou et al. 2005, Zheng et al. 2005). Köpke-Aguiar et al.
(2002) evaluated serum HA in the diagnosis of hepatosplenic schistosomiasis using ROC curve analysis. HA was considered a useful marker of portal hypertension caused by schistosomiasis, as it had a diagnostic efficacy of 0.78. Herein, we found that HA correlated positively with an ultrasound diagnosis of portal hypertension. The best HA diagnostic accuracy, however, was found for the identification of fibrosis. Fibrosis is a consequence of immune modulation and tissue repair of the egg-induced granulomatous reaction in vascular and perivascular tissue (Lenzi et al. 1998, 1999, Wynn et al. 2004). HA clearance has been reported to rely on endothelial function (Fraser et al. 1986). A high HA serum concentration in patients with fibrosis might actually signal declining endothelial function due to vascular obstruction by schistosome eggs and chronic inflammation. Likewise, portal hypertension might represent a consequence of the same process, thus leading to a positive correlation with HA concentrations. The commercial competitive ELISA assay we used relies on industrial standards and has the advantage of not using radioactive reagents, as in the radiometric assays previously employed (Burchard et al. 1998, Ricard-Blum et al. 1999). Köpke-Aguiar et al. (2002) used an in-house sandwich ELISA test. The 20 µg/L (i.e. 20 ng/mL) cut-off point they reported differs from the one we found, 115.4 ng/mL. Discordances may be attributed to the assay methods. Notwithstanding, differences in diagnostic accuracy must be interpreted with caution. HA is a main component of the extracellular matrix, which increases substantially during fibrosis of any aetiology (Friedman 2003). Hence, in the present paper, the ability of HA to diagnose fibrosis in the endemic area was tested by comparing it to ultrasound. The smaller accuracy of HA in identifying portal hypertension suggests that the clinical features may not be directly determined by, although positively correlated to, the amount of liver fibrosis. To our knowledge, there is no study comparing serum HA to liver histology in human schistosomiasis mansoni. However, such a comparison has been carried out by others for different liver diseases. Their results were consistent and reliable for diagnosis of the presence and intensity of fibrosis in viral hepatitis (Guéchot et al. 1996, Zheng et al. 2002), alcoholic cirrhosis (Parés et al. 1996, Stickel et al. 2003) and non-alcoholic steatohepatitis (Sakugawa et al. 2005, Suzuki et al. 2005). Nevertheless, in the present paper, serum HA was not able to detect fibrosis progression. Studies comparing histology and ultrasound, albeit presenting reliability for the ultrasonographic diagnosis of the presence of fibrosis, have also pointed to the inaccuracy of the method in grading the intensity of fibrosis (Homeida et al. 1988, Abdel-Wahab et al. 1992, Voieta 2008). There is no consensus on the use of ultrasound as a gold standard for the diagnosis of liver fibrosis in schistosomiasis (Lambertucci et al. 2000, 2001, 2008). A recent investigation comparing ultrasound to magnetic resonance and liver histology further revealed drawbacks of the ultrasound diagnosis and grading of Symmers fibrosis (Lambertucci et al. 2002, 2004, Silva et al.
2006, Voieta 2008). Magnetic resonance, however, demands resources unavailable in most endemic countries, and large-scale liver biopsy in field-based studies contradicts elementary ethical principles. In addition, percutaneous liver biopsy itself carries technical shortcomings related to the variability and reproducibility of results (Afdhal 2004, Cheung et al. 2008). Serum C-IV has been used to diagnose liver fibrosis in diseases that evolve with sinusoidal capillarisation (Hiramatsu et al. 1995, Murawaki et al. 2001, Pereira et al. 2004, Santos et al. 2005, Halfon et al. 2006, Yoneda et al. 2007). Comparing serum C-IV levels between groups, according to the clinical classification of patients with mansoni and haematobium schistosomiasis, Shahin et al. (1992) found high serum levels in patients with hepatosplenic schistosomiasis. In Tanzania, a correlation between serum C-IV and clinical and sonographic signs of hepatosplenic schistosomiasis was observed, but the marker did not have enough sensitivity to be useful as a screening tool for fibrosis (Kardorff et al. 1999). Two independent groups in China found C-IV to be higher in subjects infected with Schistosoma japonicum (Guangjin et al. 2002) and to correlate with re-infection after treatment of schistosomiasis japonica with praziquantel (Li et al. 2000). Wyszomirska et al. (2005, 2006) investigated C-IV for the detection of schistosomiasis-related liver fibrosis in a tertiary care hospital in Brazil. Patients had higher levels than controls. Nevertheless, significant differences were found only in patients with hepatosplenomegaly and complications of portal hypertension. In a further investigation of those patients who underwent a splenectomy, the authors reported a reduction of pre-surgical levels, suggesting that C-IV expression and deposition in the liver may be influenced by the spleen. However, as in the present paper, no correlation was found between serum C-IV levels and the ultrasound diagnosis of fibrosis. We speculate that the high frequency of previous splenectomy in patients with moderate and intense fibrosis in the present paper may have hindered the identification of an increase in serum C-IV levels in parallel with the intensity of liver fibrosis. We conclude that HA has a place in the diagnosis of schistosomiasis-related liver fibrosis in field-based studies. C-IV did not identify liver fibrosis in the study subjects. A comparison of serum markers of fibrosis to histology and to magnetic resonance in selected cases will shed light on their role in the diagnosis of liver fibrosis. HA may be used as a screening test to select patients with fibrosis and, hence, direct diagnostic resources to individuals with the highest chances of having hepatosplenic schistosomiasis.
TABLE I: Clinical, epidemiological and ultrasound characteristics of the selected individuals from Brejo do Espírito Santo, Bahia, Brazil, October 2004
TABLE II: Comparison of hyaluronic acid concentration between groups of fibrosis
2017-06-17T12:46:49.033Z
2010-07-01T00:00:00.000
{ "year": 2010, "sha1": "2d3f6be4316d8f64ba6982332d1cda2da5f91897", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/mioc/a/hbk7CmnngTTGLsGfrGXgtpH/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2d3f6be4316d8f64ba6982332d1cda2da5f91897", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4877816
pes2o/s2orc
v3-fos-license
BC Walks: Replication of a Communitywide Physical Activity Campaign. Introduction Individuals not engaging in recommended amounts of moderate-intensity physical activity are deemed insufficiently active and are at greater risk of chronic disease. Social marketing strategies may promote positive changes in physical activity levels among insufficiently active individuals. Methods A quasi-experimental design was used to determine whether the results of a previous communitywide physical activity social marketing campaign conducted in Wheeling, WVa (population, 31,420) could be replicated in the larger community of Broome County, New York (population, 200,536). BC Walks promoted 30 minutes or more of moderate-intensity daily walking among insufficiently active residents of Broome County, New York, aged 40 to 65 years. Promotion activities included paid advertising, media relations, and community health activities. Impact was determined by preintervention and postintervention random-digit-dial cohort telephone surveys in intervention and comparison counties. We assessed demographics, walking behavior, moderate and vigorous physical activity, and campaign awareness. Results The paid advertising included 4835 television and 3245 radio gross rating points and 10 quarter-page newspaper advertisements. News media relations resulted in 28 television news stories, 5 radio stories, 10 newspaper stories, and 125 television news promotions. Exposure to the campaign was reported by 78% of Broome County survey respondents. Sixteen percent of Broome County participants changed from nonactive to active walkers; 11% changed from nonactive to active walkers in the comparison county (adjusted odds ratio, 1.71; 95% confidence interval, 0.99-2.95). Forty-seven percent of Broome County respondents reported any increase in total weekly walking time, compared with 36% for the comparison county (adjusted odds ratio, 1.66; 95% confidence interval, 1.14-2.44). Conclusion The BC Walks campaign replicated the earlier Wheeling Walks initiative, although increases in walking were smaller in the BC Walks campaign. Introduction Physical activity levels in the United States have remained relatively unchanged during the past 20 years (1). Physical inactivity is responsible for approximately 250,000 deaths annually in the United States (2) and is likely a major contributing factor in the epidemics of overweight, obesity, and diabetes (3). In 1995 and 1996, major public health reviews recognized the health benefits of moderate-intensity physical activity, such as walking 30 minutes daily (4,5). The recommendation of the Centers for Disease Control and Prevention (CDC), American College of Sports Medicine (ACSM), and the U.S. surgeon general is to engage in moderate-intensity physical activity at least 30 minutes per day 5 days per week. Individuals not meeting the recommendation are deemed insufficiently active and at greater risk for chronic disease (4,5). The pervasiveness of physical inactivity represents a populationwide problem (6). Most health promotion efforts, focused on individual and small group programs, have little impact (7). By contrast, populationwide methods to increase moderate-intensity daily physical activity represent a primary prevention strategy to address the growing problem of energy imbalance as well as the increases in chronic disease morbidity that confront industrialized societies (5). Early populationwide cardiovascular interventions had limited capacity to promote physical activity (8-11).
More recent campaigns, including CDC's national VERB campaign, have documented significant changes in physical activity (12-18). Social marketing strategies may promote populationwide public health change (19). Through the use of targeted mass media and environmental supports, social marketing strategies can increase public awareness and set a public agenda for improving healthy lifestyles (19-22). One social marketing initiative, Wheeling Walks, involved paid mass media, public relations, and community health activities, along with efforts focused on policy and environmental changes in the community of Wheeling, WVa (23-25). This model campaign targeted older adults with 8 weeks of media messages encouraging 30 minutes or more of daily moderate-intensity walking. Ninety percent of individuals surveyed in Wheeling reported exposure to the campaign through mass media, and the campaign resulted in a 14% increase in the number of participants who changed from nonactive to active walkers. Although model programs may succeed in their own communities, they frequently fail in other communities. To determine the effectiveness of the Wheeling Walks campaign methodology in another community, BC Walks was implemented in Broome County, New York. This paper reports on the short-term evaluation of the BC Walks campaign. Target population BC Walks was conducted in Broome County, New York (population, 200,536). The campaign targeted insufficiently active adults aged 40 to 65 years, who comprise 32% of the county's population, or 36,080 adults. Chautauqua County, New York (215 miles from Broome County), was selected as the comparison community because it has similar demographic characteristics and physical activity levels (Table 1), and it has separate and distinct newspaper, radio, and network and cable television markets. The Institutional Research Board for the Protection of Human Subjects of United Health Services Hospitals approved the intervention. Design BC Walks used a quasi-experimental design and social marketing principles to promote walking during the 8-week period of May 1 through June 26, 2003. The campaign consisted of paid media, public relations, including a speakers' bureau, and community health activities. We purchased 953 30-second advertisements on prime-time network television (equal to 4835 gross rating points) and 1645 60-second radio advertisements (3245 gross rating points). (Gross rating points are a media industry measure to assess message penetration.) In addition, we purchased 10 quarter-page advertisements in the local daily newspaper and 1314 30-second advertisements on cable television. Public relations focused on communicating the campaign message in news media. Community health activities were designed to provide social networks, to offer social support, and to reinforce the campaign message. Costs Total intervention expenditures were $155,656 (without evaluation). Media costs were $126,676, with $70,895 spent on network television advertisements, $13,900 on cable television advertisements, $32,186 on radio advertisements, and $9675 on newspaper advertisements. Personnel costs were $29,000. Thus, $4.31 was expended for each of the 36,080 targeted residents of Broome County. Measurement The impact of the intervention was determined by baseline and follow-up random-digit-dial telephone surveys in Broome and Chautauqua counties. The surveys were designed specifically to evaluate BC Walks.
One month before the campaign (baseline), we screened the first respondent aged 40 to 65 years to answer the telephone at a household to determine the person's physical activity level. Respondents who met the recommendation of CDC, ACSM, and the surgeon general (4,5) for moderate-intensity physical activity (30 minutes five times per week) or vigorous physical activity (20 minutes three times per week) were excluded from the study and were not interviewed. (CDC does not include walking as one of the criteria for establishing physical activity status.) One month following the campaign (follow-up), the same respondents interviewed at baseline were contacted again by telephone in a panel design. TNS Intersearch Corporation (Westchester, Ill) conducted the surveys using a computer-assisted telephone interview (CATI) system. The telephone survey included 56 questions at baseline and 48 questions at follow-up. Baseline respondent telephone numbers were abandoned after 10 telephone calls at follow-up. Demographics, walking behavior, and moderate and vigorous physical activity were assessed with standard questions from CDC's Behavioral Risk Factor Surveillance System (BRFSS) (26). Questions from the Wheeling Walks intervention were used to address exposure and knowledge about campaign components (23,25). We asked questions about the source of media messages reported in Broome County only. Statistical analysis The homogeneity of the demographic, health, and physical activity characteristics of the two communities was assessed using bivariate contingency table chi-square tests. The initial outcome measures related to campaign exposure and recall. We established the change in walking behavior as a second outcome measure. We constructed dichotomous and continuous outcome variables. Dichotomous outcomes included the following: 1) change from nonactive walker to active walker from baseline to follow-up; 2) an increase in weekly walking time from baseline to follow-up; and 3) an increase of 30 minutes of weekly walking time from baseline to follow-up. An active walker was defined as an individual who walks at least 30 minutes per day 5 days per week. Continuous outcome measures, reported as change from baseline to follow-up, included the following: 1) number of days walked per week; 2) minutes walked per day; 3) minutes walked per week; and 4) total weekly minutes engaged in moderate or vigorous physical activity. Based upon previous studies (16,25,27), we stratified participants into four physical activity groups according to baseline activity level. Group 1 included participants who walked fewer than 10 minutes daily at baseline; Group 2 included participants who walked from 10 to 29 minutes daily at baseline; Group 3 included participants who walked from 30 to 60 minutes daily at baseline; and Group 4 included participants who walked more than 60 minutes daily at baseline. Analyses were conducted using SAS software version 8.02 (SAS Institute Inc, Cary, NC). We compared the significance of dichotomous outcomes using chi-square tests and multiple logistic regression models to adjust for covariates. Continuous outcomes were compared between communities using the Wilcoxon rank sum test for median times and linear regression to adjust for covariates. The upper limit of weekly walk times was truncated to 840 minutes (28).
Models were adjusted for the following covariates: logarithm-transformed body mass index (continuous), employment (binomial), fair to poor general health (binomial), and active walker at baseline (binomial); a brief sketch of this outcome construction and covariate adjustment is given below. Process evaluation data The media relations campaign activities resulted in 28 television news stories, 5 radio news stories, 10 newspaper stories, and 125 television news promotions. Pledges to walk daily were made by 10,800 individuals. The BC Walks Web site had 11,360 hits, and 961 individuals logged their minutes walked. The campaign speakers' bureau made 42 presentations that were attended by 1492 people. There were 30 worksite walking programs, which included 1207 employees and their family members who pledged to walk. Five schools, with approximately 2000 students, developed walking programs. In addition, the campaign distributed 250 prescription pads promoting the campaign to physicians and nurse practitioners in Broome County. Outcome evaluation data Of the 1396 eligible respondents (aged 40 to 65) who were surveyed at baseline, 949 (68%) were identified as being insufficiently active. Of these 949 respondents, 575 resided in Broome County, and 374 resided in Chautauqua County. At follow-up, 393 (68%) of baseline respondents in Broome County and 207 (55%) of baseline respondents in Chautauqua County were reinterviewed. Table 2 shows the distribution of demographic and behavioral characteristics for participants with and without follow-up data. Most demographic characteristics were similar between intervention and control communities and between dropouts and completers (participants with follow-up information). However, compared with Chautauqua County, more Broome County completers reported fair to poor health and more reported being employed. Campaign awareness In Broome County, the number of survey respondents who reported viewing any nonspecific media messages about walking or being more active increased from 61% at baseline to 81% at follow-up, compared with a decrease from 62% to 56% in Chautauqua County (P < .001 for the difference between the two counties) (data not shown). At baseline and follow-up, campaign recall was queried. Broome County survey respondents were asked if they had heard of BC Walks, and Chautauqua County respondents were asked if they had heard of Jamestown Walks, a fabricated name. Among Broome County respondents, 36% of respondents at baseline and 78% of respondents at follow-up reported exposure to BC Walks; among Chautauqua County respondents, 19% of respondents at baseline and 17% of respondents at follow-up reported exposure to Jamestown Walks (P < .001 for the difference between the two counties). Sixty-two percent of respondents reported exposure to television advertisements; 28% reported exposure to radio advertisements; 36% reported exposure to newspaper advertisements; 43% reported exposure to television, radio, or newspaper news stories; 5% reported exposure to worksite programs; and 4% reported exposure to educational programs and the speakers' bureau. Behavior change Overall, there was a positive trend in the number of days spent walking in both the intervention and comparison communities (Table 3). Although Group 1 participants in Broome County reported walking 1.8 days more per week at follow-up, and Group 1 participants in Chautauqua County reported walking 1.5 days more per week at follow-up, the 0.3-day difference between the two counties was not significant.
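As referenced above, the sketch below illustrates how the dichotomous outcomes, the baseline strata, and the covariate-adjusted odds ratios described in the Methods could be computed. It is a minimal re-expression in Python under stated assumptions: the study itself used SAS 8.02, and all data and column names here are hypothetical placeholders, not the survey data.

# Sketch of the outcome construction and covariate-adjusted models
# described in the Methods. Illustrative only: data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "county": rng.choice(["Broome", "Chautauqua"], n),  # intervention vs comparison
    "walk_week_base": rng.integers(0, 500, n),          # weekly walking minutes, baseline
    "walk_week_follow": rng.integers(0, 900, n),        # weekly walking minutes, follow-up
    "bmi": rng.normal(27, 4, n),
    "employed": rng.integers(0, 2, n),
    "fair_poor_health": rng.integers(0, 2, n),
    "active_base": rng.integers(0, 2, n),
})

# Truncate weekly walking time at 840 minutes, as in the paper.
df["walk_week_follow"] = df["walk_week_follow"].clip(upper=840)

# Baseline strata (Groups 1-4) by average daily walking minutes.
df["group"] = pd.cut(df["walk_week_base"] / 7,
                     bins=[0, 10, 30, 60, np.inf], right=False,
                     labels=["<10", "10-29", "30-60", ">60"])

# Dichotomous outcome: any increase in total weekly walking time.
df["increased"] = (df["walk_week_follow"] > df["walk_week_base"]).astype(int)
df["intervention"] = (df["county"] == "Broome").astype(int)
df["log_bmi"] = np.log(df["bmi"])

# Logistic regression adjusting for the covariates named in the paper.
model = smf.logit("increased ~ intervention + log_bmi + employed"
                  " + fair_poor_health + active_base", data=df).fit(disp=0)
print("adjusted OR:", np.exp(model.params["intervention"]))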
Total minutes walked per week increased in Group 1 by 94 minutes and in Group 2 by 61 minutes in Broome County, compared with 70 minutes in Group 1 and 27 minutes in Group 2 in Chautauqua County. The 24-minute difference between Group 1 in each county and the 34-minute difference between Group 2 in each county, however, were not significant. Only the more sedentary Groups 1 and 2 showed gains in total minutes walked per week; declines were observed in the more active Groups 3 and 4. No significant differences or trends were observed for the nonwalking-related measures of moderate and vigorous activity. Discussion There was a short-term impact of the BC Walks campaign in Broome County that was not seen in the comparison community. Process and impact data suggest that BC Walks faithfully replicated the Wheeling Walks intervention methodology in a larger community but that BC Walks achieved smaller effects. Few studies have examined the repeatability of the additional walking questions asked in the BRFSS. In one Australian study using a representative population sample, the BRFSS walking question had acceptable repeatability, with an intraclass correlation of 0.45 (95% CI, 0.30-0.58). This was the second most reproducible of five survey-based walking estimates (29). Also, in this study, the additional walking questions on the BRFSS were more reproducible than the moderate- and vigorous-intensity physical activity questions in the BRFSS. Furthermore, the Australian estimate was also similar to one estimated for 106 American women, with an intraclass correlation for walking of 0.40 (95% CI, 0.23-0.55) (30). Moderate reliability and validity of the physical activity module of the BRFSS have been reported (31). The increase in walking was smaller in Broome County than in Wheeling (23). No statistically significant difference was observed for categorical change from nonactive to active walker in the BC Walks campaign; the Wheeling Walks campaign resulted in a significant increase of 14% in the number of participants who changed from nonactive to active walker. Regression to the mean may explain the declines in walking behavior observed in the more active Groups 3 and 4. Although data generally do not suggest that weight management and reduction in obesity are associated with 30 minutes of moderate-intensity physical activity such as walking (5), the greater magnitude of behavior change for Wheeling Walks would suggest more postcampaign caloric expenditure. But even small populationwide changes in behavior have significant implications for reducing weight gain and the prevalence of obesity. In fact, the average American is gaining 0.5 to 1.0 kg per year (32). Increasing daily walking by approximately 5 minutes would expend roughly an extra 15 kcal per day, enough to neutralize this annual weight gain (a quick arithmetic check is sketched below). Preintervention to postintervention awareness of the BC Walks campaign among Broome County telephone respondents increased from 36% to 78%. No change in awareness was observed in the comparison community, indicating that the Broome County community was aware of the campaign message. BC Walks purchased the same amount of television and radio gross rating points as Wheeling Walks. Awareness of BC Walks was nearly as high as awareness of Wheeling Walks, with 78% of Broome County respondents reporting exposure to BC Walks and 90% of Wheeling respondents reporting exposure to Wheeling Walks.
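The energy-balance claim flagged above is easy to verify. The check below assumes the commonly cited figure of roughly 7,700 kcal per kilogram of body fat; that constant is our assumption, not a number given in the paper.

# Back-of-envelope check of the walking/weight-gain arithmetic above.
# Assumes ~7,700 kcal per kg of body fat (a common rule of thumb,
# not a figure stated in the paper).
extra_kcal_per_day = 15   # from ~5 extra minutes of daily walking
kcal_per_kg_fat = 7700
kg_offset_per_year = extra_kcal_per_day * 365 / kcal_per_kg_fat
print(f"{kg_offset_per_year:.2f} kg/year")  # ~0.71 kg, within the 0.5-1.0 kg annual gain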
BC Walks clearly created interest among media gatekeepers and generated 168 news stories and promotions on the campaign message during the 8 weeks of the campaign. During the first 8 weeks of Wheeling Walks, 280 television, radio, and print news stories were generated (25). A further comparison of campaign awareness based on type of media (Table 5) shows that although the percentage of respondents reporting overall exposure was lower for BC Walks (78%) compared with Wheeling Walks (90%), the percentages are similar in both communities for reported exposure to television and radio. An exception is exposure to local news coverage: 81% of respondents in Wheeling indicated that they saw or heard news stories about the campaign; only 43% of the respondents in Broome County reported such exposure. Several factors may have contributed to the diminished campaign impact of BC Walks compared with Wheeling Walks. Although the overall messages were the same in both campaigns (walk 30 minutes or more daily), a mass media telethon event held in Broome County in conjunction with the campaign kickoff, intended to generate initial enthusiasm for walking, encouraged callers to pledge to walk at least 10 minutes per day. The event was successful, with 10,800 community members making a pledge. The 10-minute message, however, may have created ambiguity. Although the 10-minute pledge was not inconsistent with the paid advertising campaign, which encouraged sedentary people to begin with 10 minutes, increase to 20 minutes, and then to walk 30 minutes daily, the 10,800 pledge respondents may have heard only the 10-minute daily message. Campaign implementation staff must ensure that all messages are designed and delivered to promote an established norm. The smaller number of newspaper stories may also have lessened the impact of the campaign. Print news media may have a stronger impact on local community behavior than other media. The Broome County television market is more than double (890,573 viewers) that of Wheeling (418,170). Smaller media markets are easier to penetrate because they have fewer competing activities (20). In addition, the Wheeling Walks intervention staff spent 2 years planning implementation, which included a 12-week participatory process that increased social capital within the community (24). The participatory planning process resulted in the formation of community task forces to address barriers and resources associated with the development of Wheeling Walks. The Wheeling task forces were involved in conducting community focus groups as part of the formative research to develop campaign television, radio, and print advertisements. The participatory planning group developed into a community advisory commission (24). The mayor of Wheeling sanctioned the Walkable Wheeling Task Force, which he charged with providing semiannual progress reports on improvements to the walkability of the physical environment. The Wheeling community developed a great sense of ownership and engagement in the intervention. Although BC Walks had a communitywide commitment to increase physical activity and the regional metropolitan planning organization was actively involved, the process in Broome County was not as formalized at the local level as it was in Wheeling. (There was no participatory planning process, task force, community advisory commission, or environmental task force sanctioned by the mayor.) It is always a challenge to replicate programs and achieve the same efficacy as the program originators.
Broome County was chosen as the intervention site because United Health Services Hospitals had established involvement with state and local stakeholders to promote physical activity in the region. Having only one intervention community and no random assignment limits the generalizability of the results. However, most of the large communitywide interventions reviewed for this study had similar methodologic limitations (8-11). The BC Walks study, like Welch Walks (a program in a West Virginia community even smaller than Wheeling [27]) and Wheeling Walks, demonstrated an increase in walking among participants (the purpose of the intervention), but it did not demonstrate an increase in moderate or vigorous activity. The positive change in walking behavior provides some evidence for the impact of targeted campaigns in smaller communities. Future research should examine the potential effective mediators of change in social marketing campaigns. Additional trials should then empirically test the efficacy of interventions targeting those mediators. Public health intervention funding is limited and needs to be spent judiciously. Simultaneous or subsequent replication of small campaigns in similar communities will permit the empirical determination of the relative cost-effectiveness of changing physical activity behavior and of policy and environmental supports within communities.
2014-10-01T00:00:00.000Z
2006-06-15T00:00:00.000
{ "year": 2006, "sha1": "37e2bf097cf835d6908aba4f6377496847cae899", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "37e2bf097cf835d6908aba4f6377496847cae899", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250384237
pes2o/s2orc
v3-fos-license
Safety assessment of the process Ganesha Ecosphere, based on the Starlinger iV+ technology, used to recycle post-consumer PET into food contact materials Abstract The EFSA Panel on Food Contact Materials, Enzymes and Processing Aids (CEP) assessed the safety of the recycling process Ganesha Ecosphere (EU register number RECYC248), which uses the Starlinger iV+ technology. The input is hot caustic washed and dried poly(ethylene terephthalate) (PET) flakes mainly originating from collected post-consumer PET containers, with no more than 5% PET from non-food consumer applications. The flakes are dried and crystallised in a first reactor, then extruded into pellets. These pellets are crystallised, preheated and treated in a solid-state polycondensation (SSP) reactor. Having examined the challenge test provided, the Panel concluded that the drying and crystallisation (step 2), extrusion and crystallisation (step 3) and SSP (step 4) steps are critical in determining the decontamination efficiency of the process. The operating parameters to control the performance of these critical steps are temperature, air flow and residence time for the drying and crystallisation step, and temperature, pressure and residence time for the extrusion and crystallisation step as well as the SSP step. It was demonstrated that this recycling process is able to ensure that the level of migration of potential unknown contaminants into food is below the conservatively modelled migration of 0.1 μg/kg food. Therefore, the Panel concluded that the recycled PET obtained from this process is not of safety concern when used at up to 100% for the manufacture of materials and articles for contact with all types of foodstuffs for long-term storage at room temperature, with or without hotfill. The final articles made of this recycled PET are not intended to be used in microwave and conventional ovens and such uses are not covered by this evaluation. Data Recycled plastic materials and articles shall only be placed on the market if the recycled plastic is from an authorised recycling process. Before a recycling process is authorised, the European Food Safety Authority (EFSA)'s opinion on its safety is required. This procedure has been established in Article 5 of Regulation (EC) No 282/2008 on recycled plastic materials intended to come into contact with foods and Articles 8 and 9 of Regulation (EC) No 1935/2004 on materials and articles intended to come into contact with food. According to this procedure, the industry submits applications to the competent authorities of Member States, which transmit the applications to EFSA for evaluation. In this case, EFSA received from the German Competent Authority (Bundesamt für Verbraucherschutz und Lebensmittelsicherheit) an application for evaluation of the recycling process Ganesha Ecosphere, European Union (EU) register No RECYC248. The request has been registered in EFSA's register of received questions under the number EFSA-Q-2021-00303. The dossier was submitted on behalf of Ganesha Ecosphere Limited, India. Terms of Reference The German Competent Authority (Bundesamt für Verbraucherschutz und Lebensmittelsicherheit) requested the safety evaluation of the recycling process Ganesha Ecosphere, in accordance with Article 5 of Regulation (EC) No 282/2008.
Interpretation of the Terms of Reference According to Article 5 of Regulation (EC) No 282/2008 on recycled plastic materials intended to come into contact with foods, EFSA is required to carry out risk assessments on the risks originating from the migration of substances from recycled food contact plastic materials and articles into food and to deliver a scientific opinion on the recycling process examined. According to Article 4 of Regulation (EC) No 282/2008, EFSA will evaluate whether it has been demonstrated in a challenge test, or by other appropriate scientific evidence, that the recycling process Ganesha Ecosphere is able to reduce the contamination of the plastic input to a concentration that does not pose a risk to human health. The poly(ethylene terephthalate) (PET) materials and articles used as input of the process as well as the conditions of use of the recycled PET are part of this evaluation. 2. Data and methodologies Data The applicant has submitted a confidential and a non-confidential version of a dossier following the 'EFSA guidelines for the submission of an application for the safety evaluation of a recycling process to produce recycled plastics intended to be used for the manufacture of materials and articles in contact with food, prior to its authorisation' (EFSA, 2008) and the 'Administrative guidance for the preparation of applications on recycling processes to produce recycled plastics intended to be used for manufacture of materials and articles in contact with food' (EFSA, 2021). Additional information was sought from the applicant during the assessment process in response to requests from EFSA sent on 15 February 2022 and was subsequently provided (see 'Documentation provided to EFSA'). The following information on the recycling process was provided by the applicant and used for the evaluation:
• General information: general description; existing authorisations.
• Specific information: recycling process; characterisation of the input; determination of the decontamination efficiency of the recycling process; characterisation of the recycled plastic; intended application in contact with food; compliance with the relevant provisions on food contact materials and articles; process analysis and evaluation; operating parameters.
Methodologies The risk assessment principles followed for the evaluation are described here. The risks associated with the use of recycled plastic materials and articles in contact with food come from the possible migration of chemicals into the food in amounts that would endanger human health. The quality of the input, the efficiency of the recycling process in removing contaminants, as well as the intended use of the recycled plastic are crucial points for the risk assessment (EFSA, 2008). The criteria for the safety evaluation of a mechanical recycling process to produce recycled PET intended to be used for the manufacture of materials and articles in contact with food are described in the scientific opinion developed by the EFSA Panel on Food Contact Materials, Enzymes, Flavourings and Processing Aids (EFSA CEF Panel, 2011). The principle of the evaluation is to apply the decontamination efficiency of a recycling technology or process, obtained from a challenge test with surrogate contaminants, to a reference contamination level for post-consumer PET, conservatively set at 3 mg/kg PET for contaminants resulting from possible misuse.
The resulting residual concentration of each surrogate contaminant in recycled PET (C res) is compared with a modelled concentration of the surrogate contaminants in PET (C mod). This C mod is calculated using generally recognised conservative migration models so that the related migration does not give rise to a dietary exposure exceeding 0.0025 µg/kg body weight (bw) per day (i.e. the human exposure threshold value for chemicals with structural alerts for genotoxicity), below which the risk to human health would be negligible. If the C res is not higher than the C mod, the recycled PET manufactured by such a recycling process is not considered of safety concern for the defined conditions of use (EFSA CEF Panel, 2011). The assessment was conducted in line with the principles described in the EFSA Guidance on transparency in the scientific aspects of risk assessment (EFSA, 2009) and considering the relevant guidance from the EFSA Scientific Committee. General information According to the applicant, the recycling process Ganesha Ecosphere is intended to recycle food grade PET containers using the Starlinger iV+ technology. The recycled PET is intended to be used at up to 100% for the manufacture of materials and articles for direct contact with all kinds of foodstuffs for long-term storage at room temperature, with or without hotfill. The recycled pellets may also be used for sheets, which are thermoformed to make food trays. The final articles are not intended to be used in microwave or conventional ovens. 3.2. Description of the process 3.2.1. General description The recycling process Ganesha Ecosphere produces recycled PET pellets from PET containers (e.g. bottles), from post-consumer collection systems (kerbside, deposit systems and mixed waste collection). The recycling process comprises the four steps below.
Input
• In step 1, the post-consumer PET containers are processed into hot caustic washed and dried flakes. This step may be performed by a third party or by the applicant.
Decontamination and production of recycled PET material
• In step 2, the flakes are dried and crystallised in a reactor under air flow at high temperature.
• In step 3, the flakes are extruded at high temperature and then crystallised.
• In step 4, the crystallised pellets are preheated before being treated in a solid-state polycondensation (SSP) reactor at high temperature and under vacuum.
The operating conditions of the process have been provided to EFSA. Pellets, the final product of the process, are checked against technical requirements, such as acetaldehyde content, pellet size and bulk density. Characterisation of the input According to the applicant, the input material for the recycling process Ganesha Ecosphere consists of hot caustic washed and dried flakes obtained from PET containers, e.g. bottles, previously used for food packaging, from post-consumer collection systems (kerbside, deposit systems and mixed waste collection). A small fraction may originate from non-food applications. According to the applicant, the proportion will be no more than 5%. Technical data for the hot washed and dried flakes are provided, such as information on physical properties and on residual contents of moisture, metals, poly(vinyl chloride) (PVC), polyolefins and plastics other than PET (see Appendix A). 3.3. Starlinger iV+ technology 3.3.1. Description of the main steps The general scheme of the Starlinger iV+ technology, as provided by the applicant, is reported in Figure 1.
The steps are:
• SSP (step 4): The crystallised pellets are preheated in a reactor before being introduced into a semi-continuous SSP reactor running under vacuum at a high temperature and for a predefined residence time.
The process is run under defined operating parameters of temperature, pressure, air flow and residence time. Decontamination efficiency of the recycling process To demonstrate the decontamination efficiency of the recycling process Ganesha Ecosphere, a challenge test performed at pilot plant scale was submitted to EFSA. PET flakes were contaminated with toluene, chloroform, phenylcyclohexane, benzophenone and lindane, selected as surrogate contaminants in agreement with the EFSA guidelines (EFSA CEF Panel, 2011) and in accordance with the recommendations of the US Food and Drug Administration (FDA, 2006). The surrogates cover different molecular masses and polarities, so as to represent the possible chemical classes of contaminants of concern, and were demonstrated to be suitable for monitoring the behaviour of PET during recycling (EFSA, 2008). Conventionally recycled post-consumer PET flakes were soaked in a heptane/isopropanol solution containing the surrogates and stored for 14 days at 40°C. After decanting the surrogate solution, the flakes were rinsed with water and air-dried. The concentration of surrogates in these flakes was determined. The Starlinger iV+ technology was challenged at the Starlinger facilities at pilot plant scale. The contaminated flakes were introduced directly into the drier (step 2), then sampled after each step (2-4) to measure the residual concentrations of the applied surrogates. Instead of being processed continuously, the SSP reactor was run in batch mode. However, since the reactor in the process works practically with no mixing, the Panel agreed that the batch reactor in the challenge test provided the same cleaning efficiency when run at the same temperature, pressure and residence time. The decontamination efficiency of the process was calculated from the concentrations of the surrogates measured in the washed contaminated flakes before drying and crystallisation (before step 2) and after SSP (step 4). The results are summarised in Table 1. As shown in Table 1, the decontamination efficiency ranged from 90.9% for lindane to more than 99.9% for toluene, chloroform and phenylcyclohexane; the screening arithmetic these figures feed into is sketched below. Discussion Considering the high temperatures used during the process, the possibility of contamination by microorganisms can be discounted. Therefore, this evaluation focuses on the chemical safety of the final product. Technical data, such as information on physical properties and residual contents of PVC, polyolefins and metals, were provided for the input materials (i.e. washed and dried flakes, step 1). These are produced mainly from PET containers, e.g. bottles, previously used for food packaging, collected through post-consumer collection systems. However, a small fraction may originate from non-food applications, e.g. bottles from window cleaner or shampoo. According to the applicant, the collection system and the process are managed in such a way that in the input stream this fraction will be no more than 5%, as recommended by the EFSA CEF Panel in its 'Scientific opinion on the criteria to be used for safety evaluation of a mechanical recycling process to produce recycled PET intended to be used for manufacture of materials and articles in contact with food' (EFSA CEF Panel, 2011). The process is adequately described.
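To show how these efficiency figures translate into the C res / C mod screening described earlier, here is a minimal sketch of the arithmetic in Python. The efficiencies are taken from the text (with "> 99.9%" treated as its 99.9% lower bound); the benzophenone efficiency and all C mod ceilings are placeholders, since the opinion reports those values only in tables not reproduced here.

# Sketch of the EFSA evaluation arithmetic: residual surrogate
# concentration after recycling (C_res) vs. the modelled ceiling (C_mod).
REFERENCE_CONTAMINATION = 3.0  # mg/kg PET, conservative misuse level

efficiency = {                   # decontamination efficiency per surrogate
    "toluene": 0.999,            # reported as > 99.9%; lower bound used here
    "chloroform": 0.999,         # reported as > 99.9%
    "phenylcyclohexane": 0.999,  # reported as > 99.9%
    "benzophenone": 0.98,        # placeholder: value not stated in the text
    "lindane": 0.909,
}

# Placeholder C_mod ceilings (mg/kg PET); the real values are in Table 2.
c_mod = {surrogate: 0.5 for surrogate in efficiency}

for surrogate, eff in efficiency.items():
    c_res = REFERENCE_CONTAMINATION * (1.0 - eff)  # mg/kg PET remaining
    verdict = "below C_mod" if c_res <= c_mod[surrogate] else "exceeds C_mod"
    print(f"{surrogate}: C_res = {c_res:.3f} mg/kg PET ({verdict})")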
The washing and drying of the flakes from the collected PET (step 1) is conducted in different ways, depending on the plant, and according to the applicant this step is under control. The Starlinger iV+ technology comprises drying and crystallisation (step 2), extrusion and crystallisation (step 3) and SSP (step 4). The operating parameters of temperature, residence time, pressure and air flow have been provided to EFSA. A challenge test to measure the decontamination efficiency was conducted at pilot plant scale on process steps 2-4. The Panel considered that this challenge test was performed correctly according to the recommendations in the EFSA guidelines (EFSA, 2008). The fourth step is expected to be the most critical step for the decontamination, but drying and crystallisation (step 2) as well as extrusion and crystallisation (step 3) are relevant, too. Therefore, the Panel considered that these three steps (drying and crystallisation, extrusion and crystallisation, SSP) were critical for the decontamination efficiency of the process. Consequently, the temperature, the air flow and the residence time for the drying and crystallisation (step 2), as well as the temperature, the pressure and the residence time for extrusion and crystallisation (step 3) and for SSP (step 4), should be controlled to guarantee the performance of the decontamination (Appendix C). The decontamination efficiencies obtained for each surrogate, ranging from 90.9% to > 99.9%, have been used to calculate the residual concentrations of potential unknown contaminants in PET (C res) according to the evaluation procedure described in the 'Scientific opinion on the criteria to be used for safety evaluation of a mechanical recycling process to produce recycled PET' (EFSA CEF Panel, 2011; Appendix B). By applying the decontamination percentages to the reference contamination level of 3 mg/kg PET, the C res for the different surrogates was obtained (Table 2). According to the evaluation principles (EFSA CEF Panel, 2011), the dietary exposure must not exceed 0.0025 µg/kg bw per day, below which the risk to human health is considered negligible. The C res value should not exceed the modelled concentration in PET (C mod) that, after 1 year at 25°C, could result in a migration giving rise to a dietary exposure exceeding 0.0025 µg/kg bw per day. Because the recycled PET is intended for the manufacture of bottles at up to 100%, the scenario for infants has been applied (water could be used to prepare infant formula). A maximum dietary exposure of 0.0025 µg/kg bw per day corresponds to a maximum migration of 0.1 µg/kg into food and has been used to calculate C mod (EFSA CEF Panel, 2011). The results of these calculations are shown in Table 2. The relationship between the key parameters for the evaluation scheme is reported in Appendix B. As the C res values are lower than the corresponding modelled concentrations in PET (C mod), the Panel considered that under the given operating conditions the recycling process Ganesha Ecosphere using the Starlinger iV+ technology is able to ensure that the level of migration of unknown contaminants from the recycled PET into food is below the conservatively modelled migration of 0.1 µg/kg food, at which the risk to human health is considered negligible. The Panel noted that the input of the process originates from India.
In the absence of data on misuse contamination of this input, the Panel used the reference contamination of 3 mg/kg PET (EFSA CEF Panel, 2011) that was derived from experimental data from an EU survey. Accordingly, the recycling process under evaluation using the Starlinger iV+ technology is able to ensure that the level of unknown contaminants in recycled PET is below a calculated concentration (C mod) corresponding to a modelled migration of 0.1 µg/kg food. Conclusions The Panel considered that the process Ganesha Ecosphere using the Starlinger iV+ technology is adequately characterised and that the main steps used to recycle the PET flakes into decontaminated PET pellets have been identified. Having examined the challenge test provided, the Panel concluded that the three steps (drying and crystallisation, extrusion and crystallisation, and SSP) are critical for the decontamination efficiency. The operating parameters to control its performance are the temperature, the air flow and the residence time for the drying and crystallisation (step 2), as well as the temperature, the pressure and the residence time for extrusion and crystallisation (step 3) and SSP (step 4). The Panel concluded that the recycling process Ganesha Ecosphere is able to reduce foreseeable accidental contamination of post-consumer food contact PET to a concentration that does not give rise to concern for a risk to human health if: i) it is operated under conditions that are at least as severe as those applied in the challenge test used to measure the decontamination efficiency of the process; ii) the input material of the process is washed and dried post-consumer PET flakes originating from materials and articles that have been manufactured in accordance with the EU legislation on food contact materials and contains no more than 5% of PET from non-food consumer applications; iii) the recycled PET obtained from the process Ganesha Ecosphere is used at up to 100% for the manufacture of materials and articles for contact with all types of foodstuffs for long-term storage at room temperature, with or without hotfill. The final articles made of this recycled PET are not intended to be used in microwave or conventional ovens and such uses are not covered by this evaluation. Recommendation The Panel recommended periodic verification that the input material to be recycled originates from materials and articles that have been manufactured in accordance with the EU legislation on food
2022-07-09T15:29:15.029Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "3cacf39558e644d8ce0a7a62e1d8a64bd33834d6", "oa_license": "CCBYND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.2903/j.efsa.2022.7386", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "861269dcdb4987de7c976d821104a80e859dc21f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
259229356
pes2o/s2orc
v3-fos-license
Arginine-Rich Cell-Penetrating Peptide-Mediated Transduction of Mouse Nasal Cells with FOXP3 Protein Alleviates Allergic Rhinitis Intranasal corticosteroids are effective medications against allergic rhinitis (AR). However, mucociliary clearance promptly eliminates these drugs from the nasal cavity and delays their onset of action. Therefore, a faster, longer-lasting therapeutic effect on the nasal mucosa is required to enhance the efficacy of AR management. Our previous study showed that polyarginine, a cell-penetrating peptide, can deliver cargo to nasal cells; moreover, polyarginine-mediated cell-nonspecific protein transduction into the nasal epithelium exhibited high transfection efficiency with minimal cytotoxicity. In this study, polyarginine-fused forkhead box P3 (FOXP3) protein, the "master transcriptional regulator" of regulatory T cells (Tregs), was administered into the bilateral nasal cavities of the ovalbumin (OVA)-immunoglobulin E mouse model of AR. The effects of these proteins on AR following OVA administration were investigated using histopathological, nasal symptom, flow cytometry, and cytokine dot blot analyses. Polyarginine-mediated FOXP3 protein transduction induced Treg-like cell generation in the nasal epithelium and allergen tolerance. Overall, this study proposes FOXP3 activation-mediated Treg induction as a novel and potential therapeutic strategy for AR, providing a potential alternative to conventional intranasal drug application for nasal drug delivery. Introduction Allergic rhinitis (AR) is an inflammatory disease triggered by an immunoglobulin E (IgE)-mediated atopic response. It is characterized by nasal mucosal inflammation and manifests as rhinorrhea, sneezing, itching, and nasal congestion [1,2]. The rising incidence of AR poses serious public health issues [1,2]. Various therapeutic modalities, including antihistamines, steroids, leukotriene inhibitors such as montelukast, and immunotherapy, are used to treat AR. Among these, intranasal corticosteroids are the most effective and are regarded as the treatment of choice. Nasal administration is primarily suitable for potent drugs, as only a limited volume can be dropped or sprayed into the nasal cavity. Continuous and frequent administration of drugs may be necessary to achieve long-term effects in the nasal epithelium because drugs are rapidly eliminated from the nasal cavity by mucociliary clearance [3]. The physicochemical properties of a drug, as well as its potency and molecular size, are crucial factors in formulating a solution for nasal delivery. Numerous studies have been conducted to evaluate the appropriate types of drugs and methods for improving intranasal administration [4,5]. However, few studies have examined nasal administration of large proteins and peptides, as small substrates are more easily delivered into the brain than large peptides [6]. Thus, a novel and alternative method for delivering large proteins and peptides is needed to adjust the drug absorption time, concentration, and contact time between the drug and nasal mucosa, and to prevent removal of the drug by mucociliary clearance. Among intranasal corticosteroids, mometasone furoate (MF) is a topically effective new-generation intranasal corticosteroid with low systemic absorption and a high affinity for glucocorticoid receptors [2,7]. However, the slow breakdown of MF in suspension delays the commencement of its activity and enables its mucociliary clearance from the nasal cavity [8].
A possible solution for this issue is the use of cell-penetrating peptides (CPPs), a diverse family of short peptides that can enter several mammalian cell types [9,10]. Various macromolecules can be attached to these peptides and subsequently internalized, and cargo molecules that penetrate cells maintain their biological activities [9,10]. Among the different CPPs, arginine-rich CPPs have been most widely studied. An example is the protein transduction domain of the HIV-type 1 TAT protein, which contains a high proportion of arginine and lysine residues that are responsible for its ability to penetrate the plasma membrane [9,10] when added to culture media. Moreover, simple polyarginine peptides with an optimal length of 9-11 residues show significantly higher cell penetration rates than TAT [11]. Our earlier studies have demonstrated that a peptide containing nine arginine residues (9R) efficiently transported enhanced green fluorescent protein (EGFP) into the nasal epithelium and brain of mice [12] without causing any damage to the nasal mucosa or brain. The protein was also transported into the developing inner ears of mice [13] and adult guinea pigs [14] without impairing their auditory or vestibular function. These reports demonstrate the efficacy of the 9R peptide in delivering cargo into mouse nasal mucosa and maintaining prolonged contact with the nasal mucosa, bypassing drainage by mucociliary clearance: polyarginine-mediated protein transduction in cells results in EGFP signals that last for 12-96 h, whereas EGFP signals without polyarginine-induced transduction last for only 12 h [12].

Sublingual immunotherapy (SLIT) is another AR-therapeutic strategy that is clinically used to treat type 1 allergies [15]. SLIT induces allergen tolerance, presumably through the reprogramming of allergen-specific T helper 2 (Th2) cells to Th1 cells and the production of peripheral CD4+ regulatory T cells (Tregs). Because the transfer of SLIT-induced Tregs provides tolerance against antigens, production of Tregs is necessary for SLIT efficacy [15,16]. Forkhead box P3 (FOXP3) is widely recognized as a Treg marker and is designated as the "master transcriptional regulator" of Tregs [17,18]. FOXP3 expression may stimulate the differentiation of naïve and memory CD4+ T cells into Tregs [19]. Therefore, FOXP3-mediated Treg induction is a potential therapeutic strategy for AR. To date, cell-based therapy such as adoptive cell transfer of engineered Tregs has provided an effective therapeutic alternative to combat autoimmune diseases. However, no studies have reported directly on the application of the FOXP3 gene or protein for treating AR. The purpose of the current investigation was to explore the feasibility of 9R-mediated FOXP3 protein transduction into the nasal epithelium for therapeutic application in AR.

Materials and Methods

Protein Preparation

The recombinant FOXP3 and FOXP3-9R proteins were prepared in phosphate-buffered saline (PBS). Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) was performed to determine the molecular weight of FOXP3-9R (Supplementary Figure S1b). The concentrations of the FOXP3 and FOXP3-9R protein stocks were 0.3 mg/mL and 0.2 mg/mL, respectively; both were diluted with PBS to the same working concentration (0.1 mg/mL) and stored at −80 °C.

Immunoglobulin E (IgE) Transgenic Mice

Six-week-old male ovalbumin (OVA)-specific IgE transgenic mice (BALB/cA-Tg(IgE-H01-4)Rin Tg(IgE-kL01-4)Rin/Jcl) were obtained from CLEA Japan Inc. (Tokyo, Japan). OVA-IgE mice are genetically modified BALB/c mice that exhibit both acute and chronic allergic reactions with persistent production of IgE upon intraperitoneal administration of egg albumin [21,22]. The animals were kept in a climate-controlled space maintained at 25 °C and 50% relative humidity. All mice were fed a standard commercial pellet diet and had unlimited access to water. The Kitano Hospital Committee on the Use and Care of Animals approved all animal studies, which were conducted as per recognized veterinary standards (protocol number: A1900003). Euthanasia was performed via cervical dislocation, ensuring minimal distress and pain to the animals.

Protein Administration into the Mouse Nasal Cavity

FOXP3-9R and FOXP3 (control) proteins (10 µL aliquots, 0.1 mg/mL) were administered into the bilateral nasal cavities of the animals (5 mice/group) using a micropipette (QSP 10 µL filter tips; Thermo Fisher Scientific, Waltham, MA, USA). Before nasal administration, the micropipette tip was positioned 1-2 mm from the entrance of the nasal cavity while mice were awake in the supine position. After nasal treatment, mice were placed in a prone posture for 30 min. We divided the OVA-IgE mice into 6 groups: FOXP3 96 h before OVA (FOXP3 injected 96 h before OVA administration), FOXP3-9R 96 h before OVA (FOXP3-9R injected 96 h before OVA administration), FOXP3 24 h before OVA (FOXP3 injected 24 h before OVA administration), FOXP3-9R 24 h before OVA (FOXP3-9R injected 24 h before OVA administration), FOXP3 immediately after OVA (FOXP3 injected immediately after OVA administration), and FOXP3-9R immediately after OVA (FOXP3-9R injected immediately after OVA administration). An overview of the experimental procedure is illustrated in Figure 1.
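The six-group design above reduces to a small lookup of protein identity and administration time; the following minimal sketch is only a hypothetical encoding of that design (group labels as above; the dose arithmetic follows from 10 µL of 0.1 mg/mL per naris):

```python
# Hypothetical encoding of the six experimental groups described above:
# (protein, administration time in hours relative to the OVA challenge).
groups = {
    "FOXP3 96 h before OVA":          ("FOXP3",    -96),
    "FOXP3-9R 96 h before OVA":       ("FOXP3-9R", -96),
    "FOXP3 24 h before OVA":          ("FOXP3",    -24),
    "FOXP3-9R 24 h before OVA":       ("FOXP3-9R", -24),
    "FOXP3 immediately after OVA":    ("FOXP3",      0),
    "FOXP3-9R immediately after OVA": ("FOXP3-9R",   0),
}

DOSE_UG = 10e-3 * 0.1 * 1e3  # 10 uL of 0.1 mg/mL = 1 ug protein per naris

for name, (protein, t) in groups.items():
    print(f"{name}: {protein}, t = {t:+d} h, {DOSE_UG:.1f} ug/naris")
```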
Assessment of Nasal Symptoms

Nasal itching in OVA-IgE mice was investigated to assess the allergic reaction induced by intraperitoneal administration of OVA. Following FOXP3-9R, FOXP3, or saline administration, mice were observed for itching for 10 min, after a 10 min adaptation period at each time point [23], on days 1 (3 h, 12 h), 2 (24 h), and 3 (48 h) (Figure 1).

Immunohistochemistry

In the groups injected with FOXP3-9R or FOXP3 24 h before OVA administration, a total of 10 mice (n = 5 per time point) were euthanized 3 h after OVA administration; none of the animals died before this point (Figure 1). This endpoint was selected to span the expected period of maximum nasal symptoms and to determine any detrimental effects on the nasal mucosa resulting from the intranasal administration of FOXP3-9R and FOXP3. Mouse heads were dissected to perform a nasal examination, after which the dissected heads were fixed in 4% paraformaldehyde in PBS at 4 °C for 24 h. These tissues were then aligned, frozen on dry ice, and kept at −80 °C before sectioning. For cryostat sectioning, nasal lesions were embedded in optimal cutting temperature compound (Sakura Finetek Japan Co., Ltd., Tokyo, Japan) and sectioned serially at 12 µm thickness. The cryostat sections were then washed twice with PBS (5 min/wash) and incubated with fluorescein isothiocyanate (FITC)-conjugated mouse anti-CD4 (RM4-5) antibody (Thermo Fisher Scientific) and allophycocyanin (APC)-conjugated anti-mouse/rat FOXP3 (FJK-16s) antibody (Thermo Fisher Scientific) for 1 h at 25 °C. The sections were then incubated with Hoechst 33258 (Molecular Probes, Eugene, OR, USA) for 10 min at 25 °C for nuclear staining. The specimens were mounted on glass slides using Fluoromount (Diagnostic BioSystems, Pleasanton, CA, USA) and analyzed and imaged using a BZ9000 fluorescence microscope (Keyence, Osaka, Japan).

Hematoxylin and Eosin (H&E) Staining

To assess the adverse effects associated with AR, nasal epithelial tissues of mice injected with FOXP3 or FOXP3-9R proteins 24 h before OVA administration were stained using H&E 3 h after OVA administration. A BZ9000 fluorescence microscope (Keyence) was used to analyze and image the samples.

Antibody Staining and Flow Cytometry

To examine Treg expression, nasal and spleen cells were extracted from treated mice by gentle processing between the ends of two sterile frosted slides. Cell surface staining and flow cytometric analysis of FITC-conjugated mouse anti-CD4 (RM4-5), PE-conjugated mouse anti-CD25 (PC61.5), and APC-conjugated anti-mouse/rat FOXP3 (FJK-16s) expression were performed using the eBioscience FOXP3 Mouse Regulatory T cell Staining Kit #2 (Thermo Fisher Scientific) as previously described [24]. FITC-conjugated CD4+ T cells and APC-conjugated FOXP3+ T cells were positively selected using a CytoFLEX flow cytometer (Beckman Coulter, Brea, CA, USA) to isolate Tregs.
The purity of sorted populations was estimated to be >99%. Fluorescence-activated cell sorting data were analyzed using the Kaluza 2.1 software package (Beckman Coulter) [25].

Dot Blotting

To examine the expression of cytokines, dot blot analysis was performed using a mouse cytokine antibody array kit (Membrane, 22 Targets, ab133993; Abcam, Cambridge, UK) following the manufacturer's instructions. Briefly, nasal epithelial tissues were homogenized and sonicated for 10 min to produce a uniform protein suspension, which was diluted to a concentration of 250 µg/mL as quantified with a BCA protein assay kit (Thermo Fisher Scientific), as previously described [26]. After mixing, 1 mL aliquots were spotted onto membranes and incubated for 2 h at 25 °C. These blots were rinsed and incubated in washing buffer. The blots were then subjected to biotin-streptavidin labeling and detection using horseradish peroxidase-conjugated antibodies. Blots were dried, wrapped in plastic film, and imaged using the LAS-3000 imaging system (FujiFilm, Tokyo, Japan). ImageJ 1.50i software (National Institutes of Health, Bethesda, MD, USA) was used to analyze the results, as previously reported [27,28].

Statistical Analysis

GraphPad Prism version 9.50 for Windows (GraphPad Software, San Diego, CA, USA) was utilized for all statistical analyses. Values are expressed as the mean ± standard error. Repeated-measures two-way ANOVA with Tukey's post hoc correction was used to determine variations among groups. p-values < 0.05 were considered statistically significant.

Nasal Symptoms after Administration of OVA in OVA-IgE Mice

After OVA administration, nasal itching in OVA-IgE mice was monitored at each time point. Nasal symptoms of AR began 1 h after administration, peaked at 3 h, and subsided by 48 h (Figure 2, saline condition).

Effect of FOXP3-9R on Nasal Symptoms

We assessed whether the injection of FOXP3-9R or FOXP3 into the mouse nasal mucosa affected nasal symptoms. A FOXP3-9R injection 24 h before OVA administration was the most effective in suppressing nasal itching, followed by a FOXP3-9R injection 96 h before OVA administration and FOXP3-9R and FOXP3 injections immediately after OVA administration (Figure 2).
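As a minimal sketch of the post hoc step described in the Statistical Analysis section, the following runs Tukey's HSD on hypothetical itching counts at a single time point (the counts, group labels, and rates are illustrative only; the full repeated-measures model would additionally account for the within-subject time factor):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical nasal-itching counts at the 3 h time point, 5 mice per group.
data = {
    "saline":               rng.poisson(30, 5),
    "FOXP3_24h_before":     rng.poisson(28, 5),
    "FOXP3-9R_24h_before":  rng.poisson(12, 5),
}
counts = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), 5)

# Tukey's HSD for all pairwise group comparisons at alpha = 0.05.
print(pairwise_tukeyhsd(counts, labels, alpha=0.05))
```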
Histopathological Changes

We performed H&E staining of the nasal epithelium of mice receiving FOXP3-9R or FOXP3 injection 24 h before OVA administration. Mice in the FOXP3 24 h before OVA and saline groups showed mucosal edema or eosinophil infiltration and dilated secretory ducts of the lamina propria (Figure 3). Conversely, the pseudostratified columnar epithelium structure was normal in the FOXP3-9R 24 h before OVA group; in this group, kinocilia were uniform on the epithelial surface. Moreover, the secretory ducts of the lamina propria were not dilated, and there was no evidence of mucosal edema or eosinophil infiltration in this group of animals (Figure 3).

Flow Cytometric Analysis and Immunohistochemistry

We conducted a flow cytometric analysis of the nasal epithelium from OVA-IgE mice after OVA administration. The CD4+FOXP3+ cell numbers in the nasal epithelium were considerably different between mice receiving FOXP3 and FOXP3-9R, both 24 h before OVA administration and immediately after OVA administration (24 h before administration: 3.7% and 6.2%, respectively, p < 0.001, Figure 4a,b; immediately after administration: 2.9% and 4.5%, respectively, p < 0.001, Figure 4b). However, no major differences were detected when the proteins were administered 96 h before OVA administration (0.8% and 1.2%, respectively, p = 0.48, Figure 4b). Among the FOXP3-9R conditions, CD4+FOXP3+ cell numbers were the highest in the FOXP3-9R 24 h before OVA group (Figure 4a,b). The frozen nasal epithelium sections were immunostained with the appropriate antibodies to analyze the expression of CD4+FOXP3+ cells. The results showed that CD4+FOXP3+ cells congregated in the lamina propria in the FOXP3-9R 24 h before OVA group but not in the FOXP3 24 h before OVA group (Figure 4c). CD4+FOXP3+ cells were not found in the brain of the FOXP3-9R 24 h before OVA group (Supplementary Figure S2).
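The CD4+FOXP3+ percentages above correspond to a two-channel double-positive gate; the following minimal sketch computes such a fraction on simulated event data (the intensity distributions and gate thresholds are hypothetical, not those used in the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n_events = 50_000
# Hypothetical log-scale FITC (CD4) and APC (FOXP3) intensities per event.
fitc = rng.normal(2.0, 0.8, n_events)
apc = rng.normal(1.0, 0.8, n_events)

FITC_GATE, APC_GATE = 3.0, 2.5  # illustrative gate thresholds
double_pos = (fitc > FITC_GATE) & (apc > APC_GATE)
print(f"CD4+FOXP3+ fraction: {100 * double_pos.mean():.2f}%")
```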
Cytokine Analysis

Nasal epithelium was isolated from the six experimental groups for bioassays; different ratios of Treg and CD4+ pathogenic T (CD4+ Path) cells were analyzed for proinflammatory cytokines. CD4+ Path T cells in mice that received FOXP3-9R 24 h before OVA had lower levels of interleukin (IL)-4, IL-5, and IL-17 than those in mice that received FOXP3 24 h before OVA (Figure 5a,b and Supplementary Figure S3; IL-4, p = 0.002; IL-5, p = 0.03; IL-17, p < 0.001). Regarding the time course, mice receiving FOXP3-9R 24 h before OVA administration also had lower levels of these cytokines than mice receiving FOXP3 immediately after OVA administration.

Discussion

This study demonstrated that transduction of polyarginine-fused recombinant FOXP3 into the nasal epithelium of OVA-IgE transgenic mice may induce immune tolerance via homing of Tregs and activation of Th2, thereby preventing AR-related nasal symptoms. Administration of FOXP3 without polyarginine resulted in partial transduction; however, it did not alleviate the nasal symptoms. Similar to findings in previous studies, mice that underwent 9R-mediated protein transduction did not exhibit deterioration in terms of nasal symptoms or histopathological findings. In our study, AR-related histopathological findings, such as mucosal edema, increased inflammation, cilia loss, vascular dilatation, glandular hyperplasia, eosinophil infiltration in the lamina propria, and dilatation and vacuolization in the pseudostratified columnar epithelium on the nasal mucosa surface, were observed in the FOXP3 and saline groups but not in the FOXP3-9R groups. Further, FOXP3-9R mice that maintained nasal mucosal integrity did not exhibit local side effects such as epithelial necrosis, irritation, or hemorrhage. Notably, the histopathological findings were substantially different between mice receiving FOXP3-9R and FOXP3.
CPPs help cargo molecules penetrate cells and preserve their biological activity [8,11]. Therefore, another possible use of this approach is the CPP-mediated delivery of oligonucleotides for RNA-based gene silencing to suppress or inhibit gene expression [29]. Consequently, CPPs may serve as effective therapeutic tools against various human pathologies.

FOXP3 transduction can trigger immune tolerance against AR by inducing Treg-like cell production. FOXP3 also contributes to the maintenance of autoimmune system homeostasis. For example, extracellular adenosine is transported into the cytoplasm more easily by FOXP3 [30]. In our study, immune tolerance was likely the result of a SLIT-like response, which leads to anti-allergen tolerance, possibly through the reprogramming of allergen-specific Th2 cells into Th1 cells and the generation of peripheral Tregs; additionally, the SLIT response suppresses ILs [16,31], which was also observed in this study. These findings suggest that despite the involvement of inflammatory cytokines in FOXP3 transcriptional repression, signals mediated by T cell receptors and cytokine receptors enhance the suppressive activity of Tregs to prevent inflammation after an inflammatory challenge. Recent research has established the existence of finely and temporally regulated systems that control the role of Tregs during an inflammatory response, demonstrating that activation- and inflammation-induced alterations in experienced Tregs are lost over time to prevent overall immunosuppression.

To date, there are only a few reports regarding the use of Tregs in allergy therapy. Xu et al. reported that hydrogen-rich saline induces Tregs and alleviates AR [32]. Recently, cell-based therapy such as adoptive cell transfer of engineered Tregs was found to provide an effective therapeutic alternative to combat autoimmune diseases [33]. However, these previous studies did not specifically address the role of the FOXP3 gene or FOXP3 protein. The use of Tregs in therapeutic applications may involve risks, as Tregs transferred to patients can be altered into Th17 cells that lack regulatory functions [34]; Th17 cells are proinflammatory cells formed in environments similar to those of Tregs [34]. In addition, the use of Tregs in therapy can generate tumors through the action of FOXP3 [35,36]. Our study used locally administered FOXP3, resulting in safer outcomes than those obtained with systemic administration. In addition, no tumorigenesis was observed in our experiment. Thus, our findings suggest that the induction of Tregs via FOXP3 activation and the local administration of FOXP3 before the onset of nasal symptoms have potential for AR therapy applications. Our study provides insights into putative novel AR-therapeutic approaches.

This study had several limitations. First, we did not monitor any long-term effects. As previously described for systemic administration, long-term observation may reveal tumorigenesis, off-target effects, and the genesis of chronic inflammation through immune-response-driven changes in T cell phenotype [37]. In addition, SLIT typically warrants long-term therapy. Compared to long-term SLIT, our study might be too brief for acquiring immune tolerance. In a previous study using Balb/cJ mice, SLIT was performed 5 days a week for 6-9 weeks [38]. That study showed that the effect of SLIT is time-dependent: treatment for 9 weeks ameliorated clinical symptoms, eosinophilia, allergen-specific IgE, and the local T cell response.
A shorter treatment period still had an effect on the levels of IgE, whereas there was no effect on the clinical symptoms [38]. Such time-dependency has also been observed in humans: in a large, tablet-based SLIT study, 8 weeks or more of pre-seasonal treatment was significantly more effective than 6 weeks with respect to reduction in symptom score and medication requirements [39]. Second, our therapeutic approach focused on prevention and not the treatment of existing disease conditions. Third, the mechanism of Treg induction remained undetermined in our study. Experimental evidence has shown that the immune system can induce peripheral mechanisms of immune tolerance to allergens [40]. The generation of Tregs can be influenced by factors such as FOXP3+ Tregs themselves, pathogen-derived molecules, and exogenous signals such as histamine, adenosine, vitamin D3 metabolites, or retinoic acid [40,41]. While the molecular mechanisms of Treg generation in vivo are not fully understood, recent studies have provided insights into these processes. There is a counter-regulation between Th2 and Treg responses in healthy individuals and allergy patients. The transcription factor GATA3 directly inhibits the expression of FOXP3, hindering tolerance induction by Th2-type immune responses. In autoimmune disease models, a dichotomy between pathogenic Th17 and protective Treg responses has been observed, with TGF-β contributing to the generation of both; the presence of IL-6 shifts the balance towards Th17 generation [42,43]. Retinoic acid also influences the balance between inflammatory Th17 cells and suppressive Tregs by inhibiting Th17 formation and enhancing FOXP3 expression through a signaling pathway independent of STAT3/STAT5 [41,44]. Overall, further analysis of long-term observations, side effects, and drug function enhancement is required for the clinical application of our therapeutic strategy.

Conclusions

Our study demonstrated that effective intranasal FOXP3-9R delivery into the nasal mucosa of mice induced Treg-like cells and generated anti-AR tolerance. Furthermore, cytokine assays indicated that FOXP3-9R transduction before OVA administration induced a SLIT-like response in the mouse nasal epithelium. These results suggest that CPP-mediated local protein transduction and immune tolerance therapy are potential alternatives to conventional protein transduction methods for the delivery of therapeutically relevant molecules for AR therapy. These findings shed light on new AR therapeutic approaches. However, further research is warranted for the definitive establishment of AR therapy using this strategy.

Informed Consent Statement: Not applicable.

Data Availability Statement: All data will be provided upon reasonable request.
Universal energy-accuracy tradeoffs in nonequilibrium cellular sensing

We combine stochastic thermodynamics, large deviation theory, and information theory to derive fundamental limits on the accuracy with which single cell receptors can estimate external concentrations. As expected, if estimation is performed by an ideal observer of the entire trajectory of receptor states, then no energy consuming non-equilibrium receptor that can be divided into bound and unbound states can outperform an equilibrium two-state receptor. However, when estimation is performed by a simple observer that measures the fraction of time the receptor is bound, we derive a fundamental limit on the accuracy of general nonequilibrium receptors as a function of energy consumption. We further derive and exploit explicit formulas to numerically estimate a Pareto-optimal tradeoff between accuracy and energy. We find this tradeoff can be achieved by nonuniform ring receptors with a number of states that necessarily increases with energy. Our results yield a novel thermodynamic uncertainty relation for the time a physical system spends in a pool of states, and generalize the classic 1977 Berg-Purcell limit on cellular sensing along multiple dimensions.

Single cells possess extremely sensitive mechanisms for detecting chemical concentrations through the binding of molecules to cell-surface receptors (fig. 1(a), [1-3]). This remarkable capacity may require energy consumption, and raises important questions about fundamental limits on the accuracy of cellular chemosensation, both as a function of the energy consumed by arbitrarily complex nonequilibrium receptors, and the computational sophistication of downstream observers of these dynamics. A seminal line of work by [5,6,9] addressed this question for equilibrium receptors with two states, bound and unbound, with the binding transition rate proportional to the external concentration c (fig. 1(b)). They studied the accuracy of a concentration estimate ĉ computed by a simple observer (SO) that only has access to the fraction of time the receptor is bound over a time T, finding a fundamental lower bound on the fractional error of this estimate:

    ε_c² ≡ ⟨(δĉ)²⟩ / c² ≥ 2 / N̄.    (1)

Here ⟨(δĉ)²⟩ is the variance of the estimate ĉ, and N̄ is the mean number of binding events in time T. For over 30 years, eq. (1) was thought to constitute a fundamental physical limit on the accuracy of cellular chemosensation. However, recent work focusing on highly specific receptor models [7-10] revealed this limit could be circumvented in two qualitatively distinct ways. First, in the simple case of a two-state receptor, an ideal observer (IO) that has access to the entire receptor trajectory of binding and unbinding events could outperform the SO by performing maximum-likelihood estimation (MLE), obtaining an error ε_c² = 1/N̄ [8]. The IO in this case outperforms the SO by a factor of two, by employing the mean duration of unbound intervals and ignoring the duration of bound intervals, which contribute spurious noise because the transition rate out of the bound state is independent of c. Second, even when the estimate is performed by a SO, the Berg-Purcell limit can be overcome by energy-consuming non-equilibrium receptors with more than two states (fig. 1(d)), reflecting different receptor conformations or phosphorylation states [7,8,10]. Notably, [10] numerically observed a tradeoff between error and energy for a very specific class of receptor models with states arranged in a ring.
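The two-state setting behind eq. (1) is simple enough to simulate directly; the following minimal sketch (hypothetical rates k_on = k_off = 1) estimates the concentration from the bound-time fraction by inverting q = c/(c + K_d) and compares the empirical fractional error to 2/N̄:

```python
import numpy as np

rng = np.random.default_rng(0)
k_on, k_off, c, T = 1.0, 1.0, 1.0, 5000.0  # hypothetical rates and duration

def bound_fraction(rng):
    """Simulate a two-state receptor; return (time bound)/T and #bindings."""
    t, state, t_bound, n_bind = 0.0, 0, 0.0, 0
    while t < T:
        rate = k_on * c if state == 0 else k_off
        dwell = min(rng.exponential(1.0 / rate), T - t)
        if state == 1:
            t_bound += dwell
        elif t + dwell < T:
            n_bind += 1          # a binding event completed within [0, T]
        t += dwell
        state ^= 1
    return t_bound / T, n_bind

ests, n_tot = [], 0
for _ in range(200):
    q, n = bound_fraction(rng)
    ests.append(k_off * q / (k_on * (1.0 - q)))  # invert q = c/(c + K_d)
    n_tot += n

eps2 = np.var(ests) / c**2
print(f"empirical eps_c^2 = {eps2:.2e}, Berg-Purcell 2/N = {2/(n_tot/200):.2e}")
```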
While these more recent works demonstrate circumventions of the Berg-Purcell limit in highly specific models, they leave open foundational theoretical questions about the general interplay between estimation accuracy and energy consumption across the large space of possible complex non-equilibrium receptors. Specifically, for a SO, which may be implemented in a more biologically plausible manner than an IO, can we derive general analytic bounds or exact formulas for accuracy in terms of energy expenditure for general classes of non-equilibrium receptors? Can we exploit these formulas to find Pareto-optimal receptors through numerical optimization? We address these questions by combining and extending stochastic thermodynamics [11-16] and large deviation theory (LDT) of Markov chains [4,5,14,15,19,21,23]. We derive a novel thermodynamic uncertainty relation connecting fluctuations in the time a stochastic process occupies a subset of states, and the energy dissipated by that process. This relation is of independent interest to non-equilibrium statistical mechanics [5,23-25] and could have applications not only to cellular chemosensation, as discussed here, but also to understanding relations between energy and accuracy in other biological processes, like cellular motors and biological clocks [26-28].

Overall framework. A general non-equilibrium receptor can be modeled as a continuous-time Markov process [10,29], with n different conformational or signaling states indexed by i = 0, ..., n − 1. The transition rate from state i to j is Q_ij. We assume some subset of these rates are binding transitions with rates proportional to concentration c (see [30, §VII] for a discussion of more complex dependence on concentration). Over observation time T, the receptor moves stochastically through a sequence of states yielding a random trajectory x(t). An IO that has access to the entire trajectory (fig. 1(c)) can compute a MLE of the concentration c via

    ĉ = argmax_c P[x(t) | c],    (2)

where P[x(t) | c] denotes the probability distribution over receptor state trajectories at a given concentration c. To discuss the SO, we further assume that binding transitions occur from a group of states defined as unbound, nonsignaling states, N, to states we call the bound, signaling states, S (fig. 1(d)). While an IO can access the trajectory, we assume the SO can only access the fraction of time spent in the signaling states, perhaps by counting the number of signaling molecules generated while the receptor occupies those states. We note these assumptions are consistent with previous work [8,10], though they exclude receptors with intermediate states that are bound but not signaling [31].

Receptor complexity and the ideal observer. We first ask if more states and transitions could yield improved accuracy relative to an IO of a two state receptor [8]. The properties of Markov processes [30, §B.II] imply the log probability of trajectory x(t) reduces to [14,15,19,21]

    log P[x(t) | c] = −T ∑_{i≠j} ( p_i^T Q_ij − φ_ij^T log Q_ij ) + O(1),    (3)

where p_i^T is the empirical density, or fraction of time the trajectory x(t) spends in state i, and φ_ij^T is the empirical flux, or the number of transitions from state i to state j divided by T. Maximizing eq. (3) w.r.t. c yields the IO estimate ĉ in eq. (2). When only transition rates from N to S are proportional to c, as in fig. 1(d), this estimate reduces to ĉ = c R_φ / R_p, where R_φ ≡ ∑_{i∈N, j∈S} φ_ij^T is the receptor's empirical binding rate along trajectory x(t), and R_p ≡ ∑_{i∈N, j∈S} p_i^T Q_ij is the expected binding rate conditioned on the empirical density p_i^T. For two states, ĉ is inversely proportional to the total duration of unbound intervals, agreeing with [8]. However, this result goes beyond [8] to reveal what function of the trajectory x(t) an optimal IO must compute to estimate concentration for arbitrarily connected receptors, as in fig. 1(d). The Cramér-Rao bound [32] lower bounds the fractional error ε_c² of the IO through the Fisher information J_c of the receptor trajectory,

    J_c = J_c⁰ + T ∑_{i≠j} π_i (∂_c Q_ij)² / Q_ij.    (4)

Here π_i is the steady-state probability of state i, and J_c⁰ = ∑_i π_i (∂_c log π_i)² is the Fisher information that the initial state contains about c. The term linear in T reflects additional information obtained from the entire trajectory. Note only transition rates modulated by c contribute information. This result holds for arbitrary receptors as in fig. 1(c), but simplifies to J_c = J_c⁰ + T R_π / c², where R_π = ∑_{i∈N, j∈S} π_i Q_ij is the expected steady-state binding rate, for receptors of the form in fig. 1(d) with transition rates linear in c. Then the Cramér-Rao (CR) bound ε_c² ≥ 1/(c² J_c) yields

    ε_c² ≥ 1 / N̄    (5)

for large T, where N̄ = T R_π is the expected number of binding events.
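Eq. (5) can be evaluated for any candidate receptor from its stationary distribution and steady-state binding flux alone; the following minimal sketch uses a hypothetical 3-state receptor (state 0 unbound, states 1-2 bound, binding rates proportional to c):

```python
import numpy as np

def steady_state(Q):
    """Stationary distribution pi of a rate matrix Q (rows sum to zero)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

c, T = 1.0, 1000.0               # hypothetical concentration and duration
k = np.array([[0.0, 1.0, 0.5],   # binding rates 0 -> {1, 2}, scaled by c
              [2.0, 0.0, 1.0],   # unbinding / internal rates (c-independent)
              [1.5, 0.5, 0.0]])
Q = k.copy(); Q[0, 1:] *= c
np.fill_diagonal(Q, -Q.sum(axis=1))

pi = steady_state(Q)
R_pi = pi[0] * Q[0, 1:].sum()    # steady-state binding flux out of state 0
N_bar = T * R_pi                 # mean number of binding events
print(f"CR bound on fractional error: eps_c^2 >= 1/N = {1.0 / N_bar:.2e}")
```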
For two states,ĉ is inversely proportional to the total duration of unbound intervals, agreeing with [8]. However, this result goes beyond [8] to reveal what function of the trajectory x(t) an optimal IO must compute to estimate concentration for arbitrarily connected receptors, as in fig. 1(d). The Cramér-Rao bound [32] lower bounds the fractional error 2 c of the IO through the Fisher information J c of the receptor trajectory Here π i is the steady-state probability of state i, and J 0 c = ∑ i π i (∂ c log π i ) 2 is the Fisher information that the initial state contains about c. The term linear in T reflects additional information obtained from the entire trajectory. Note only transition rates modulated by c contribute information. This result holds for arbitrary receptors as in fig. 1(c), but simplifies to J c = J 0 c + T c 2 R π , where R π = ∑ i∈N ,j∈S π i Q ij is the expected steady-state binding rate, for receptors of the form in fig. 1(d) with transition rates linear in c. Then the Cramér-Rao (CR) bound yields for large T , where N is the expected number of binding events. In [30] we directly calculate the variance of the IO concentration estimate and demonstrate its error saturates the CR bound eq. (5). This result generalizes [8] from simple equilibrium two state receptors to arbitrarily connected nonequilibrium receptors of the form in fig. 1(d), confirming that any such energy-consuming nonequilibrium receptor with binding rates proportional to concentration cannot outperform an IO of a simple equilibrium two-state receptor. Fluctuations and gain determine simple observer performance. The IO estimate in eq. (2) requires computing a complex function of the receptor trajectory x(t), which may not be biologically plausible. We therefore explore a SO, which estimates concentration using only the fraction of time the receptor is bound (signaling), denoted by q T = ∑ i∈S p T i . Due to randomness in x(t), q T fluctuates about its mean q π = ∑ i∈S π i , which depends on the concentration c. Given the observable q T , one can then estimateĉ by solving q π (ĉ) = q T . Standard error propagation then yields, Thus a larger variance ⟨(δq T ) 2 ⟩ in the time spent bound increases the error 2 c , while a larger gain dq π dc decreases it. We next compute and bound this variance and gain. A thermodynamic uncertainty relation for densities. We first derive a lower bound on the variance ⟨(δq T ) 2 ⟩ using stochastic thermodynamics and LDT [4] of Markov processes. A random trajectory x(t) of duration T in a general Markov process will yield an empirical density p T i and an empirical current j T ij = φ T ij − φ T ji , which corresponds to the net number of transitions from i to j divided by T . As T → ∞, these random variables will converge to their mean values, corresponding to the steady state probabilities π i = lim T →∞ p T i and steady state currents j π ij ≡ π i Q ij − π j Q ji = lim T →∞ j T ij . At large but finite T , p T and j T fluctuate about their means, and their joint distribution takes the form P(p T = p, j T = j) ∝ e −T I(p,j) [14,15,19,21,30]. Here I(p, j) is a large deviation rate function that achieves its minimum at p = π and j = j π , and describes how fluctuations in p T and j T are suppressed. This rate [14,15,19,21]. Similarly, at large but finite T , the distribution of the fraction of time time spent bound, namely q T = ∑ i∈S p i , takes the form P(q T = q) ∝ e −T I(q) . 
Here the large deviation rate function I(q) achieves its minimum at the mean value q_π ≡ ∑_{i∈S} π_i, and describes how deviations in q^T from its mean are suppressed. The variance of q^T is given by 1/(T I''(q_π)) [4], so any upper bound on I''(q_π) will yield a lower bound on the variance of q^T. One can obtain I(q) from the more general rate function I(p, j) through the contraction principle [4], which states that I(q) = inf_{p,j} I(p, j), subject to the constraints ∑_{i∈S} p_i = q, ∑_i p_i = 1, and ∑_j j_ij = 0 for all i. Instead of calculating this directly, the infimum can be bounded by evaluating I(p, j) for a choice of j = j*(q) and p = p*(q) satisfying the same constraints:

    I(q) ≤ I( p*(q), j*(q) ).    (7)

A simple choice of p*(q) and j*(q), given explicitly in [30, §III], satisfies the constraints mentioned above, as well as j*_ij(q_π) = j^π_ij and p*_i(q_π) = π_i, ensuring that the inequality in eq. (7) is saturated at the minimum q = q_π; a free coefficient in this choice is tuned to maximize the tightness of the eventual bound. Following the approach of [23], inserting our choice of p*, j* into I(p, j) leads to an explicit upper bound on I''(q_π) in terms of the total energy consumption rate of the receptor (in units of k_B T), defined as Σ_π = ∑_{i<j} j^π_ij log( φ^π_ij / φ^π_ji ) [11], where φ^π_ij ≡ π_i Q_ij is the steady-state flux from i to j. We then find a lower bound on the variance of q [30, §III]:

    ⟨(δq^T)²⟩ ≥ 2 q_π² (1 − q_π)² / [ T ( Φ_π + Σ_π/4 ) ],    (8)

where Φ_π is the steady-state probability flux between the signaling and nonsignaling pools (equal to the binding rate R_π for receptors of the form in fig. 1(d)). Equation (8) can be thought of as a new, general thermodynamic uncertainty relation which implies that the more energy TΣ_π a system consumes, the more reliable the occupation time for a pool of states can become. This can be compared to another thermodynamic uncertainty relation connecting increased energy consumption to a reduction in current fluctuations in general stochastic processes [23,24]. Our result in eq. (8) adds pooled state occupancies to the class of observables for which thermodynamic uncertainty relations can be generally proven.

An energy-accuracy tradeoff for the simple observer. The gain c dq_π/dc in eq. (6) can be calculated for arbitrary nonequilibrium processes using the known relationship between first passage times and the sensitivity of Markov chain stationary distributions [6,30]. Our formulae in [30, §IV] simplify to c dq_π/dc = q_π(1 − q_π) for nonequilibrium receptors of the form in fig. 1(d) with only one nonsignaling state and binding transitions linear in c. Inserting this result for gain and the relation for variance in eq. (8) into eq. (6) yields a general lower LDT-bound on error in terms of energy TΣ_π and mean binding events N̄:

    ε_c² ≥ 2 / ( N̄ + T Σ_π / 4 ).    (9)

This recovers Berg-Purcell eq. (1) at zero energy rate Σ_π. Overall, eq. (9) is a generalization of the Berg-Purcell limit to general energy consuming non-equilibrium receptors of the form in fig. 1(d), with one nonsignaling (unbound) state, but an arbitrary network of signaling (bound) states. This LDT-bound is clearly not tight as Σ_π → ∞, but for finite Σ_π it provides a simple energy-based bound on error that is independent of both the detailed number and connectivity of receptor states. At Σ_π ≥ 4N̄/T the CR-bound in eq. (5) becomes more stringent than the LDT-bound in eq. (9), and any bound on the IO must also apply to the SO. Thus the combined bound (the maximum of the CR and LDT bounds) yields a forbidden region of error versus energy (fig. 2).

Exact estimation error for the simple observer. Equation (9) provides a lower bound on the fractional error because our choice of p*, j* in eq. (7) does not achieve the infimum of the contraction.
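The LDT bound of eq. (9) requires only N̄ and the entropy production rate Σ_π; the following minimal sketch evaluates both for the same hypothetical 3-state receptor used above, assuming the bound as stated in eq. (9):

```python
import numpy as np

def steady_state(Q):
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def entropy_rate(Q, pi):
    """Sigma_pi = sum_{i<j} (phi_ij - phi_ji) log(phi_ij/phi_ji), in k_B T."""
    phi = pi[:, None] * Q
    n = len(pi)
    return sum((phi[i, j] - phi[j, i]) * np.log(phi[i, j] / phi[j, i])
               for i in range(n) for j in range(i + 1, n)
               if phi[i, j] > 0 and phi[j, i] > 0)

# Hypothetical 3-state receptor as before (state 0 unbound).
c, T = 1.0, 1000.0
Q = np.array([[0.0, 1.0, 0.5], [2.0, 0.0, 1.0], [1.5, 0.5, 0.0]])
Q[0, 1:] *= c
np.fill_diagonal(Q, -Q.sum(axis=1))

pi = steady_state(Q)
N_bar = T * pi[0] * Q[0, 1:].sum()
Sigma = entropy_rate(Q, pi)
ldt = 2.0 / (N_bar + T * Sigma / 4.0)   # eq. (9)
print(f"LDT bound: eps_c^2 >= {ldt:.2e}  (CR bound: {1 / N_bar:.2e})")
```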
When the contraction of the rate function I(p, j) is expanded as a Taylor series in (q − q_π), one can compute the optimal p and j to leading order, which allows us to find the second derivative term in the Taylor expansion of I and hence the uncertainty

    ε_c² = (2/N̄) ( T̄_unbind / T̄_hold ),    (10)

where T̄_unbind is the mean time until the next unbinding event given the receptor is in a signaling (bound) state, T̄_unbind = ∑_{i∈S} T_iN P(x(t) = i | x(t) in S); T̄_hold is the mean duration of a full journey through the signaling states, T̄_hold = ∑_{i∈S} T_iN P(x(t) = i | x(t) just entered S); and T_iN is the mean first passage time from state i to the single nonsignaling state. In [30, §V.C] we have numerically verified this formula by comparing it with Monte-Carlo simulations. For a two-state system, T̄_unbind = T̄_hold because the unbinding process is memoryless, and eq. (10) reduces to the Berg-Purcell limit in eq. (1). Thus eq. (10) is another generalization of Berg-Purcell to general energy consuming non-equilibrium receptors of the form in fig. 1(d), with one nonsignaling state, but an arbitrary network of signaling states. While eq. (10), and its generalization to multiple nonsignaling states [30, §V.C], gives an exact formula for error that allows us to search for Pareto-optimal receptors, our LDT-bound in eq. (9) makes manifest a connection between error and energy.

Pareto-optimal error reduction via energy consumption requires more states and is achievable by rings. Figure 2 compares the error bounds in eqs. (5) and (9) to numerical minimization of eq. (10) at fixed energy consumption for receptors of increasing size. The union of the CR and LDT lower bounds is respected by all models found. For a given energy consumption and number of states n, we minimized error over all possible fully connected receptors for every partition of n into signaling and nonsignaling states. We observed that for all receptor sizes and energies studied, the error achieved with a single nonsignaling state is not outperformed by any other partition [30, §VI.B], so we focus on this case. To explore the role of the number of states in the Pareto-optimal tradeoff between energy and accuracy, in fig. 2 we numerically minimized error over all possible fully connected receptors, including the number of states (up to 10), for a given energy consumption. We found that as energy increases, near minimal error is achievable only by increasing the number of states ([30, fig. 6]).

Figure 2: Energy-accuracy tradeoffs. The LDT bound eq. (9) (solid black) and the CR bound eq. (5) (dashed black) together yield a forbidden region of achievable error and energy (gray). Solid circles show the minimal error achieved by fully connected (small circles) and ring (large circles) receptors after numerically minimizing eq. (10) (see [30, §V]) w.r.t. all transition rates, with an energy constraint. Data point color reflects the size of the smallest receptor whose error is within 1% of the fractional error of the best performing receptor obtained at each energy, indicating the smallest number of states for which adding states (up to 10) would not significantly lower error. Thin lines show the performance of n-state ring receptors with uniform transition rates in each direction. (i-iii) Three optimal receptors found at different energy consumption levels indicated by arrows. Colored (gray) nodes in the diagrams represent signaling (nonsignaling) states. Node radii are proportional to steady state probabilities π_i and edge widths are proportional to steady state fluxes.
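For a receptor of the fig. 1(d) form with a single nonsignaling state, T̄_unbind and T̄_hold in eq. (10) follow from mean first-passage times to that state; the following minimal sketch evaluates them for the same hypothetical 3-state receptor, assuming eq. (10) as stated above:

```python
import numpy as np

def steady_state(Q):
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def mfpt_to_0(Q):
    """Mean first-passage times T_i0 to state 0: solve Q[1:,1:] t = -1."""
    t = np.linalg.solve(Q[1:, 1:], -np.ones(Q.shape[0] - 1))
    return np.concatenate(([0.0], t))

# Hypothetical receptor with a single nonsignaling state 0 (fig. 1(d) form).
c, T = 1.0, 1000.0
Q = np.array([[0.0, 1.0, 0.5], [2.0, 0.0, 1.0], [1.5, 0.5, 0.0]])
Q[0, 1:] *= c
np.fill_diagonal(Q, -Q.sum(axis=1))

pi = steady_state(Q)
t0 = mfpt_to_0(Q)
q_pi = pi[1:].sum()
T_unbind = (pi[1:] * t0[1:]).sum() / q_pi   # weight by P(state i | in S)
p_entry = Q[0, 1:] / Q[0, 1:].sum()         # entry distribution into S
T_hold = (p_entry * t0[1:]).sum()
N_bar = T * pi[0] * Q[0, 1:].sum()
print(f"eps_c^2 = (2/N) T_unbind/T_hold = {2/N_bar * T_unbind/T_hold:.2e}")
```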
Figure 2(i-iii) depicts optimal minimal-size receptors obtained at different levels of energy consumption. At low energy consumption the LDT bound is tight, and the optimal receptor is equivalent to a two-state receptor, with the signaling states behaving like one coarse-grained state (see [30, §VI.A]); there is no advantage to adding more than 3 states. The LDT-bound eq. (9) becomes increasingly loose as energy consumption increases, and the minimal achievable error of optimal receptors at fixed n saturates at a level that depends on the number of states. At high energy consumption, the optimal receptor approaches a many-state uniform ring with more asymmetric transition rates at higher energies. As the number of states n → ∞, the minimal achievable error at high energy consumption saturates the CR-bound eq. (5). Thus the combined LDT and CR bound is tight at both high and low energy. We repeated the optimization with receptors restricted to ring topologies with arbitrary transition rates, and found that the best possible rings perform indistinguishably from fully connected networks at every energy level (fig. 2). This suggests that the Pareto-optimal tradeoff between energy and accuracy is achieved by non-uniform ring networks of increasing size and energy consumption.

Uniform ring receptors are not Pareto-optimal. For ring receptors with uniform transition rates in each direction, we can analytically compute T̄_unbind, T̄_hold, and therefore ε_c² in eq. (10) as a function of n, Σ_π, and N̄, yielding the closed-form expression of eq. (11) [30, §V], in which σ is defined via Σ_π/R_π = σ tanh(σ/2n). This error versus energy for n = 3, ..., 10 is plotted in fig. 2. The lower envelope of these curves, corresponding to minimizing eq. (11) w.r.t. n at fixed energy per binding event Σ_π/R_π, yields an upper bound on the numerically derived Pareto-optimal tradeoff. As seen in fig. 2, at intermediate energy consumption, this upper bound is slightly higher than the minimal error achieved by nonuniform rings, indicating nonuniform transition rates are required for ring receptors to be optimal at intermediate energy. The error of uniform rings can nevertheless approach the CR bound at large energy and n. The optimality of single path rings is reminiscent of the optimality of single path chains for hitting times, observed numerically in [35] for a different notion of energy. However, large uniform rings can be highly suboptimal under a SO at low energy: as Σ_π → 0, ε_c² → (2/N̄) n(n+1)/(6(n−1)). This reproduces Berg-Purcell in eq. (1) for n = 2, 3, but is much worse for n > 3 (fig. 2). Thus larger receptors must consume more energy to be optimal.

Discussion. We derived several general results, eqs. (4), (5), (9) and (10), delineating fundamental performance limits of cellular chemosensation using arbitrarily complex energy consuming nonequilibrium receptors, as a joint function of observation time, energy consumption rate, number of states, and the sophistication of the downstream observer. Along the way we have also derived a general thermodynamic uncertainty relation, eq. (8), which reveals one must pay a universal energetic cost for reliable occupation time in any physical process. We hope these analytic relations between time, energy and accuracy will find further applications in myriad biological and physical processes [27,36-42].

Supplementary Material

In this supplement we provide complete derivations of several results presented in the main text, as well as background material on Markov processes and large deviation theory for a general physics audience.
In §II below we derive the maximum-likelihood estimate (MLE) and eq. (4) of the main text. The MLE reveals the computation the ideal observer (IO) must make to construct an estimate of the concentration from knowledge of the entire trajectory of receptor states over a given time interval. Correspondingly, eq. (4) describes the Fisher information that the entire receptor state trajectory contains about the external concentration. The reciprocal of this Fisher information bounds the error of the IO estimate through the Cramér-Rao bound. In sections III and IV we derive the main text's eqs. (8) and (9). Equation (8) describes a thermodynamic uncertainty relation revealing that energy must be spent to reduce fluctuations in the time a physical process spends in a subset of states. Equation (9) describes how this thermodynamic uncertainty relation, when combined with a computation of the receptor gain, yields a lower bound on estimation error in terms of energy consumption. In §V we use large deviation theory to derive exact formulae for the fractional error for the ideal observer (IO) and the simple observer (SO), including the special case of a uniform ring receptor. The second formula (main text eq. (10)) was used for the numerical optimization of SO performance over the space of receptors in fig. 2 of the main text. In §VI we provide details for the numerical computations in the main text fig. 2. Finally, to make this supplement self contained, we provide appendices A and B with brief reviews of the theory of continuous-time Markov processes and the large deviation theory for empirical density, flux and current.

A. Fisher information in a Markovian trajectory

In this section we derive the Fisher information from an extended observation of a system with Markovian dynamics, eq. (4) in the main text. We first consider a discrete time Markov process, and will later take the limit as the size of the discrete time steps ∆t becomes vanishingly small. The discrete-time transition matrix is given by M, where M_ij is the probability of transition from state i to state j if the system is in state i at a particular time step. For a set of states labeled by i, we define π_i as the steady state distribution, which satisfies πM = π and has elements which sum to 1. The matrix M can be expanded in terms of the continuous time transition rate matrix Q, which has elements Q_ij and obeys ∑_j Q_ij = 0 (see eq. (83) in §A.I). We would like to consider the general probability of a Markovian trajectory from a state x_0 at t = 0 to state x_n at t = n∆t. Assuming that the system begins in the steady state distribution, the probability of this trajectory in discrete time is given by

    P[x_0, ..., x_n] = π_{x_0} ∏_{k=0}^{n−1} M_{x_k x_{k+1}}.    (12)

We can now directly calculate the Fisher information matrix J_µν ≡ ⟨(∂_µ log P)(∂_ν log P)⟩ for this distribution with respect to the parameters λ_µ, using the notation ∂_µ ≡ ∂/∂λ_µ (eq. (13)). We now recognize that the Fisher information matrix eq. (13) can be rewritten (using the facts ∑_k M_jk = 1 and ∑_j π_j M_jk = π_k) as in eqs. (14)-(15), where J⁰_µν is the Fisher information matrix for a random variable representing a single observation of the system state, and the indices k and j index the time steps in the measurement interval. The expression eq. (15) simplifies if we write the sums inside the brackets on the right hand side out term-by-term: for k = 0, j = 0, the second term on the right hand side simplifies as in eq. (16), and for k = 0, j = 1 it vanishes, as in eq. (17). In the same fashion, all k = j terms in eq. (15) give an expression similar to eq. (16) and all k ≠ j terms vanish as in eq. (17).
Our expression for the Fisher information matrix then becomes eq. (18). Given that M is not changing in time, after relabeling x_k → i and x_{k+1} → j, all terms in the sum over k are identical, and we therefore find eq. (19). Lastly, we take the continuous time limit by sending ∆t → 0. For infinitesimal ∆t, M_ij ≈ δ_ij + Q_ij ∆t (eq. (20)), and we can rewrite eq. (19) accordingly. In the limit ∆t → 0, ∆t log ∆t → 0, and log(1 + Q_ii ∆t) ≈ Q_ii ∆t. Defining T ≡ n∆t, we therefore find

    J_µν = J⁰_µν + T ∑_{i≠j} π_i (∂_µ Q_ij)(∂_ν Q_ij) / Q_ij,    (21)

where we have recognized that the i = j terms from eq. (20) all vanish in the limit ∆t → 0. We then assume that the signal to be estimated is a scalar denoted by c, which could represent an external concentration of some ligand. For a scalar parameter, the Fisher information of the entire trajectory then becomes

    J_c = J⁰_c + T ∑_{i≠j} π_i (∂_c Q_ij)² / Q_ij,    (22)

which is eq. (4) in the main text. If we specialize to the models of receptors studied in the main text, the only off-diagonal transition rates that depend on c are those along the edges in the set B⃗: the edges that start in the set N and end in S. As those transition rates are proportional to c, eq. (22) reduces to

    J_c = J⁰_c + T R_π / c².    (23)

This leads to a lower bound on the uncertainty of any unbiased estimate of c, via the Cramér-Rao bound [1,2].

B. Maximum likelihood estimation for the ideal observer

In the previous section we computed the Fisher information for the ideal observer, which leads to a lower bound on the uncertainty of any estimate of c. In general, the maximum likelihood estimator saturates the Cramér-Rao bound asymptotically, in the limit of a large number of independent observations [3]. We compute this estimator in this section. We will postpone calculating its variance to §V.A. In §B.II we see that, when the duration of observation is large, the likelihood of any single trajectory collapses to a function of certain summary statistics: the empirical density p_i^T, the fraction of time spent in state i, and the empirical flux φ_ij^T, the rate at which transitions from state i to j occur (see §B.I for precise definitions). In eq. (106) we see that the likelihood is

    log P[x(t) | c] = −T ∑_{i≠j} ( p_i^T Q_ij − φ_ij^T log Q_ij ),    (24)

where Q_ij is the source of dependence on c. If we use the notation φ^p_ij = p_i^T Q_ij, the maximum of this function must satisfy

    ∑_{i≠j} ( φ_ij^T − φ^p_ij ) ∂_c log Q_ij = 0.    (25)

Now we can specialize to the models of receptors studied in the main text, where the only off-diagonal transition rates that depend on c are those along the edges in the set B⃗. As those transition rates are proportional to c, eq. (25) reduces to R_φ = R_p. Because R_p is proportional to c, the maximum likelihood estimator is

    ĉ = c R_φ / R_p.    (27)

We will compute the variance of this estimator for large T in §V.A.

III. A THERMODYNAMIC UNCERTAINTY PRINCIPLE FOR DENSITY

Here we present a derivation of eq. (8) in the main text, which constitutes a thermodynamic uncertainty relation connecting fluctuations in the fraction of time a physical process spends in a pool of states to the energy consumption rate of that process. This uncertainty relation reveals that one cannot reduce fluctuations in total occupation time without paying an energy cost. We make use of a known result that the empirical density and currents for continuous-time Markov processes obey a large deviation principle with a known joint rate function. The large deviation rate function I(p, j) describes both fluctuations in the empirical quantities p and j around their steady states and highly unlikely large deviations [4]. This rate function is known to take the following form [5] (see also §B.IV):

    I(p, j) = ∑_{i<j} Ψ( j_ij, j^p_ij, a^p_ij ),    (28)

with (dropping the state indices i, j for notational simplicity)

    Ψ(j, j̄, a) = j [ arcsinh(j/a) − arcsinh(j̄/a) ] − [ √(a² + j²) − √(a² + j̄²) ],    (29)

where a^p_ij ≡ 2√(p_i p_j Q_ij Q_ji) and j^p_ij ≡ p_i Q_ij − p_j Q_ji.
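The per-edge function Ψ of eqs. (28)-(29) can be checked numerically: it should be nonnegative and vanish only at j = j̄. A minimal sketch, with hypothetical values of a and j̄, assuming the form stated above:

```python
import numpy as np

def psi(j, jbar, a):
    """Per-edge rate function Psi(j, jbar, a) of eqs. (28)-(29)."""
    return (j * (np.arcsinh(j / a) - np.arcsinh(jbar / a))
            - (np.sqrt(a**2 + j**2) - np.sqrt(a**2 + jbar**2)))

a, jbar = 2.0, 0.5  # hypothetical edge activity and typical current
for j in np.linspace(-3, 3, 7):
    print(f"Psi({j:+.1f}) = {psi(j, jbar, a):.4f}")   # all values >= 0
print("zero at j = jbar:", np.isclose(psi(jbar, jbar, a), 0.0))
```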
We also require that the probability current is conserved, ∑_j j_ij = 0 for all nodes indexed by i. For the purposes of this sensing problem, we are interested in the rate function of the density in the subset of states we call the signaling states, q = ∑_{i∈S} p_i. In the main text, we argued that we can bound this rate function by repeated application of the contraction principle, such that

    I(q) ≤ I( p*(q), j*(q) ),    (30)

where j* and p* are arbitrary choices for j and p in place of evaluating the infimum. As discussed in the main text, we are interested in the variance of the signaling density q, which is given by 1/(T I''(q_π)) [4]. Therefore, we are interested in bounding the quantity I''(q_π). For any choice of j*_ij(q) and p*_i(q) that satisfy j*_ij(q_π) = j^π_ij and p*_i(q_π) = π_i, the second derivative of the rate function is given by eq. (32). This sum can be split into three contributions (eq. (33)), where S⃗ is the set of transitions between signaling states, N⃗ the transitions between nonsignaling states, and B⃗ the transitions from nonsignaling to signaling states. Our choice of j* must satisfy the condition ∑_j j*_ij = 0, and our choice of p* must satisfy the conditions ∑_i p*_i = 1 and ∑_{i∈S} p*_i = q. We also require j*_ij(q_π) = j^π_ij and p*_i(q_π) = π_i. With the benefit of hindsight, we can then choose p*(q) and j*(q) as in eq. (34). Defining

    Σ_X ≡ ∑_{(i,j)∈X, i<j} j^π_ij log( φ^π_ij / φ^π_ji )    (35)

as the steady state energy consumption rate (in units of k_B T) due to transitions along edges in the sets X = {S⃗, N⃗, B⃗}, and

    Φ_π ≡ ∑_{i∈S, j∈N} j^π_ij    (36)

as the flux due to transitions from the signaling states to the nonsignaling states, we find an upper bound on each contribution, eq. (37), where we made use of an elementary logarithmic inequality that in general holds, becoming an approximate equality for receptors near equilibrium. (When φ^π_ij < φ^π_ji the inequality is reversed, but the factor of φ^π_ij − φ^π_ji in eq. (37) is negative in such cases.) Applying this inequality to eq. (37) and using eq. (35), we find bounds on the contributions from S⃗ and, by the same arguments, from N⃗ (eqs. (38)-(39)); for the term contributed by transitions between the signaling and nonsignaling states we find eq. (40). Plugging eqs. (38) to (40) into eq. (33), we arrive at our final bound for the second derivative of the rate function of q evaluated at q_π:

    I''(q_π) ≤ ( Φ_π + Σ_π/4 ) / ( 2 q_π² (1 − q_π)² ).    (41)

Since the variance of q^T at large T is 1/(T I''(q_π)), we therefore find that the uncertainty in q is bounded by the energy consumption and flux:

    ⟨(δq^T)²⟩ ≥ 2 q_π² (1 − q_π)² / [ T ( Φ_π + Σ_π/4 ) ].    (43)

This is eq. (8) in the main text.

IV. COMPUTING THE RECEPTOR GAIN

Equation (6) from the main text shows that we need an expression for dq_π/dc, the rate of change of the signaling density q with respect to the concentration estimate ĉ. Here we present the derivation of the expression used in the main text for systems with only one nonsignaling state. As discussed in the main text, this receptor gain plays a role in the estimation error of the simple observer (SO), with larger gain leading to smaller error. Given an empirical observation of the signaling density, we can estimate the concentration by asking the question: "For what value of c would this value of q be typical?" For any value of c, the typical q is the one determined by the steady-state distribution: q_π(c) = ∑_{i∈S} π_i(c), with π_i varying with c via the transition rates Q_ij ∝ c for i ∈ N, j ∈ S. Thus, the concentration estimate ĉ(q) is the solution to the equation q_π(ĉ) = q, and therefore dq/dĉ = dq_π/dc |_{c=ĉ}. Using the result from [6] (see eq.
IV. COMPUTING THE RECEPTOR GAIN

Equation (6) from the main text shows that we need an expression for dq^π/dc, the rate of change of the signaling density, q, with respect to the concentration estimate, ĉ. Here we present the derivation of the expression used in the main text for systems with only one nonsignaling state. As discussed in the main text, this receptor gain plays a role in the estimation error of the simple observer (SO), with larger gain leading to smaller error.

Given an empirical observation of the signaling density, we can estimate the concentration by asking the question: "For what value of c would this value of q be typical?". For any value of c, the typical q is the one determined by the steady-state distribution: q^π(c) = Σ_{i∈S} π_i(c), with π_i varying with c via the transition rates Q_ij ∝ c for i ∈ N, j ∈ S. Thus, the concentration estimate ĉ(q) is the solution to the equation q^π(ĉ) = q, and therefore dq/dĉ = dq^π/dc |_{c=ĉ}.

Using the result from [6] (see eq. (94), §A.IV), the effect of a perturbation to the rate matrix, Q, on the steady-state distribution π can be written in terms of the mean first-passage times, T, where T_ij is the mean first-passage time from state i to state j for i ≠ j and 0 for i = j (see §A.III). We are interested in the gradient of q^π = Σ_{k∈S} π_k. Furthermore, the only off-diagonal transition rates that depend on c are Q_ij ∝ c for i ∈ N and j ∈ S, so the derivative is a sum over these binding edges. From [7] (see eq. (91), §A.III), we note that Σ_j Q_ij T_jk = δ_ik/π_i − 1. Then we can write dq^π/dc as a main term plus a remainder A, where A is defined by this decomposition. When detailed balance is satisfied, π_i Q_ij is symmetric in i, j, whereas T_ik − T_jk is antisymmetric, so A = 0. Similarly, when there is only one nonsignaling state, the sum over i and j consists of one term with i = j, which gives zero. More generally, A = 0 if there are no transitions between nonsignaling states with nonzero rates and unbalanced fluxes. Therefore, all detailed-balanced systems and all systems with only one nonsignaling state have

    c ( dq^π/dc ) = q^π ( 1 − q^π ),  i.e.  q^π(c) = c / ( c + K_d ),

where K_d is the dissociation constant, the concentration at which q^π = 1/2.
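Away from these special cases, the gain can simply be evaluated numerically. Below is a minimal sketch; the network and its rates are hypothetical, with binding edges carrying Q_ij = c k_ij as in the text, and the gain c dq^π/dc is obtained by a central finite difference.

import numpy as np
from scipy.linalg import null_space

k_bind = {(0, 1): 2.0}                      # hypothetical binding rate constants (N -> S)
base = np.array([[0.0, 0.0, 0.0],
                 [0.5, 0.0, 1.0],
                 [1.0, 0.5, 0.0]])          # c-independent off-diagonal rates
S = [1, 2]                                  # signaling states; state 0 is nonsignaling

def q_pi(c):
    Q = base.copy()
    for (i, j), k in k_bind.items():
        Q[i, j] = c * k                     # binding rates scale linearly with c
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))     # enforce zero row sums
    pi = null_space(Q.T)[:, 0]
    pi /= pi.sum()
    return pi[S].sum()

c, h = 1.0, 1e-6
gain = c * (q_pi(c + h) - q_pi(c - h)) / (2 * h)     # c dq^pi/dc
print(q_pi(c), gain, q_pi(c) * (1 - q_pi(c)))        # for one nonsignaling state
                                                     # the last two should agree (cf. §IV)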
V. EXACT ERROR FORMULAE FOR THE IDEAL AND SIMPLE OBSERVERS

In this section we compute the fractional error, ⟨δĉ²⟩/c², for the ideal observer and show that it saturates the Cramér-Rao bound (eq. (5) in the main text). We will then derive the expression for the simple observer's fractional error that we used in numerical optimization (eq. (10) and fig. 2 in the main text).

In §II.B we saw that the maximum likelihood estimator could be written in terms of the empirical densities and fluxes as ĉ = c R^T / R^p(c). In §B.III we see that the empirical densities and fluxes obey a large deviation principle. Therefore, the concentration estimate also obeys a large deviation principle described by the contraction over all (p, φ) consistent with a given value of ĉ. We have a constrained optimization problem for each possible value of ĉ, so we have a Lagrangian for each value of ĉ, where α, β, γ_i are Lagrange multipliers and μ_i, ν_ij are Karush-Kuhn-Tucker multipliers, satisfying μ_i ≥ 0 and μ_i ∂L/∂μ_i = 0, the allowed region being ∂L/∂μ_i ≤ 0, and similarly for the ν_ij. The Lagrange/KKT multipliers will take different values for each ĉ as well. The conditions for the infimum follow by differentiation, where e_S is a vector of ones for states in S and zero elsewhere, and e_N is the reverse. (In general, given a vector v and a set of states X, the vector v_X has the components of v on X and zero elsewhere.)

To calculate the variance of ĉ for large observation time, T, we only need the second derivative of the rate function at ĉ = c. So we Taylor expand the minimizers of eq. (48) in powers of (ĉ − c). Because the zeroth-order parts of p are nonzero, and we are only considering infinitesimal fluctuations, the inequality constraints will be loose and we can set the μ_i to zero. Some of the φ^π_ij could be zero, but as we shall see below, those components do not receive any corrections and we can set the ν_ij to zero as well. The Taylor series expansion of I(p, φ) begins at second order. Therefore, expanding eq. (49) to zeroth order gives the stationarity conditions at (π, φ^π). If we multiply the second equation by Q_ij, sum over j ≠ i, and add the result to the first equation, we find Qγ = αe. The only solutions are α(c) = 0 and γ_i(c) = constant. The original equations then imply that β(c) = 0. Then the Taylor series expansion of eq. (48) is given by eq. (51). If we minimize this expression with respect to p′_i and φ′_ij, we find the first-order conditions. We can see that α′ = 0 and γ′_i = constant with the same method used for the zeroth-order parts. This leaves us with conditions involving β′ alone. To determine β′ we can look at the β′ constraint in eq. (51). It shows that, to first order in (ĉ − c), we require R′ − R^{p′} = R^π/c and therefore β′ = 1/c. We can substitute these results into eq. (51) to find

    I_ĉ(ĉ) = ( R^π / 2c² ) ( ĉ − c )² + O( (ĉ − c)³ ),

so that the variance of ĉ at large T is c²/(T R^π). This saturates the Cramér-Rao bound, eq. (5) in the main text.

B. Exact variance of the signaling density

First we compute the variance of the signaling density q, which is the fraction of time the receptor is bound and signaling along a single trajectory, by solving the contraction to leading order in (q − q^π). The contraction of the rate function from empirical density, p, and current, j, to empirical signaling density, q, is

    I_q(q) = inf { I(p, j) : Σ_{i∈S} p_i = q, Σ_i p_i = 1, Σ_j j_ij = 0 }.

We can find the infimum by minimizing the corresponding Lagrangian, where α, β, γ_i are Lagrange multipliers and μ_i are Karush-Kuhn-Tucker multipliers. As we have an optimization problem for each possible q, there will be different values of the Lagrange/KKT multipliers for each q as well. The contraction is then determined by the stationarity conditions, where e_S is a vector of ones for states in S and zero elsewhere. We assume that at q = q^π, we have p_i = π_i and j_ij = j^π_ij = π_i Q_ij − π_j Q_ji (see eq. (31)). We also assume that the solution lies in the interior of the allowed region where p_i > 0 and μ_i = 0 (for an ergodic process, all π_i are nonzero, and for infinitesimal (q − q^π) the same will be true of p_i). From the series expansion of I_q(q) about q = q^π and eq. (32) we can see that we only need the expansion of the optimal p, j to first order in (q − q^π), whose coefficients we denote by p′, j′. Then we can expand eq. (53) to first order, where, similarly, α′, β′, and γ′ are the first-order coefficients of the Lagrange multipliers. The constraints on p and j (pe = 1, pe_S = q, je = 0) imply that

    p′e = 0,  p′e_S = 1,  j′e = 0.    (56)

We can solve these equations with some tools from §A. First, we can solve for j′_ij − j^{p′}_ij in the second equation of (55) and insert the result into the first equation of (55). The γ′_i term supplies the missing j = i term from the sum, so we can rewrite the second part of eq. (57) compactly. If we premultiply by π, we find that α′ = −q^π β′. If we premultiply by the Drazin pseudoinverse, Q^D (see eq. (87), §A.II), we find that (I − eπ)γ′ = β′ Q^D e_S. Looking at eq. (55), we only care about differences of the γ′_i, so we can shift γ′_i by an arbitrary constant and choose to set πγ′ = 0. Then eq. (59) gives

    γ′ = β′ Q^D e_S,    (59)

whose components can be expressed through mean first-passage times, where Π_ij = π_i δ_ij and T_ij is the mean first-passage time from state i to j (see eq. (92), §A.III). Now we go back to the first part of eq. (57) and sum over j, or, using the natural definition of the adjoint (see eq. (96), §A.V), rewrite it for the time-reversed process. Substituting in eq. (59) and postmultiplying by Q^D gives eq. (60), where we defined γ̃′ = β′ Q^{D†} e_S, i.e. the quantity γ′ but computed for the time-reversed process. We can then determine the Lagrange multiplier β′ using the normalization constraints, eq. (56). Now we could determine j′ using the first part of eq. (57), although we do not actually need this quantity. Instead, we note that eq. (54) depends only on j′_ij − j^{p′}_ij. By eq. (57), this can be rewritten in terms of the φ^π_ij and γ′_i. We can then substitute eqs. (59) and (61) into eq. (54) to find eq. (62). In going from the first to the second line, we made use of the fact that Σ_i φ^π_ij = Σ_j φ^π_ij = 0 when we include the diagonal terms. The variance in the signaling density q is given by 1/(T I″(q^π)) [4], where T is the total observation time, so from eq. (61) we obtain the leading-order variance of q.
We can rewrite this in terms of set-to-point mean first-passage times

    T_{X j} ≡ Σ_{i∈X} P( x(t) = i | x(t) ∈ X ) T_ij,

where each term is weighted by the conditional probability of being in state i conditional on being in the set X, P(x(t) = i | x(t) ∈ X), for any nonspecific time t. Then eq. (63) reads as eq. (64), where we used eq. (93), §A.III, which implies that Σ_{j∈N} T_{X j} π_j + Σ_{j∈S} T_{X j} π_j = η, a constant independent of the initial set X. This expression simplifies dramatically when there is only one nonsignaling state, so that the sum collapses to a single term. We can interpret this result physically if we rewrite it in terms of two timescales. Here T_hold is the holding time, the mean time spent in bound states during one bound interval. Also, when there is only one nonsignaling state, the set-to-set mean first-passage time T_{SN} = T_{S0}, so T_unbind is the mean time until the next unbinding event given that the receptor is currently bound.

Note that the quantity T_unbind is not the same as T_hold. In the case of T_hold, we would condition on the receptor having entered the bound state at the particular time, t_0, from which we measure the holding time. The states would then be weighted by P(x(t_0) = i | bound at t_0), the probability that the binding transition was to state i. In eq. (64), by using the steady-state distribution we effectively average over the length of time since the last binding event, whereas if we were to calculate the holding time we would condition on it being zero. It is always the case that q^π T = N T_hold, and the variance can therefore be expressed in terms of N, T_hold, and T_unbind. When looking at the definitions of T_unbind and T_hold, one might think that T_hold ≥ T_unbind. This is not the case, due to the difference in the probability distribution of the initial state. We will look at an illustrative example in §V.D.

C. Exact error for the simple observer

To find the fractional error of ĉ, we note that at the minimum of the large deviation rate function the fluctuations in ĉ are those of q rescaled by the Jacobian. With only one nonsignaling state, we can use eq. (47) for the Jacobian between c and q. Thus we obtain eq. (66); this is eq. (10) in the main text. In the case of a two-state process (or one that is lumpable to a two-state process, see [8]), T_unbind and T_hold have the same distribution. When the holding time has an exponential distribution, the time until the next unbinding is independent of the time since the last binding. For such receptors, eq. (66) reduces to the Berg-Purcell result [9]. In general, we expect the fractional error to grow with the mixing time of the receptor, as the effective number of independent observations of the receptor scales as T/T_mix due to autocorrelation. We would expect that, in most cases, a long unbinding time implies a long mixing time.

When there is more than one nonsignaling state, using eq. (63) and the Jacobian from eq. (45), the long-time limit of the fractional error is given by eq. (70), where N = R^π T and φ^{N S}_{ij} equals φ^π_ij for i ∈ N, j ∈ S and zero otherwise. Given the explicit formulae for the mean first-passage times in eqs. (87) and (91), the expression in eq. (70) can immediately be computed numerically. This is the formula that we used in numerical optimization for fig. 2 in the main text. We can validate eq. (70) with Monte Carlo simulations, as shown in fig. 3.
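To illustrate the kind of Monte Carlo check referred to here, the sketch below simulates a receptor trajectory with the Gillespie algorithm, measures the empirical signaling density q, and inverts q^π(·) to get the simple observer's estimate ĉ. The network and all rates are hypothetical.

import numpy as np
from scipy.linalg import null_space
from scipy.optimize import brentq

k01, S = 2.0, [1, 2]   # hypothetical binding constant; signaling states

def generator(c):
    Q = np.array([[0.0, c * k01, 0.0],
                  [0.5, 0.0, 1.0],
                  [1.0, 0.5, 0.0]])
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def q_pi(c):
    pi = null_space(generator(c).T)[:, 0]
    return pi[S].sum() / pi.sum()

def simulate_q(c, T, rng):
    """Gillespie simulation; returns fraction of time spent signaling."""
    Q, x, t, t_sig = generator(c), 0, 0.0, 0.0
    while t < T:
        lam = -Q[x, x]
        dt = min(rng.exponential(1.0 / lam), T - t)
        if x in S:
            t_sig += dt
        t += dt
        if t < T:
            w = np.clip(Q[x], 0.0, None)        # jump weights Q_xj
            x = rng.choice(3, p=w / w.sum())
    return t_sig / T

rng = np.random.default_rng(0)
q_hat = simulate_q(1.0, T=5000.0, rng=rng)
c_hat = brentq(lambda c: q_pi(c) - q_hat, 1e-6, 1e6)   # simple observer: q^pi(c_hat) = q_hat
print(q_hat, c_hat)

Repeating this over many independent trajectories yields an empirical distribution of ĉ whose variance can be compared against eq. (70).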
D. Exact first-passage times and error in a uniform ring receptor

In this section we apply eq. (70), the fractional error for a general receptor, to the case of a uniform ring receptor. We consider receptors of the type depicted in fig. 4A, but with only one nonsignaling state labeled as state 0. The transition rates are uniform around the ring, where the indices are to be interpreted modulo n, the total number of states. It will be convenient to parameterize these models with the energy consumed in one full circuit of the ring (in units of k_B T), which we denote by σ.

We can determine the mean first-passage times to the nonsignaling state using the recursion relation eq. (91). Furthermore, the conditional probabilities in eq. (67) follow from the steady state of the ring. If we substitute these equations into eq. (67), we find T_hold and T_unbind for the ring. Substituting these expressions into eq. (69), we find the fractional error, eq. (74). As σ → ±∞, this expression asymptotes to n/(n − 1). As σ → 0, it becomes 2 + (n − 3)(n − 2)/(3(n − 1)). With the same parametrization, the energy consumption per binding is given by a corresponding closed-form expression.

We can write eq. (74) explicitly using the inverse function of x tanh x. First, define a function Ω(x) such that Ω(x) tanh Ω(x) = x and Ω(x tanh x) = x for all x ≥ 0. Note that coth Ω(x) = Ω(x)/x and

    coth( nΩ(x) ) = [ (Ω(x) + x)ⁿ + (Ω(x) − x)ⁿ ] / [ (Ω(x) + x)ⁿ − (Ω(x) − x)ⁿ ].

Then we obtain eq. (76). In the limits of small and large energy consumption, eq. (76) reduces to the asymptotic expressions quoted above. In fig. 4B we have verified eq. (76) with Monte Carlo simulations.

Looking at eq. (73) we see that for large σ, T_hold > T_unbind. For small σ this is reversed, T_hold < T_unbind. We can understand how this happens by looking at the mean first-passage times, as in fig. 5. In each case, T_unbind is the arithmetic mean of the first-passage times in fig. 5A. When σ is large, the ring is approximately unidirectional. The probability distribution of the state immediately after binding is concentrated at state 1. This is where the first-passage time T_{i0} is largest, as it must go through all of the other states before reaching 0. Therefore T_hold is above average and T_hold > T_unbind. When σ is small, the ring is symmetric between both directions. The probability distribution of the state immediately after binding is equally concentrated in states 1 and n − 1. This is where the first-passage time T_{i0} is smallest, as it has a 50% chance of going straight back to 0. Therefore T_hold is below average and T_hold < T_unbind.
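This asymmetry between T_hold and T_unbind is easy to reproduce numerically. The sketch below builds a uniform ring with forward/backward rates e^{±σ/2n} (one natural, but here assumed, parametrization of the circuit affinity σ), computes all first-passage times to state 0 by solving a linear system on the non-target states, and weights them two ways: by the post-binding distribution (T_hold) and by the steady state restricted to bound states (T_unbind).

import numpy as np
from scipy.linalg import null_space

def ring_generator(n, sigma):
    # Uniform ring: forward rate e^{+sigma/2n}, backward e^{-sigma/2n} (assumed form).
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, (i + 1) % n] = np.exp(sigma / (2 * n))
        Q[i, (i - 1) % n] = np.exp(-sigma / (2 * n))
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def hold_vs_unbind(n, sigma):
    Q = ring_generator(n, sigma)
    # First-passage times to state 0: (Q restricted to states 1..n-1) t = -1.
    t = np.linalg.solve(Q[1:, 1:], -np.ones(n - 1))    # t[i-1] = T_{i,0}
    pi = null_space(Q.T)[:, 0]; pi /= pi.sum()
    w_entry = Q[0, 1:] / -Q[0, 0]          # P(binding lands in state i)
    w_steady = pi[1:] / pi[1:].sum()       # P(state i | currently bound)
    return w_entry @ t, w_steady @ t       # (T_hold, T_unbind)

print(hold_vs_unbind(6, 10.0))   # strongly driven ring: T_hold > T_unbind
print(hold_vs_unbind(6, 0.01))   # near equilibrium:     T_hold < T_unbind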
VI. NUMERICAL METHODS

Here we explain in detail how we obtained the results of Figure 2 in the main paper, which contains numerical results falling into two categories: results for optimized networks, and results of directly simulating randomly generated networks.

FIG. 6. Numerically optimized and uniform ring networks. Small solid points represent the minimal error achieved by n-state fully connected (fc) receptors, obtained by numerically minimizing eq. (77) with respect to all transition rates, subject to an energy constraint. Open circles show that the same minimal error is achieved by similarly optimizing n-state receptors restricted to ring topologies. Thin solid lines represent analytic uniform ring solutions for varying n, eq. (74).

A. Numerical optimization of receptors

In order to validate our theoretical bounds (eq. (5) and eq. (9) in the main paper), we numerically generate networks that minimize the exact formula (70) subject to an energy consumption constraint. The optimization problem is then stated in eq. (77). Receptors of a given number of states and division between signaling and nonsignaling states were optimized using the MATLAB built-in nonlinear optimizing function fmincon [10]. The interior-point algorithm was used to minimize the objective function in (77), starting from randomly initialized transition rates in a complete graph. At each energy consumption constraint, the data point presented in fig. 2 in the main text represents the network found giving the minimum error out of 200 optimizations with different random initializations.

Lumpability of optimized networks

Lumpability [8] is a property of certain continuous-time Markov chains which indicates that the size of the state space can be reduced by 'lumping' together states according to a certain partitioning. A lumped state, which represents some subset of original states, will obey the same exponentially distributed holding time as the original subset. Let a continuous-time Markov chain with states V have a partitioning of states into n disjoint subsets {A_1, A_2, ..., A_n}. The Markov chain is lumpable with respect to the partitioning if the transition rates Q_ij from state i to state j obey

    Σ_{j∈A_ℓ} Q_ij = Σ_{j∈A_ℓ} Q_{i′j}  for all i, i′ ∈ A_m,

for any pair of subsets in the partitioning (values of ℓ and m). Under this condition, due to the memoryless nature of the exponential distribution, the probability of transition out of a subset A_m is independent of the microscopic details of which state in A_m the system occupies. The lumped chain formed by the partitioning is then also a Markov chain, with transition rate between A_m and A_ℓ given by Σ_{j∈A_ℓ} Q_ij for i ∈ A_m.

It is potentially interesting to consider whether the optimal networks for concentration estimation are lumpable to two-state processes, along the partitioning into nonsignaling and signaling states. To measure the lumpability, we calculate the variance over the mean squared (uncertainty, or CV) of the unbinding rates Q_{i0}, where 0 indicates the one nonsignaling state. If the process is perfectly lumpable, this uncertainty will be 0. For an n-state unidirectional cyclic process with uniform transition rates, the CV will be n − 2. Figure 7 shows the CV of unbinding rates for n-state Markov processes found to be optimal for concentration estimation, as a function of energy dissipation per binding event. All processes are approximately lumpable to a two-state system until a critical dissipation level, where they separate, eventually saturating at n − 2 as the optimal processes are all uniform rings.

B. Numerical optimization of receptors with > 1 nonsignaling state

The optimization problem described in eq. (77) applies to receptors with arbitrary partitioning between signaling and nonsignaling states. However, we find that for all models examined the best-performing partitions for n-state receptors are those with 1 nonsignaling state. This is shown in fig. 8 for all partitionings of n = 5 node receptors.
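The same optimization can be sketched outside MATLAB. The snippet below is a minimal Python analogue, not the authors' code: it parameterizes the off-diagonal rates in log space, computes the simple observer's error from the exact asymptotic variance of q via a Poisson-equation/Drazin route (equivalent in spirit to eq. (70)) together with a finite-difference gain, and hands the constrained problem to scipy.optimize.minimize with an energy-per-binding constraint. The network size, constraint level, and number of restarts are illustrative.

import numpy as np
from scipy.linalg import null_space
from scipy.optimize import NonlinearConstraint, minimize

n, S, c = 4, [1, 2, 3], 1.0          # illustrative: state 0 nonsignaling
off = [(i, j) for i in range(n) for j in range(n) if i != j]
B = [(0, j) for j in S]              # binding edges, rates proportional to c

def build_Q(theta, conc):
    Q = np.zeros((n, n))
    for k, (i, j) in enumerate(off):
        Q[i, j] = np.exp(theta[k]) * (conc if (i, j) in B else 1.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def stationary(Q):
    pi = null_space(Q.T)[:, 0]
    return pi / pi.sum()

def q_pi(theta, conc):
    return stationary(build_Q(theta, conc))[S].sum()

def frac_err(theta):
    """T * <dc^2>/c^2 = avar(q) / (c dq/dc)^2, with the asymptotic variance
    avar(q) = -2 e_S' Pi Q^D e_S computed via the Drazin pseudoinverse."""
    Q = build_Q(theta, c)
    pi = stationary(Q)
    eS = np.zeros(n); eS[S] = 1.0
    QD = np.linalg.inv(Q - np.outer(np.ones(n), pi)) + np.outer(np.ones(n), pi)
    avar = -2.0 * (pi * eS) @ QD @ eS
    h = 1e-6
    gain = c * (q_pi(theta, c + h) - q_pi(theta, c - h)) / (2 * h)
    return avar / gain**2

def energy_per_binding(theta):
    Q = build_Q(theta, c)
    pi = stationary(Q)
    phi = pi[:, None] * Q
    epr = sum((phi[i, j] - phi[j, i]) * np.log(phi[i, j] / phi[j, i])
              for (i, j) in off if i < j)          # entropy production rate
    return epr / sum(phi[i, j] for (i, j) in B)    # dissipation per binding event

best = min((minimize(frac_err, np.random.default_rng(s).normal(0.0, 1.0, len(off)),
                     method="trust-constr",
                     constraints=[NonlinearConstraint(energy_per_binding, 0.0, 5.0)])
            for s in range(10)), key=lambda r: r.fun)
print(best.fun)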
VII. EXTENSION TO NONLINEAR DEPENDENCE

The assumption of binding transition rates that are linear in concentration may be expected biophysically for processes involving a single ligand binding. For processes that require binding at multiple sites we can expect power-law dependences. For example, the SNARE complex in neuronal synapses requires 5 calcium ions to bind for vesicle docking, the rate of which depends on calcium concentration to a power between 4 and 5 [11, 12]. If all the binding transitions follow the same power law, Q_ij ∝ c^k, that would simply rescale eq. (5) of the main text and eq. (47) by constant factors. Equivalently, we can say that the uncertainty of the quantity c^k obeys all of the equations and inequalities in this work.

If the binding transitions are related to the concentration through an arbitrary function, f(c), we will find using eq. (4) a different dependence of the Fisher information on the concentration. The resulting uncertainty may then depend on the value of c being estimated. However, all of our results would apply to the uncertainty in f(c). If all of the binding transitions are potentially different functions of c, then we will find unequal weights, W_ij, in the Fisher information associated with those transitions (in eq. (79), W_ij = k):

    I(c) = (T/c²) Σ_{(i→j)∈B⃗} W_ij² π_i Q_ij,  with  W_ij ≡ d log Q_ij / d log c.

In this case, observations of the identity of the binding transition used can be informative of the concentration signal. Furthermore, in this case the SO would be affected only through the gain, c dq^π/dc. This would modify the denominator of our expression for the true uncertainty, eq. (70), as well as our lower bound, eq. (9) of the main text. Cases where all of the binding transitions are potentially different functions are particularly important when generalizing our results to sensing quantities other than concentration.

Appendix A PRIMER ON CONTINUOUS-TIME MARKOV PROCESSES

In this appendix we provide a quick summary of those aspects of the theory of Markov processes in continuous time that are used in this supplement. In the following sections we describe the transition rate matrix, its Drazin pseudoinverse, its relation to mean first-passage times, the relation between first-passage times and perturbations of the steady-state distribution, and the natural definition of the inner product and adjoint for vectors on the Markov chain state-space.

I Master equation and the transition rate matrix

We consider a Markov process on a discrete set of n states indexed by i = 1, ..., n. We describe this process by a set of transition rates Q_ij denoting the probability per unit time that the system jumps to state j given that it is currently in state i. The probability that the system is in state i at time t, p_i(t), evolves according to the master equation:

    dp_j/dt = Σ_{i≠j} [ p_i(t) Q_ij − p_j(t) Q_ji ].

The master equation can be written in matrix form, dp/dt = pQ, if we let p_i be the components of a row vector p(t) and we define the diagonal elements of the transition rate matrix as

    Q_ii = −λ_i ≡ −Σ_{j≠i} Q_ij.    (83)

The quantity λ_i is the probability per unit time that the system jumps to any other state given that it is currently in state i. The holding time, or amount of time spent in any individual visit to state i, follows an exponential distribution with mean 1/λ_i. The probability that the next state visited by the Markov process is state j, given that it is currently in state i, is given by P_ij = Q_ij/λ_i. The definition of the diagonal elements in eq. (83) implies that the sum of matrix elements across any row of Q is zero. If we define e to be a column vector of ones, we can express the row sums as Qe = 0. The steady-state distribution π is the solution of dp/dt = 0, and thus obeys:

    πQ = 0,  πe = 1.    (84)

For an ergodic process π is uniquely defined by eq. (84) and is strictly positive in every state. For future use, it will be helpful to define diagonal matrices, Λ and Π, with

    Λ_ii = λ_i,  Π_ii = π_i.    (85)

Then the matrix of next-state probabilities, P_ij, can be written as:

    P = I + Λ⁻¹ Q.    (86)

II Drazin pseudoinverse

The transition rate matrix Q of an ergodic Markov process has a one-dimensional null space (because πQ and Qe are both zero). Therefore the rate matrix is not invertible. However, there are several ways of defining a pseudoinverse [13]. The most useful one for our purposes is the Drazin pseudoinverse of Q, defined by

    Q^D = ( Q − eπ/τ )⁻¹ + τ eπ,    (87)

where τ is an arbitrary timescale. The Drazin pseudoinverse, Q^D, has the same left and right eigenvectors and null spaces as Q, but with nonzero eigenvalues inverted. In particular, Q^D e = 0 and πQ^D = 0.
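Equation (87) translates directly into a few lines of code. The sketch below computes Q^D for an arbitrary ergodic generator and checks the stated properties; the example matrix is hypothetical.

import numpy as np
from scipy.linalg import null_space

def drazin(Q, tau=1.0):
    """Drazin pseudoinverse of an ergodic generator, eq. (87):
    Q^D = (Q - e pi / tau)^{-1} + tau e pi."""
    n = Q.shape[0]
    pi = null_space(Q.T)[:, 0]
    pi /= pi.sum()
    e_pi = np.outer(np.ones(n), pi)        # rank-one matrix e pi
    return np.linalg.inv(Q - e_pi / tau) + tau * e_pi

Q = np.array([[-2.0, 1.5, 0.5],
              [0.5, -1.5, 1.0],
              [1.0, 0.5, -1.5]])
QD = drazin(Q)
pi = null_space(Q.T)[:, 0]; pi /= pi.sum()
assert np.allclose(QD @ np.ones(3), 0)      # Q^D e = 0
assert np.allclose(pi @ QD, 0)              # pi Q^D = 0
assert np.allclose(Q @ QD @ Q, Q)           # pseudoinverse property
assert np.allclose(drazin(Q, tau=7.0), QD)  # independent of the timescale tau

The last assertion makes the arbitrariness of τ in eq. (87) explicit.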
III Mean first-passage times

We define the mean first-passage time, T_ij, as the mean time it takes the process to reach state j for the first time, starting from state i. The diagonal elements, T_ii, are defined to be the mean time it takes the process to leave and then return to state i. It will be convenient to additively decompose the mean first-passage-time matrix into its diagonal and off-diagonal parts: T = T_dg + T̄. To compute the mean first-passage times, consider the first time the process leaves state i. On average, this will take time 1/λ_i. With probability P_ij, it will go directly to j, so the conditional mean time would be 1/λ_i. On the other hand, if it goes to some other state, k, with probability P_ik, the conditional mean time would be 1/λ_i + T_kj. Combining these, we get the recursion relation

    T = Λ⁻¹ e eᵀ + P T̄,    (88)

where e eᵀ is the matrix of all ones. Remembering eq. (86) (that P = I + Λ⁻¹Q), we can write eq. (88) as

    Q T̄ = Λ T_dg − e eᵀ.    (89)

All of these observables p^T_i, φ^T_ij, and j^T_ij can be measured along a single trajectory of the Markov process, and will approach stationary values as the trajectory duration T → ∞. The empirical density will converge to the stationary distribution, p^T_i → π_i; the empirical flux will converge to the steady-state flux, φ^T_ij → φ^π_ij, where φ^π_ij = π_i Q_ij; and the empirical current will converge to the steady-state current, j^T_ij → j^π_ij, where j^π_ij = π_i Q_ij − π_j Q_ji.

We would now like to find the joint distribution P(p^T, φ^T) over empirical densities p^T and empirical fluxes φ^T, from which we can find the joint distribution P(p^T, j^T) over empirical densities p^T and empirical currents j^T. This will enable us to study the fluctuations and large deviations of these observables around their most likely values of π, φ^π, and j^π in the large-T limit. In this limit, these joint distributions are well characterized by large deviation rate functions I(p^T, φ^T) and I(p^T, j^T), defined implicitly by the relations

    P(p^T, φ^T) ∼ e^{−T I(p^T, φ^T)},  P(p^T, j^T) ∼ e^{−T I(p^T, j^T)}.    (100)

For an ergodic process Q, I(p^T, φ^T) is expected to be a non-negative convex function of p^T and φ^T that achieves a global minimum value of 0 at the unique point p^T_i = π_i and φ^T_ij = φ^π_ij = π_i Q_ij [14]. Similarly, I(p^T, j^T) is expected to be a non-negative convex function of p^T and j^T that achieves a global minimum value of 0 at the unique point p^T_i = π_i and j^T_ij = j^π_ij = π_i Q_ij − π_j Q_ji [5, 15]. Thus eq. (100) implies that the probability of any large deviation in the empirical density p^T_i, empirical flux φ^T_ij, and empirical current j^T_ij from their respective typical stationary values π_i, π_i Q_ij, and π_i Q_ij − π_j Q_ji is exponentially suppressed in the duration T of the trajectory. We present these rate functions in eqs. (110) and (112) below. Readers who are simply interested in the form of this function, but not the ideas underlying its derivation, can safely skip the remainder of this appendix.

To derive the rate functions in eq. (100), we follow the approach of [14]. The essential idea is to use a common tilting method from large deviation theory. This method involves comparing the probability of a particular path under our given true Markov process with the probability of that same path under a perturbed Markov process in which transition rates are tilted to make a particular large deviation more likely.
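The tilting comparison can be made concrete numerically. The sketch below simulates one trajectory of a hypothetical generator, extracts the empirical summary statistics (p^T, φ^T), builds the tilted generator Q̃_ij = φ^T_ij/p^T_i, and evaluates the log-probability that each process assigns to the trajectory via the density/flux representation derived below (eq. (106)).

import numpy as np

rng = np.random.default_rng(1)
Q = np.array([[-2.0, 1.5, 0.5],
              [0.5, -1.5, 1.0],
              [1.0, 0.5, -1.5]])   # hypothetical generator
n, T = 3, 2000.0

# Simulate one trajectory; accumulate empirical density p and flux phi.
x, t = 0, 0.0
p, phi = np.zeros(n), np.zeros((n, n))
while t < T:
    lam = -Q[x, x]
    dt = min(rng.exponential(1.0 / lam), T - t)
    p[x] += dt; t += dt
    if t < T:
        w = np.clip(Q[x], 0.0, None)
        y = rng.choice(n, p=w / w.sum())
        phi[x, y] += 1.0
        x = y
p /= T; phi /= T

def log_path_prob(pT, phiT, gen):
    """(1/T) log P(x(t) | gen), up to boundary terms:
    sum over i != j of phi_ij log gen_ij - p_i gen_ij  (cf. eq. (106))."""
    s = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                s -= pT[i] * gen[i, j]
                if phiT[i, j] > 0:
                    s += phiT[i, j] * np.log(gen[i, j])
    return s

Qt = phi / p[:, None]     # tilted rates: the observed trajectory is typical under Qt
print(log_path_prob(p, phi, Qt) - log_path_prob(p, phi, Q))
# Nonnegative: the per-unit-time log-likelihood ratio, which is exactly the
# rate-function cost of the observed (p, phi) under Q.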
In particular, we compare the probability of the observed trajectory x(t) under the given Markov process with transition rates Q_ij, with the probability of the same trajectory under a fictitious process with tilted transition rates Q̃_ij. This fictitious process is the one that would produce the observed trajectory x(t) as a 'typical' realization. That is, it is a Markov process with stationary distribution π̃_i = p^T_i and transition rates Q̃_ij = φ^T_ij / p^T_i. In essence, empirical densities p^T_i and fluxes φ^T_ij that might be rarely observed under Q_ij are typical under Q̃_ij.

A trajectory x(t) that visits the sequence of states x_0, x_1, x_2, ..., spending time τ_r in state x_r, has probability density

    P(x(t)) = [ ∏_r P(x_{r+1} | x_r) P(τ_r | x_r) ] P(x_0),    (101)

with

    P(x_{r+1} | x_r) = Q_{x_r x_{r+1}} / λ_{x_r},    (102)

and

    P(τ_r | x_r) = λ_{x_r} e^{−λ_{x_r} τ_r},    (103)

where λ_{x_r} = Σ_{j≠x_r} Q_{x_r j} (see eq. (85) in §A.I). Equation (102) is equivalent to eq. (86), which follows from the definition of the transition rates of a continuous-time Markov process, and eq. (103) is the exponential distribution with parameter λ_{x_r} describing the holding time in each state. For large T the boundary effects at t = 0, T will be negligible. We will neglect those factors in eq. (101) from here on. The joint probability for the trajectory x(t) is then given by

    log P(x(t)) = −Σ_r λ_{x_r} τ_r + Σ_r log Q_{x_r x_{r+1}}.    (104)

We can now split up the sums over states and transitions indexed by r into two contributions. For the first term in the sum in eq. (104), we will perform an inner sum over all instances in the trajectory in which the Markov process is in state i, which corresponds to summing over r such that x_r = i, and then we will perform an outer sum over all states i of the Markov process. Similarly, we will break the second term in eq. (104) into an inner sum over all of the transitions from state i to state j in a trajectory (i.e. transitions in which x_r = i, x_{r+1} = j), and then an outer sum over all pairs of states i and j in the Markov process. These groupings of sums yield

    log P(x(t)) = −Σ_i λ_i Σ_{r: x_r = i} τ_r + Σ_{i≠j} log Q_ij Σ_{r: x_r = i, x_{r+1} = j} 1.    (105)

We can now simplify this expression, recognizing that the sum of the occupancy times of state i during this trajectory is Σ_{r: x_r = i} τ_r = T p^T_i, and the total number of transitions from state i to j is Σ_{r: x_r = i, x_{r+1} = j} 1 = T φ^T_ij. This yields

    log P(x(t)) = T Σ_{i≠j} ( φ^T_ij log Q_ij − p^T_i Q_ij ).    (106)

Thus the probability density assigned to any individual trajectory x(t) of duration T depends on that trajectory only through two types of observables: the empirical densities p^T_i and empirical fluxes φ^T_ij. This dramatic simplification of the distribution over trajectories singles out empirical densities and fluxes as uniquely important order parameters in Markovian non-equilibrium processes.

III Rate function for densities and fluxes

Next, to go from a probability distribution over trajectories x(t) to a joint distribution over empirical densities and fluxes (p^T, φ^T), we would need to integrate over all possible trajectories x(t) that produce any given empirical density and flux pair (p^T, φ^T). This direct integration would result in adding a difficult-to-compute entropic contribution to eq. (106). Instead, we compute the ratio of probabilities for the same path under two different processes, Q and the fictitious process with tilted rates Q̃:

    P(x(t) | Q) / P(x(t) | Q̃) = exp[ T Σ_{i≠j} ( φ^T_ij log( Q_ij / Q̃_ij ) − p^T_i ( Q_ij − Q̃_ij ) ) ].

Noting that this ratio depends on the trajectory x(t) only through the observables (p^T, φ^T), we can find a computable relation between the distribution P(p^T, φ^T) of these observables under the process Q, and the distribution P̃(p^T, φ^T) of these observables under the tilted process Q̃, where x(t) → (p^T, φ^T) indicates integration over the set of trajectories that lead to a given empirical density and flux.
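Combining eq. (106) with this ratio gives the explicit rate function that the appendix builds toward. The sketch below evaluates the standard closed form, I(p, φ) = Σ_{i≠j} [ φ_ij log( φ_ij / (p_i Q_ij) ) − φ_ij + p_i Q_ij ], which is the form the tilting argument yields (taken here as an assumption, since eqs. (110) and (112) themselves are not reproduced above), and checks that it vanishes at the steady-state values (π, φ^π). The example generator is hypothetical.

import numpy as np
from scipy.linalg import null_space

def rate_function(p, phi, Q):
    """Joint rate function for empirical density p and flux phi."""
    n, val = Q.shape[0], 0.0
    for i in range(n):
        for j in range(n):
            if i == j or Q[i, j] <= 0:
                continue
            mean_flux = p[i] * Q[i, j]
            val += mean_flux - phi[i, j]
            if phi[i, j] > 0:
                val += phi[i, j] * np.log(phi[i, j] / mean_flux)
    return val

Q = np.array([[-2.0, 1.5, 0.5],
              [0.5, -1.5, 1.0],
              [1.0, 0.5, -1.5]])
pi = null_space(Q.T)[:, 0]; pi /= pi.sum()
phi_pi = pi[:, None] * Q                       # steady-state fluxes (off-diagonal)
print(rate_function(pi, phi_pi, Q))            # 0: typical values cost nothing
print(rate_function(pi, 1.2 * phi_pi, Q))      # > 0: inflated fluxes are suppressed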
The Study on Thematic Progression Patterns in Economic Journalistic Texts from China Daily

Thematic Structure theory is one of the most important methods for analyzing texts. Based on the research of thematic structure, Danes proposed the concept of the thematic progression pattern. Linguists at home and abroad have used thematic progression patterns to study various types of discourse. However, research on the themes and thematic progression patterns of economic journalistic texts is rare. Based on this, this paper selects three articles of economic news from China Daily and uses a quantitative method. The thematic progression patterns (TP patterns) used in economic journalistic texts and the frequency of occurrence of each TP pattern were analyzed in this paper. The aim of this paper is to help readers understand economic journalistic texts and grasp the author's intention and discourse mechanism.

I. INTRODUCTION

Text coherence is realized by both themes and thematic progression patterns, and the study of texts is largely concerned with the study of both. It is therefore important to study texts under the direction of themes and thematic progression patterns. By using TP patterns, we can analyze the development of a text clearly; they can also help us to find the dynamic distribution of themes and rhemes in a text [1]. It is successive themes and rhemes that push the whole text forward logically [2]. The proper application of TP patterns helps to make the whole text more complete and fluent. Different types of texts tend to use different TP patterns to achieve their specific purposes. As a distinct text type, economic journalistic texts should have their own TP patterns that achieve their special effects. This paper selects economic journalistic texts from China Daily as the research objects and adopts TP pattern theory to analyze their inner structure.

II. IDENTIFICATION OF THEME

According to Halliday, "the element at the starting point in the clause is regarded as the Theme and the rest is Rheme" [3]. A coherent text largely depends on the clauses linked together within it; such links among clauses are realized by TP patterns in organizing the text. In fact, there are many different opinions about the divisions of theme. This paper uses Halliday's identification of Theme. In terms of metafunction, he divides the theme into three types: Ideational Theme, Interpersonal Theme, and Textual Theme. In terms of complexity, he divides the theme into three types: Simple Theme, Multiple Theme, and Clausal Theme. In terms of markedness, the theme is divided into two types: Marked Theme and Unmarked Theme.

III. THEMATIC PROGRESSION PATTERNS

As for thematic progression patterns, many TP patterns have been brought up by many scholars. Because some of these patterns are the same and no single TP pattern theory can cover all the patterns occurring in one text, this paper adopts eight thematic progression patterns to analyze economic journalistic texts: Huang Yan's seven patterns plus Danes' Derived Pattern [4].

A. Simple Linear Pattern

This pattern is the case where the rheme of the preceding clause becomes the theme of the following clause (T = Theme, R = Rheme):

T1 → R1; T2 (= R1) → R2.

B. Constant Theme Pattern

The themes in different clauses are semantically related while the rhemes carry different information.

C. Constant Rheme Pattern

The themes of the clauses differ while the rhemes are identical or semantically related.
D. Alternative Pattern

In this pattern, the previous themes become the following rhemes.

E. Juxtaposed Pattern

This pattern refers to the case where the themes of the odd clauses are identical with each other and, at the same time, the themes of the even clauses are identical with each other in a text (note: n is odd; m is even).

F. Irregular Pattern

This pattern refers to the case where there is no apparent relation between themes and rhemes on the surface.

G. Linear Constant Theme Pattern

When the rheme of the preceding clause becomes the theme of the second clause, and the theme of the second clause serves as the theme of the following clauses, it is the Linear Constant Theme Pattern. Namely, the first two clauses are in the linear pattern while the following clauses have identical themes.

Because Huang Yan's theory cannot cover all the patterns that occur in the texts, Danes' Derived Pattern is also adopted so that the data can be analyzed from a comprehensive perspective.

H. Derived Pattern

This pattern has two types: Derived Rheme and Derived Theme. Derived Rheme: in adjoining clauses, when the following rhemes are derived from the preceding rheme in terms of semantic relations, or as an element of the preceding rheme, the pattern is called Derived Rheme. Derived Theme: similar to Derived Rheme, it refers to the case where themes are split from the theme of the previous clause.

The above patterns are employed in the analysis of the collected texts. The next part presents the specific analysis of themes and thematic progression patterns in economic journalistic texts.

IV. METHODOLOGY

This study mainly employs the quantitative research method. In light of this method, the paper selects three economic journalistic texts from China Daily. The statistics of the TP patterns are then presented in the form of tables. By analyzing the samples, the frequency of each pattern in the journalistic texts can be seen.

Example 1 Chinese firm to set up smartphone factory in Uganda

Kampala - The government of Uganda has endorsed a Chinese firm, Xinlan Group, to establish a smartphone factory in Uganda. Bemanya Twebaze, the chief executive officer of the Uganda Registration Services Bureau (URSB), said on Tuesday that a delegation from Xinlan Group is expected in Uganda next month to officially ink the 10 million deal. The development comes a few weeks after President Museveni directed managers of a telecommunications company to establish a cell phone factory in Uganda; the plan was hatched in July 2017, when a government delegation that visited China asked Chinese authorities to give Uganda a comprehensive cyber-security solution, including technical capacity to monitor and curb increasing social media misuse. "Once the project kicks off, a total of 5000 Ugandans are expected to get jobs and the project will also help enhance access to the internet in the country," Twebaze said. The Chinese company is expected to use local minerals such as coltan to manufacture the telephones. Xinlan is the overseas investment arm of the Amoi Group.

From the above analysis, we can present the results as Table I. As Table I shows, the Simple Linear Pattern takes first place (3 in 7, 42.8%), followed by the Constant Theme Pattern and the Irregular Pattern (2 in 7 each, 28.5%).
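A minimal sketch of the tallying behind Table I: the pattern labels below are hypothetical hand-coded assignments for the seven theme-rheme links in Example 1, and the script simply computes the frequency table.

from collections import Counter

# Hypothetical hand-coded TP-pattern labels for Example 1's seven clause links.
links = ["simple linear", "constant theme", "irregular", "simple linear",
         "constant theme", "simple linear", "irregular"]

counts = Counter(links)
for pattern, n in counts.most_common():
    print(f"{pattern}: {n} in {len(links)}, {n / len(links):.1%}")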
Example 2 Chinese app is beautifying the whole world

Wai Wai Oo, a student from Myanmar, whips out her smartphone and, facing its screen, takes a selfie. Within seconds, BeautyPlus, a Chinese self-editing app on her phone, beautifies the image: her face becomes slim, the dark circles under her eyes disappear, and patches of rouge appear on her lips. BeautyPlus is the flagship product of China's self-editing software leader Meitu Inc. It was developed specifically for users outside of China. More than 300 million overseas users use it. "Back in Myanmar, many of my friends use Meitu's products," said Wai Wai Oo. BeautyPlus has location-specific features. In Brazil, for example, a function can darken skin and whiten teeth in an image. "Our global strategy is to ensure that each of our overseas products is 'hyper-localized' to inspire our users in expressing their beauty," said Fox Lui, head of Meitu's international business. The different definitions of beauty in different countries inspired the company's designers to follow beauty trends and develop diverse products. In India, the company unveiled BeautyPlus ME, a lighter version of BeautyPlus, as many Indian users tend to use apps with smaller file sizes due to bandwidth issues and slow internet connectivity. This strategy has proved to be a success. Google Play named BeautyPlus ME the first trending app in India in 2016. "Each market has its own unique preferences when it comes to how local people want to express themselves through photos, videos and personal styles. The key to our success is to listen to our users and get to know their preferences," Fox said. The company has established a number of offices abroad. More importantly, more than 20 nationalities are represented on Meitu's global team to be more localized. With its hyper-localization strategy, Meitu has achieved a wide presence in the overseas market, with over 500 million users overall, it said. The company operates three beauty apps - BeautyPlus, BeautyPlus Me and Airbrush - and the virtual makeup app MakeupPlus in foreign markets. To attract more users and drive growth, Meitu cooperated with well-known local celebrities, such as Bollywood actress Shilpa Shetty in India. About future plans, Meitu said it will beef up its presence not only in Brazil and India but also in the United States, Europe, Japan and Russia.

The rest of each sentence is the Rheme, which has certain relationships with the Themes or Rhemes of the other sentences; the author then demonstrates the TP for the whole text, from which we can count the number of each TP pattern.

Example 3

The tourists spent 2 billion yuan ($314 million) during the week-long holiday ending Wednesday, up over 28 percent from last year, according to the regional tourism development commission Thursday. Rural tourism and winter sports were the top choices, according to the commission. Urumqi International Airport handled 408,000 passengers during the period, up 18 percent. A total of 2,994 flights departed from or arrived at the airport, up 8.5 percent. During the holiday, trains in Xinjiang carried nearly 910,000 passengers, including 727,100 within the region. Xinjiang saw robust growth of 32.4 percent in the number of tourists in 2017, receiving 107 million. Tourists spent over 182 billion yuan in the region, 30 percent more than in 2016. As tourism is seen as a barometer of stability and harmony, Xinjiang's tourism performance "shows again that Xinjiang is a good place," said Shohrat Zakir, chairman of the Xinjiang regional government.
T1 = Northwest China's Xinjiang Uygur autonomous region
T2 = The tourists
T3 = Rural tourism and winter sports
T4 = Urumqi International Airport
T5 = A total of 2,994 flights
T6 = During the holiday
T7 = Xinjiang
T8 = Tourists
T9 = As tourism
T10 = Xinjiang's tourism performance

The rest of each sentence is the Rheme, which has certain relationships with the Themes or Rhemes of the other sentences; the author then demonstrates the TP for the whole text, from which we can count the number of each TP pattern. As Table III shows, the Simple Linear Pattern takes first place (3 in 7, 42.8%); the Constant Theme Pattern is second (2 in 7, 28.5%); and the Alternative Pattern and the Irregular Pattern come last (1 in 7 each, 14.2%).

VI. CONCLUSION

This final section summarizes the research. The present study is an analysis of TP patterns in economic journalistic texts with the help of statistics. According to the previous studies and under the guidance of Halliday's Theme theory and Huang Yan's Thematic Progression theories, two questions are answered: What TP patterns are used in economic journalistic texts? What are the percentages of TP patterns in economic journalistic texts? To begin with, in light of Theme-Rheme theory, economic journalistic texts prefer to use simple themes and unmarked themes. On the basis of the analysis of themes, TP patterns are analyzed and the frequently used TP patterns are found: constant theme patterns, simple linear patterns, constant rheme patterns, alternative patterns, and irregular patterns. As for juxtaposed patterns and derived patterns, they are not found in economic journalistic texts.

This study is only tentative rather than exhaustive, so its limitations should be noted. The number of selected texts is limited because of space and time; this is a relatively small-scale piece of research, which may to some degree affect the results. The study is also conducted only on economic journalistic texts, so there is no comparison with other data. It is suggested that a comparative study be done in further research.

The first author conducted the study and wrote the whole paper; Xiaolan Lei guided and revised the paper. All authors approved the final version.

ACKNOWLEDGMENT

The author would like to express gratitude to all those who offered help during the writing of this paper, and owes a special debt of gratitude to all the professors in the Foreign Languages Institute, from whose devoted teaching and enlightening lectures the author has benefited a lot and been academically prepared for the paper. Finally, the author wants to express gratitude to the author's beloved parents, who have always helped the author out of difficulties and offered support without a word of complaint.
Altered Fecal Microbiome and Correlations of the Metabolome with Plasma Metabolites in Dairy Cows with Left Displaced Abomasum ABSTRACT Left displaced abomasum (LDA) in postpartum dairy cows contributes to significant economic losses. Dairy cows with LDA undergo excessive lipid mobilization and insulin resistance. Although gut dysbiosis is implicated, little is known about the role of the gut microbiota in the abnormal metabolic processes of LDA. To investigate the functional links among microbiota, metabolites, and disease phenotypes in LDA, we performed 16S rDNA gene amplicon sequencing and liquid chromatography-tandem mass spectrometry (LC-MS/MS) of fecal samples from cows with LDA (n = 10) and healthy cows (n = 10). Plasma marker profiling was synchronously analyzed. In the LDA event, gut microbiota composition and fecal metabolome were shifted in circulation with an amino acid pool deficit in dairy cows. Compared with the healthy cows, salicylic acid derived from microbiota catabolism was decreased in the LDA cows, which negatively correlated with Akkermansia, Prevotella, non-esterified fatty acid (NEFA), and β-hydroxybutyric acid (BHBA) levels. Conversely, fecal taurolithocholic acid levels were increased in cows with LDA. Based on integrated analysis with the plasma metabolome, eight genera and eight metabolites were associated with LDA. Of note, the increases in Akkermansia and Oscillospira abundances were negatively correlated with the decreases in 4-pyridoxic acid and cytidine levels, and positively correlated with the increases in NEFA and BHBA levels in amino acid deficit, indicating pyridoxal metabolism-associated gut dysbiosis and lipolysis. Changes in branched-chain amino acids implicated novel host-microbial metabolic pathways involving lipolysis and insulin resistance in cows with LDA. Overall, these results suggest an interplay between host and gut microbes contributing to LDA pathogenesis. IMPORTANCE LDA is a major contributor to economic losses in the dairy industry worldwide; however, the mechanisms associated with the metabolic changes in LDA remain unclear. Most previous studies have focused on the rumen microbiota in terms of understanding the contributors to the productivity and health of dairy cows; this study further sheds light on the relevance of the lower gut microbiota and its associated metabolites in mediating the development of LDA. This study is the first to characterize the correlation between gut microbes and metabolic phenotypes in dairy cows with LDA by leveraging multi-omics data, highlighting that the gut microbe may be involved in the regulation of lipolysis and insulin resistance by modulating the amino acid composition. Moreover, this study provides new markers for further research to understand the pathogenesis of the disease as well as to develop effective treatment and prevention strategies. Gut microbiota richness increased due to the LDA event. Principal coordinate analysis showed that the fecal microbiomes between the LDA and healthy groups were clearly separated (Fig. 1D), and the heterogeneous community structure was significant (permutational analysis of variance [PERMANOVA] P < 0.01), clearly separating the two groups. Specifically, Firmicutes, Bacteroidetes, Tenericutes, Spirochaetes, Proteobacteria, and Actinobacteria were highly abundant in the feces of all lactating dairy cows (Table S3). Notably, the abundances of Verrucomicrobia, Fusobacteria, and Cyanobacteria were higher (P < 0.05) in LDA cows than in healthy cows.
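The community-separation statistic reported here can be illustrated with a from-scratch permutation test in the spirit of PERMANOVA; this sketch is illustrative, not the software the authors used, and the abundance table is simulated.

import numpy as np

rng = np.random.default_rng(0)
X = rng.gamma(2.0, size=(20, 50))            # simulated counts: 20 cows x 50 genera
X /= X.sum(axis=1, keepdims=True)            # relative abundances
labels = np.array([0] * 10 + [1] * 10)       # 10 healthy, 10 LDA

def bray_curtis(a, b):
    return np.abs(a - b).sum() / (a + b).sum()

D = np.array([[bray_curtis(X[i], X[j]) for j in range(20)] for i in range(20)])

def pseudo_F(D, g):
    """Anderson-style pseudo-F from a distance matrix and group labels."""
    N, groups = len(g), np.unique(g)
    ss_total = (D[np.triu_indices(N, 1)] ** 2).sum() / N
    ss_within = 0.0
    for grp in groups:
        idx = np.where(g == grp)[0]
        sub = D[np.ix_(idx, idx)]
        ss_within += (sub[np.triu_indices(len(idx), 1)] ** 2).sum() / len(idx)
    return ((ss_total - ss_within) / (len(groups) - 1)) / (ss_within / (N - len(groups)))

F_obs = pseudo_F(D, labels)
perm = np.array([pseudo_F(D, rng.permutation(labels)) for _ in range(999)])
print(F_obs, (1 + (perm >= F_obs).sum()) / 1000)   # pseudo-F and permutation P value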
Based on differential analysis of the gut microbiota at the genus level, cows with LDA had higher (P < 0.05) abundances of 22 genera and lower (P < 0.05) abundances of two genera compared with those of healthy cows (Table S4). We further focused on genera with a relative abundance of >0.1% in LDA events, including Peptococcaceae rc4-4, Bacteroidaceae 5-7N15, Oscillospira, Prevotella, and Epulopiscium, which were markedly increased (P < 0.01) in cows with LDA (Fig. 1E). Moreover, the abundance of Akkermansia, belonging to the Verrucomicrobia, was higher (P < 0.05) in the LDA group than in healthy cows.

Alterations of fecal metabolic profiles of cows with LDA. The principal component analysis and orthogonal partial least-squares discriminant analysis score plots showed that the fecal metabolome differed between the LDA and healthy groups (Fig. S1). A total of 48 metabolites differed between the healthy and LDA groups, including 20 with increased abundance and 28 with decreased abundance in cows with LDA (Table S5). These metabolites were classified as lipids (37.50%), nucleosides and nucleotides (14.58%), organic acids (10.42%), organoheterocyclic compounds (20.83%), and benzenoids (8.33%) (Fig. 2A). Moreover, 20 of the differential metabolites identified in the stool were derived from common metabolic processes between the host and gut microbiota (Fig. 2B), including linoleic acid, 4-pyridoxic acid, cytidine, citrulline, and cholic acid (Table S5). We also identified 12 differential metabolites specifically derived from the gut microbiota (Fig. 2B), including salicylic acid, glycodeoxycholic acid, and 2-methylbenzoic acid. Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis indicated two shared metabolic pathways between the microbiota and co-metabolism with the host (Fig. 2C), including purine metabolism and porphyrin-chlorophyll metabolism (Table S6).

Associations of metabolites and fecal microbiota in LDA. To identify correlations between metabolites and changes in the gut microbiome associated with LDA, we performed a Spearman's rank correlation test. The abundances of Akkermansia, Oscillospira, and Peptococcaceae rc4-4 were strongly correlated (P < 0.01) with most of the metabolites identified in the feces (Fig. 3A).
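For reference, this kind of genus-by-metabolite correlation screen takes only a few lines with scipy; the arrays below are simulated stand-ins for the study's abundance and metabolite tables.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
genera = ["Akkermansia", "Oscillospira"]
metabolites = ["salicylic acid", "cytidine"]
n_samples = 20

A = rng.gamma(2.0, size=(n_samples, len(genera)))     # simulated genus abundances
M = 1.0 / (1.0 + A) + rng.normal(0, 0.05, size=A.shape)   # metabolites built to anti-correlate

rho, p = spearmanr(A, M)                   # joint 4 x 4 rank-correlation matrix
cross_rho = rho[:len(genera), len(genera):]   # genus-vs-metabolite block
cross_p = p[:len(genera), len(genera):]
for gi, g in enumerate(genera):
    for mi, m in enumerate(metabolites):
        print(f"{g} vs {m}: rho = {cross_rho[gi, mi]:.2f}, P = {cross_p[gi, mi]:.3f}")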
In our previous study, we identified 102 metabolites in the plasma which differed between LDA and healthy dairy cows (8). Therefore, we integrated the plasma metabolome with the fecal metabolome (Fig. 3B), and eight shared metabolites were screened between the fecal and plasma specimens, including citrulline, cytidine, linoleic acid, stearoylcarnitine, salicylic acid, 4-pyridoxic acid, taurolithocholic acid, and pentadecanoic acid (Fig. 3C and D). In comparison with those in healthy cows, the levels of cytidine and 4-pyridoxic acid, belonging to the co-metabolism category, were decreased (P < 0.05) in the LDA group (Fig. 3D and Table S5). The level of salicylic acid produced by microbiota in dairy cows markedly decreased (P < 0.01) during the LDA event. Linoleic acid and stearoylcarnitine levels of fecal samples were lower (P < 0.05) in the LDA group than in the healthy group. By contrast, the plasma levels of linoleic acid and stearoylcarnitine were increased (P < 0.05) in the LDA group.

To determine LDA-specific biomarkers, we performed multiple logistic regression and receiver operating characteristic (ROC) analyses using the identified key metabolites associated with LDA, including 4-pyridoxic acid, salicylic acid, and cytidine. The area under the ROC curve was 0.91 for the feces and 0.98 for the plasma (Fig. S4). In our dairy cow study cohort, 9 of the 10 healthy cows were successfully predicted and 9 of the 10 cows with LDA were successfully predicted.
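As an illustration of this biomarker evaluation, the sketch below fits a logistic regression on three metabolite features and scores it with a leave-one-out cross-validated ROC AUC; the feature matrix is simulated, not the study's data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
# Simulated levels of 4-pyridoxic acid, salicylic acid, and cytidine for
# 10 healthy and 10 LDA cows, with all three shifted downward in LDA as reported.
healthy = rng.normal([1.0, 1.0, 1.0], 0.3, size=(10, 3))
lda = rng.normal([0.6, 0.5, 0.7], 0.3, size=(10, 3))
X = np.vstack([healthy, lda])
y = np.array([0] * 10 + [1] * 10)

proba = cross_val_predict(LogisticRegression(), X, y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOO ROC AUC:", roc_auc_score(y, proba))
print("correctly predicted:", int(((proba > 0.5) == y).sum()), "of", len(y))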
Alterations of plasma amino acids in cows with LDA. We performed targeted metabolomics to analyze the pool and composition of amino acids. The plasma level of total amino acids was markedly decreased in the LDA group compared with the healthy group, indicating that the size of the amino acid pool waned in the LDA event (Fig. 5A). The relative abundances of BCAA, including leucine and isoleucine, were markedly increased (P < 0.01) in the LDA group compared with those in the healthy group (Fig. 5A). In contrast, the abundances of tyrosine, tryptophan, and histidine decreased (P < 0.01) in the LDA group. Based on the key metabolic factors and associated microbes linked to the LDA event screened out from the correlation analysis described above (Fig. 4C), we further analyzed the correlations between various amino acids, metabolic factors, and microbes. The decrease in circulating total amino acid levels was strongly negatively correlated with the increase in the abundance of microbes, including Akkermansia (r = -0.68, P = 0.001) and Oscillospira (r = -0.68, P = 0.001; Fig. 5B). By contrast, total amino acid levels were positively correlated (r > 0.5, P < 0.02) with the abundances of the key metabolites 4-pyridoxic acid, cytidine, and salicylic acid in the feces and plasma. The decrease in fecal 4-pyridoxic acid levels was positively correlated with decreases in most glycogenic amino acids (r > 0.5, P < 0.02) and negatively correlated with the increase in leucine (r = -0.64, P = 0.004). Moreover, the increased levels of leucine and isoleucine, which belong to the ketogenic amino acid group, were positively correlated with increases in the abundance of Akkermansia (r = 0.66, P = 0.002 and r = 0.61, P = 0.005, respectively) and Oscillospira (r = 0.56, P = 0.011 and r = 0.49, P = 0.029, respectively). Notably, the increase in NEFA and BHBA concentrations was strongly positively correlated with an increase in the levels of BCAA, including leucine (r = 0.59, P = 0.006 and r = 0.61, P = 0.004, respectively) and isoleucine (r = 0.62, P = 0.004 and r = 0.65, P = 0.002, respectively). However, the decrease in RQUICKI was strongly negatively correlated with the increases in leucine and isoleucine levels.

Metabolic pathway and network of LDA. We identified 3,697 KEGG orthologs (KOs) in the fecal samples, 203 of which differed in abundance between LDA and healthy cows (adjusted P < 0.05; Table S7). Spearman's correlation analysis showed that the relative abundances of leucine and isoleucine were strongly correlated with NEFA and BHBA concentrations (Fig. 5B), indicating that ketogenic amino acid metabolism increased in the LDA event. Genes encoding enzymes produced by gut microbes were found to mediate these ketogenic relationships (Fig. 6, Table S8), including methylmalonyl-CoA mutase (k01848 and k01849, conversion of propanoyl-CoA to succinyl-CoA), malonate-semialdehyde dehydrogenase (k00140, conversion of propanoyl-CoA to acetyl-CoA), and enoyl-CoA hydratase (k01692, β-oxidation). Notably, glycolysis/gluconeogenesis and the citric acid cycle were downregulated in the LDA event. Moreover, decreased levels of tryptophan and tyrosine, which are representative glycogenic and ketogenic amino acids, respectively, were strongly and positively correlated with decreased levels of 4-pyridoxic acid (r > 0.55, P < 0.01) in the feces and plasma (Fig. 5B). Since we previously reported that tryptophan and tyrosine metabolism were negatively correlated with NEFA and BHBA levels in dairy cows under metabolic stress (20), we constructed metabolite-metabolite and metabolite-gene interaction networks to investigate whether 4-pyridoxic acid, salicylic acid, and cytidine are involved in the ketogenic reaction. Interestingly, pyridoxine was identified as a key metabolite linking 4-pyridoxic acid to tryptophan in the network (Fig. 7A). Pyridoxal metabolism was also associated with NEFA, BHBA, and RQUICKI (Fig. 4C). Additionally, kynureninase was identified as a critical gene mediating the association between 4-pyridoxic acid and tryptophan (Fig. 7B), whereas glutamic-oxaloacetic transaminase-2 was identified as mediating the link between 4-pyridoxic acid and tyrosine.

FIG 6 Prediction of genes based on KEGG orthologs (KOs) associated with plasma levels of leucine and isoleucine, and genes involved in leucine-isoleucine degradation and fatty acid β-oxidation. Boxplots indicate differential enrichment of KOs in healthy cows and cows with LDA. k00140, malonate-semialdehyde dehydrogenase; k01692, enoyl-CoA hydratase; k01848, methylmalonyl-CoA mutase (N-terminal domain); k01849, methylmalonyl-CoA mutase (C-terminal domain); k00169, pyruvate ferredoxin oxidoreductase alpha subunit; k00170, pyruvate ferredoxin oxidoreductase beta subunit; k00171, pyruvate ferredoxin oxidoreductase delta subunit; k00172, pyruvate ferredoxin oxidoreductase gamma subunit; k01622, fructose-1,6-bisphosphate aldolase/phosphatase. NEFA, non-esterified fatty acid; BHBA, β-hydroxybutyric acid; PEP, phosphoenolpyruvic acid. An upward-pointing arrow indicates that the metabolite level was increased in the LDA group compared with that in the healthy group, whereas a downward-pointing arrow indicates that the metabolite level was decreased in the LDA group, such as the change in citric acid level from that in our previous report (8). The enzymatic reactions involved are shown in Table S8. *, P < 0.05; **, P < 0.01.

DISCUSSION

Dairy cows are at increased risk of developing metabolic disorders as a result of LDA events (7, 16); however, the associated mechanisms contributing to metabolic changes in LDA remain unclear. In this study, we used integrated multi-omics data to analyze the correlations among gut microbiota, metabolites, and host phenotypes in dairy cows with LDA. The results clearly demonstrate a state of gut microbiota dysbiosis associated with an amino acid deficit in dairy cows with LDA. Changes in the gut microbiota composition were strongly correlated with ketogenesis and glucose metabolism in the context of LDA.
Furthermore, key metabolites which link microbiota to phenotypes in cows with LDA were identified, providing novel mechanistic insights into disease pathogenesis. Gastrointestinal microbiota dysbiosis is closely related to the development of metabolic diseases such as obesity and diabetes (17). The rumen microbiota composition has been associated with production and health in dairy cows (21, 22). Hu et al. (23) reported that dysbiosis of the rumen microbiota contributes to the development of diseases in dairy cows. However, knowledge of the contribution of the lower gut microbiota to the health and production of dairy cows is limited (24). In the present study, we found that the gut microbiota richness of dairy cows increased during LDA events, and circulating amino acid pool deficits in LDA events indicated that dairy cows with LDA are characterized by low dry matter intake. Previous studies have explained that intermittent fasting increased the richness and diversity of gut microbiota, which are associated with the balance of energy demand during low-energy intake status (25, 26). Notably, a recent study revealed that the microbiota structure was markedly different between the rumen and gut in dairy cows during early lactation, with Firmicutes and Bacteroidetes being the most abundant taxa in the gut (27). We found no alteration in the abundance of Firmicutes and Bacteroidetes in the gut associated with LDA during lactation, whereas the abundance of Fusobacteria was markedly higher in cows with LDA than in healthy cows. An increased abundance of Fusobacteria has also been reported in the context of metabolic disorders and inflammation (28). Additionally, we found that the abundance of Saccharibacteria (TM7) was decreased in LDA cows; Saccharibacteria (TM7) has been reported to contribute to reduced inflammation by modulating the pathogenicity of the host bacteria (29). Previous studies have also suggested that LDA increases the levels of inflammatory biomarkers, highlighting an inflammation challenge in dairy cows during LDA events (30). Thus, the associations of the observed changes in Fusobacteria and Saccharibacteria (TM7) in the gut of cows with LDA warrant further study.
We also observed that the abundances of Oscillospira, Bacteroidaceae 5-7N15, Prevotella, Akkermansia, Butyrivibrio, Peptococcaceae rc4-4, Eubacterium, and Epulopiscium were higher in cows with LDA than in healthy cows. Intriguingly, changes in the abundances of Oscillospira (31), Akkermansia (32), Prevotella (33), and Peptococcaceae rc4-4 (34) have also recently been recognized as important factors in the development of metabolic diseases. Akkermansia belongs to the phylum Verrucomicrobia, whose abundance has been reported to increase under low-grade inflammation (35). In addition, a previous study showed that an increased abundance of Prevotella induces low-grade inflammation by mediating the T helper 17 cell immune response (36). Low-grade inflammation has been linked to enhanced lipid mobilization in dairy cows and has also been implicated as a contributor to metabolic diseases during the lactation period, including ketosis and LDA (37). We further observed positive correlations of the abundances of Akkermansia and Prevotella with NEFA and BHBA levels, suggesting that Akkermansia and Prevotella contribute to intensified lipolysis in cows with LDA. Several studies have suggested that cows with LDA have enhanced lipolysis and ketogenesis (7, 8, 13). Notably, Bacteroidaceae, Oscillospira, and Butyrivibrio are important microbes that produce butyrate (31, 38). In addition to being an important short-chain fatty acid for maintaining epithelial integrity, butyrate is also used as a substrate to synthesize ketone bodies and cholesterol in the liver (39). However, whether butyrate from gut microbial catabolism plays a dual role in the development of LDA remains to be investigated. We speculate that the observed gut microbiota dysbiosis is associated with the high circulating NEFA and BHBA concentrations in dairy cows with LDA. Recently, Zierer et al. (40) demonstrated that the fecal metabolome can be used to understand the function of gut microbial activity and the associations of fecal metabolites with host metabolism. To explore how the gut microbiota correlates with the metabolic phenotype in cows with LDA, we compared the fecal metabolome profiles of LDA and healthy dairy cows. The differential metabolites between the two groups were involved mainly in co-metabolism, with 4-pyridoxic acid and cytidine identified as key metabolites. 4-Pyridoxic acid is the main catabolite of pyridoxal, which is associated with systemic inflammation (41); thus, 4-pyridoxic acid is widely used as a functional biomarker to evaluate pyridoxal status in the body (42). In the present study, the levels of 4-pyridoxic acid in the feces and plasma were markedly lower in LDA cows than in healthy cows. We also observed that 4-pyridoxic acid was strongly positively correlated with tryptophan and mediated the link between the gut microbiota and the clinical phenotype. Guo et al. (16) identified pyridoxal deficiency as a characteristic feature of cows with LDA, along with a concurrent decrease in tryptophan metabolism. Pyridoxal deficiency has been widely reported to be associated with inflammatory and metabolic diseases (42). Mascolo et al. (43) suggested that pyridoxal participates in regulating the inflammatory response and oxidative stress by altering kynureninase activity in the tryptophan metabolism pathway. In addition, we found that the abundance of salicylic acid, which is derived from gut microbial catabolism, decreased during LDA events. Zhang et al.
(44) reported that salicylic acid negatively modulates the abundance of Prevotella, further suppressing the inflammatory response. Of note, 4-pyridoxic acid, cytidine, and salicylic acid were regarded as predictive factors for LDA in this study (a minimal sketch of such a classifier is given below). However, one limitation of this study is that the small sample size may increase the false-positive rate of the ROC analysis. Conversely, we observed increases in other gut microbial catabolites in cows with LDA compared with healthy cows, such as glycodeoxycholic acid and taurolithocholic acid (TLCA), which belong to the secondary bile acid (SBA) group. Taurochenodeoxycholic acid (TCDCA) is transformed into lithocholic acid (LCA) and TLCA by the gut microbiota via an alternative bile acid synthetic pathway (45). LCA and TLCA stimulate glucagon-like peptide-1 (GLP-1) secretion to regulate glucose metabolism by activating Takeda G protein-coupled receptor 5 (TGR5) (46). GLP-1 is involved in the glucose-dependent stimulation of insulin secretion, which contributes to the increased insulin levels in cows with LDA. Pravettoni et al. (9) demonstrated that high glucose levels and low insulin sensitivity are characteristics of LDA, similar to ketosis. Although SBA deficiency promotes the development of intestinal inflammation, high SBA levels associated with metabolic diseases have also been reported (46). Using a fecal transplantation experiment, de Groot et al. (47) found that fecal LCA and plasma TLCA levels were markedly increased in metabolic syndrome. Indeed, the plasma TCDCA and TLCA levels were increased in LDA cows in the present study, as reported in our previous study (8). Collectively, these findings suggest that the gut microbiota plays an important role in LDA development. Accumulating evidence suggests that dairy cows experience an excessive nutrient and amino acid deficit in cases of metabolic disease (8, 48). Consistently, we observed that the size of the amino acid pool decreased in LDA cows compared with healthy cows and that the composition of the amino acid pool was also altered. Notably, the relative proportions of BCAAs such as leucine and isoleucine were increased in cows with LDA and positively correlated with BHBA and NEFA levels. BCAAs can be converted to acetyl-CoA or succinyl-CoA, which enter the tricarboxylic acid cycle during a state of nutrient deficiency; acetyl-CoA can then be converted to BHBA (49). Interestingly, we found that ketogenic genes were upregulated in LDA cows, while tricarboxylic acid cycle and glycolysis/gluconeogenesis-related genes were downregulated. Additionally, BCAAs are synthesized by the gut microbiota and help maintain homeostasis by regulating glucose metabolism, lipid metabolism, and insulin resistance (18). Yoon (50) suggested that high plasma BCAA levels contribute to insulin resistance by activating the mammalian target of rapamycin complex-1. Most metabolic diseases in dairy cows, such as ketosis, fatty liver, and LDA, are closely related to insulin resistance (9, 51, 52). Moreover, previous studies have suggested that abnormal metabolism of aromatic amino acids, including tryptophan and tyrosine, is a risk factor for inducing insulin resistance (53, 54). We observed that gut microbes were negatively correlated with tryptophan and tyrosine metabolism. Taken together, these findings suggest that the gut microbiota is involved in the regulation of lipolysis and insulin resistance in LDA by modulating the amino acid composition, which requires further research.
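For the predictive-factor claim above, a cross-validated classifier gives a feel for how the ROC analysis behaves at this sample size (n = 20); leave-one-out prediction guards somewhat against the small-sample optimism noted in the text. Column names and file layout are assumptions, and this is a sketch rather than the study's actual GraphPad workflow.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

# Assumed layout: one row per cow, columns for the three candidate
# metabolites and a binary label (1 = LDA, 0 = healthy).
data = pd.read_csv("candidate_metabolites.csv")             # hypothetical
X = data[["pyridoxic_acid", "cytidine", "salicylic_acid"]]  # assumed names
y = data["LDA"]

# Leave-one-out predicted probabilities: with only ~20 animals,
# a resubstitution AUC would be optimistic, echoing the caveat above.
model = LogisticRegression(max_iter=1000)
proba = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, proba))
```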
In conclusion, to our knowledge, this is the first study to characterize the correlation between gut microbes and metabolic phenotypes in dairy cows with LDA by leveraging multi-omics data. Key metabolites, including 4-pyridoxic acid, cytidine, and salicylic acid, were identified that were closely related to gut microbiota dysbiosis, and lipolysis was enhanced in conjunction with an amino acid deficit in LDA cows. In addition, changes in SBA and BCAA metabolism by the gut microbiota may contribute to the promotion of insulin resistance. One of the main limitations of this study is that our findings revealed functional links between the microbiome, metabolome, and disease phenotype but could not define causality. Therefore, we suggest future evaluation of the correlation between the gut microbiota and metabolism prior to the onset of the disease, which could further help explain the underlying cause of LDA and suggest potential new prevention options.

MATERIALS AND METHODS

Animals and sample collection. Animals were treated and samples were collected in strict accordance with the Guidelines for the Care and Use of Laboratory Animals of China. All procedures were approved by the Institutional Animal Care and Use Committee of Sichuan Agricultural University (no. DYY-2018203039). The experiment was performed at a modern dairy farm in Sichuan Province, China from December 2019 to March 2020, with approximately 1,200 lactating Holstein dairy cows, 350 dairy heifers, and 450 dry cows. All cows were transferred to fresh barns after calving. Lactating dairy cows were housed in free-stall barns and had free access to fresh water. The cows were milked daily at 0630, 1200, and 1930 h. A total mixed ration (TMR) diet was provided three times daily (the ingredients and chemical composition of the TMR diet are shown in Table S1). After the first daily milking, a veterinarian monitored for and diagnosed diseases in the postpartum dairy cows using previously described evaluation criteria (4). The average monthly incidence of LDA in the total herd was 1.3% during the experimental period; 300 postpartum dairy cows were included in clinical health monitoring, and 26 cows were diagnosed with LDA. The clinical criteria used to determine a healthy status and an LDA diagnosis have been described in our previous study (8). LDA diagnosis and sample collection were performed prior to feeding. Blood and stool samples were collected on the day of LDA diagnosis from both the LDA cows and the healthy cows. Serum and plasma (heparin sodium as an anticoagulant) samples were collected and centrifuged at 1,500 × g for 10 min at 25°C. All samples were stored at −80°C until analysis. Only cows with a single LDA event were included in the study cohort. Finally, we selected 10 healthy postpartum Holstein dairy cows and 10 LDA dairy cows with similar parity, age, and days in milk. The production data collected and the serum metabolic markers assessed in this study are listed in Table 1.

Fecal microbiome analysis by 16S rDNA amplicon sequencing. DNA was extracted from the stool samples using the E.Z.N.A. Stool DNA kit (no. D4015; Omega Bio-Tek, Norcross, GA, USA). DNA concentration and purity were assessed on a 1% agarose gel. According to its concentration, the DNA was diluted to 1 ng/μL using sterile water. Amplification of the 16S rDNA V3-V4 region was performed by PCR (Bio-Rad, Hercules, CA, USA) as described previously (55). The 16S rDNA was sequenced on the MiSeq platform (HiSeq2500; Illumina, San Diego, CA, USA).
After the raw sequences were filtered, paired-end reads were merged into tags by fast length adjustment of short reads (FLASH, v1.2.8). The tags were clustered into operational taxonomic units (OTUs) at a 97% threshold using the UPARSE pipeline (56). Chimeras were filtered out using UCHIME (v4.2.40). Each OTU representative sequence was taxonomically classified using the Ribosomal Database Project classifier trained on the Greengenes database in QIIME (v1.8.0), based on a 0.7 confidence threshold. The OTU tables were filtered to remove low-quality features based on the interquartile range, and the data were then normalized by total sum scaling. After data pretreatment, alpha and beta diversities were calculated using the MicrobiomeAnalyst platform (https://www.microbiomeanalyst.ca) (57).

Fecal metabolome profiling. Stool samples (60 mg) were added to 1,000 μL of methanol:acetonitrile:water (4:4:1, vol/vol/vol) containing an isotopically labeled internal standard mixture. After vortexing for 60 s, the samples were sonicated twice using an ultrasonic liquid processor (Scientz JY92-II, Ningbo, China) in an ice-water bath for 30 min. The samples were then incubated for 1 h at −20°C and centrifuged at 14,000 × g for 20 min at 4°C. The supernatants were subjected to ultra-high-performance liquid chromatography (UHPLC; 1290 Infinity II, Agilent Technologies, Santa Clara, CA, USA). The column temperature was set to 25°C and the flow rate was 0.3 mL/min. Samples were processed with the following mobile phase gradient: 0 to 0.5 min, 95% B (acetonitrile); 0.5 to 7 min, 95% to 65% B; 7 to 8 min, 65% to 40% B; 8 to 9 min, 40% B; 9 to 9.1 min, 40% to 95% B; 9.1 to 12 min, 95% B. Mobile phase A consisted of water with 25 mM ammonium acetate and 25 mM ammonium hydroxide. Mass spectrometry was performed on a triple time-of-flight instrument (TOF 6600, AB SCIEX, Framingham, MA, USA) equipped with an electrospray ionization source operating in positive- and negative-ion modes. The source temperature was 600°C, and the ion source gas and curtain gas pressures were 0.4137 MPa and 0.20685 MPa, respectively. The mass-to-charge ratio (m/z) ranges of the time-of-flight MS scan and the tandem mass spectrometry ion scan were set to 60 to 1,000 Da and 25 to 1,000 Da, respectively. The MS/MS spectra were collected in information-dependent acquisition mode. The ion peaks, retention times, and peak areas were analyzed using XCMS software. The detailed approach used for compound identification of the metabolites is described in our previous report (58).

Targeted plasma amino acid analyses. The plasma samples were pretreated as previously described (58), and the supernatants were analyzed using the UHPLC system (1290 Infinity II, Agilent Technologies). The column temperature was 40°C, the injection volume was 1 μL, and the flow rate was 0.25 mL/min. The mobile phase consisted of A (water with 25 mM ammonium formate and 0.08% formic acid) and B (acetonitrile with 0.1% formic acid). The mobile phase gradient was 0 to 12 min, 90% to 70% B; 12 to 18 min, 70% to 50% B; 18 to 25 min, 50% to 40% B; 25 to 30 min, 40% B; 30 to 30.1 min, 40% to 90% B; 30.1 to 37 min, 90% B. MS (QTRAP 5500, AB SCIEX, Framingham, MA, USA) was performed in positive-ion mode. The source temperature was 500°C, and the ion source gas and curtain gas pressures were 0.2758 MPa and 0.20685 MPa, respectively.
Amino acid data were collected with a multiple reaction monitoring method, and the peak areas and retention times were collected using MultiQuant software (v3.0, AB SCIEX). The targeted amino acids were identified using standard substances from Sigma-Aldrich (Saint Louis, MO, USA), as shown in Table S2.

Statistical analyses. To compare the variables between the healthy and LDA groups, univariate analyses were performed in R software (v4.1.3), including fold change and a two-tailed Student's t test. The fecal metabolic profile data were log10-transformed and scaled for multivariate analysis. Principal-component analysis and orthogonal partial least-squares discriminant analysis (OPLS-DA) were performed using SIMCA-P software (v14.1.0, Umetrics, Umea, Sweden). A P value of less than 0.05 was considered significant. Differential metabolites were screened using variable importance in projection (VIP) scores of >1 and P < 0.05 (a compact illustration of this screening is sketched below). The category and origin of the differential metabolites were analyzed using the MetOrigin platform (59). To screen for key metabolites, the fecal microbiome and its metabolome were integrated with the plasma metabolome data (MetaboLights repository, accession code MTBLS4126) obtained in our previous study (8). Spearman's rank correlation coefficient was used to analyze the associations between the fecal microbiota and the differential metabolites. Correlations between clinical variables and microbiota/metabolites were also assessed in both LDA and healthy cows. Correlation thresholds were set to Spearman's |r| > 0.4 and P < 0.05. KEGG pathway and KO functional enrichment analyses were performed using the R platform. Network interaction analysis between metabolites and metabolites/genes was performed using MetaboAnalyst (https://www.metaboanalyst.ca) (60). To determine relevant clinical biomarkers of LDA, multiple logistic regression and receiver operating characteristic (ROC) curve analyses of metabolites were performed using GraphPad software (v9.1.0, GraphPad, San Diego, CA, USA).

Data availability. Sequencing data are available in the NCBI Sequence Read Archive (SRA) database (https://www.ncbi.nlm.nih.gov/sra) under the study accession code PRJNA838477. The raw LC-MS/MS data files for the fecal metabolome are deposited at the MetaboLights database (http://www.ebi.ac.uk/metabolights) of the European Bioinformatics Institute under the code MTBLS2441.

SUPPLEMENTAL MATERIAL

Supplemental material is available online only.

We declare no competing interests.
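As a compact illustration of the screening and correlation thresholds described under "Statistical analyses" (log10 transformation, fold change with a t test, VIP > 1, Spearman |r| > 0.4 with P < 0.05), the following sketch is offered. File names and column labels are assumptions, and the VIP scores are treated as an input, since they come from the OPLS-DA model fitted in SIMCA-P.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind, spearmanr

# Assumed layout: rows = samples, columns = metabolite intensities,
# plus a 'group' column; VIP scores per metabolite are exported
# from the OPLS-DA model (fitted elsewhere).
data = pd.read_csv("fecal_metabolome.csv", index_col=0)     # hypothetical
vip = pd.read_csv("vip_scores.csv", index_col=0).squeeze()  # hypothetical
group = data.pop("group")
logged = np.log10(data)

lda, healthy = logged[group == "LDA"], logged[group == "healthy"]
pvals = pd.Series(ttest_ind(lda, healthy).pvalue, index=data.columns)
fold = data[group == "LDA"].mean() / data[group == "healthy"].mean()

# Screening rule from the text: VIP > 1 and P < 0.05.
keep = (vip.reindex(data.columns) > 1) & (pvals < 0.05)
differential = data.columns[keep.to_numpy()]

# Correlation filter: Spearman |r| > 0.4 and P < 0.05 against a
# clinical variable, here serum NEFA (assumed column name).
nefa = pd.read_csv("clinical.csv", index_col=0)["NEFA"]     # hypothetical
for m in differential:
    r, p = spearmanr(data[m], nefa)
    if abs(r) > 0.4 and p < 0.05:
        print(f"{m}: fold change={fold[m]:.2f}, r={r:.2f}, P={p:.3f}")
```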
2022-10-13T06:18:09.883Z
2022-10-12T00:00:00.000
{ "year": 2022, "sha1": "ff07bbb73caec1802adb3fb1aed7cf43b334cd77", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ASMUSA", "pdf_hash": "5f8f26bbc5af45cca24803144745f99500969fe0", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119306471
pes2o/s2orc
v3-fos-license
Inert Doublet Dark Matter and Mirror/Extra Families after Xenon100

It was shown recently that mirror fermions, naturally present in a number of directions for new physics, seem to require an inert scalar doublet in order to pass the electroweak precision tests. This provides a further motivation for considering the inert doublet as a dark matter candidate. Moreover, the presence of extra families enhances the Standard Model Higgs-nucleon coupling, which has a crucial impact on the Higgs and dark matter searches. We study the limits on the inert dark matter mass in view of recent Xenon100 data. We find that the mass of the inert dark matter must lie in a very narrow window 74-76 GeV, while the Higgs must weigh more than 400 GeV. For the sake of completeness we discuss the cases with fewer extra families, where the possibility of a light Higgs boson opens up, enlarging the dark matter mass window to m_h/2-76 GeV. We find that Xenon100 constrains the DM-Higgs interaction, which in turn implies a lower bound on the monochromatic gamma-ray flux from DM annihilation in the galactic halo. For the mirror case, the predicted annihilation cross section lies a factor of 4-5 below the current limit set by Fermi LAT, thus providing a promising indirect detection signal.

I. INTRODUCTION

Recent data released by the Xenon collaboration [1], relative to 48000 kg d statistics, improve previous limits on the Dark Matter (DM) direct detection cross section versus the DM mass. This result has significant implications for scenarios of Dark Matter based on particle physics. One popular such scenario, pursued in recent years, is the inert scalar doublet extension of the minimal Standard Model [2]. While its great virtue lies in its simplicity, the stability of the inert candidate is assumed without any theoretical hint in its favor. As we argued in a recent paper [3], the inert nature of the second doublet is favored by electroweak precision test (EWPT) constraints in the presence of mirror families [4]. Mirror fermions are a must in a number of physically motivated scenarios: Kaluza-Klein theories [5], family unification based on large orthogonal groups [6-8], N=2 supersymmetry [9] and some unified models of gravity [10].
Moreover, they were envisioned by Lee and Yang in their classic paper on parity breakdown [11] as a way to restore parity in the fundamental interactions. Such a framework becomes quite predictive when constraints from the EWPT and the vacuum stability of the Higgs potential [3] are considered. The lowest component of the inert doublet, which is a possible dark matter candidate, must have a mass less than around 100 GeV. In view of this, we study dark matter direct detection in the presence of three extra chiral mirror families, taking into account the recent data from the Xenon100 experiment. The key point here is the enhancement of the effective Higgs portal to the nucleon in the presence of the extra heavy fermions [12]. This is precisely the same diagram which enhances the Higgs production cross section at hadron colliders. In the presence of mirror families the enhancement of direct detection gives approximately a factor of 9, so with the new Xenon100 bound this scenario becomes predictive for the mass and possible annihilation channels of the inert dark matter. In turn, this framework predicts a lower bound on the monochromatic gamma-ray line from annihilation in the galactic halo, whose cross section is less than an order of magnitude below the current Fermi LAT sensitivity. It is important to keep in mind that although the mirror case provides a good rationale for the stability of the inert DM candidate, generically it may not be necessary to stick to the mirror conjecture. In fact, none of the results we present depend on the choice of chirality of the extra fermions, and as such they are equally applicable to the usual additional copies of the SM families. In what follows, we present our results for one to three extra families (more are not allowed by precision tests) in the presence of an extra inert doublet.

The paper is organized as follows. In Section II, we describe the theoretical and experimental constraints on the inert doublet extension of the SM with extra families. We include the limits from direct collider searches, the electroweak precision constraints, perturbativity and vacuum stability, and describe the updated Tevatron Higgs exclusion window in this model. In Section III, we present our results for the relic density and direct detection, which constrain the DM mass to lie between m_h/2 and 76 GeV for a fourth family, and in a very narrow window 75 ± 1 GeV in the case of the three mirror families. This enables us to give a quite robust prediction for the monochromatic gamma-ray flux from the galactic halo. Section IV contains a summary and the outlook.

As we will describe now, the EWPT, together with the constraints from perturbativity and vacuum stability, favor the existence and inertness of an extra doublet, Φ. After the electroweak symmetry breaking, this field decomposes into an extra scalar S, a pseudoscalar A and a charged component C.

A. Vacuum stability and perturbativity

The large Yukawa couplings of the extra quarks become a problem for vacuum stability in the case of a light SM Higgs, as they tend to drive the Higgs boson self coupling to negative values. Therefore, one should take the extra quarks as light as possible, without running into conflict with direct searches, ∼ 350 GeV. Still, in the case of more than one extra family, the light Higgs mass window 115 GeV ≲ m_h ≲ 131 GeV is excluded if the vacuum is to be stable up to a reasonably high cutoff (e.g.
1 TeV). In particular, with three extra families (the mirror case), the SM Higgs boson needs to be heavier than ∼ 400 GeV [3]. At the same time, the requirement of perturbativity imposes an upper bound on the SM Higgs mass, which is about 600 GeV [2,15], as well as on the masses (Yukawa couplings) of the heavy fermions, which have to be lighter than roughly 500 GeV. All this has important implications for the discussion of the EWPT in the next section (see Fig. 1). Concerning the components of the extra doublet, in the mass ranges that are favored by EWPT, they have no appreciable impact on the perturbativity and/or stability of the SM Higgs.

B. Electroweak Oblique Corrections

It has been shown in [4,13] that with a fourth family one can fit the electroweak oblique parameters S, T, U to within 68% confidence level (CL). However, this becomes progressively constrained as more families are added, until χ² ≈ 13.5 for the case with three extra families, outside 99% CL. The introduction of the second doublet can help to alleviate the tension (in Appendix A we list the relevant expressions for its contribution). In fact, we have explored the best fit cases and find that they are characterized by a significant cancellation of the contributions to T from the (three) extra families and the doublet. In this case we find the best χ² ≈ 9.0, lying inside the 99% ellipse. Qualitatively, one can understand what happens for the mirror case in the following way: naively, for heavy electroweak doublet fermions, the contribution to the S parameter is 1/(6π) per doublet. Therefore, three extra families contribute around 0.7. However, as noticed recently [14], this relatively large contribution can be reduced by making the extra neutrinos lighter than the Z-boson. The T parameter receives a large positive contribution from splitting the extra neutrinos and charged leptons, which can be compensated by splitting the components of the second Higgs doublet. Finally, the reason one cannot get an even better fit is the relatively large contribution in the U direction, again from splitting the lepton doublets of the extra families. Fig. 2 illustrates the effect of adding an inert doublet and mirror families on the oblique parameters S, T and U. We show a projection in the S−U plane of the 68%, 95% and 99% contours from [13], together with the value obtained for a heavy SM Higgs and how it changes with the addition first of an extra inert doublet, and then three extra families. Sample points are ... In Fig. 3 we show, for one to three extra families, the contours of the best χ² in the m_h − m_U parameter space. Other mass parameters are varied in the range consistent with direct search limits (see Table I below) in order to optimize the fit, and we have taken into account the lower bound on the SM Higgs mass due to vacuum stability. Indeed, for a fourth family the Higgs can be either light (m_h ∈ [114, 131] GeV) or heavy (m_h > 204 GeV). For more families instead, the vacuum stability bound becomes relevant: m_h ≳ 300 GeV and m_h ≳ 400 GeV for two and three extra families, respectively. The results, shown in Fig. 3, can be summarized as follows:

• The best fits are obtained when the Higgs is lighter.

• For one and two extra families one can always find solutions so that the oblique parameters are fit to within 68% CL.

• In contrast, for three extra families, the SM Higgs is constrained to be heavier than 400 GeV, so that the best χ² turns out to be much higher; see Fig.
1, where we bring together the EWPT with the vacuum stability and perturbativity constraints. It turns out that for the mirror case the best fit scenario is very predictive regarding the mass spectrum. First, the extra charged leptons and (especially) neutrinos are constrained to be light, while the quarks have to lie around 400 GeV and the Higgs around 500 GeV to be safe from vacuum instability (see Fig. 1). Then, the scalars from the second doublet are constrained to lie in the ranges 250 GeV ≲ m_C ≲ 500 GeV and m_A ≳ 450 GeV. Also, at the best fit point, the scalar component S has to be lighter than 100 GeV. This is only possible if S has tiny (or no) mixing with the SM Higgs boson, to avoid the LEP bound on Higgs-like particles. Finally, the χ² is also minimized when the extra doublet does not mix at all with the standard Higgs one [3].

C. An inert doublet

The fact that the inert nature is favored by EWPT provides a motivation to take the lightest neutral component of the second doublet to be the dark matter candidate. Before pursuing such a possibility in detail, we will define the potential and study the relevant experimental bounds on the Higgs sector. In this scenario, the extra scalar doublet does not develop a vacuum expectation value and is not coupled to fermions [2,15,16]. Assuming the stability of its lightest member implies an exact Z_2 symmetry, which restricts the potential to the following form:

V = μ_1² |H|² + μ_2² |Φ|² + λ_1 |H|⁴ + λ_2 |Φ|⁴ + λ_3 |H|² |Φ|² + λ_4 |H†Φ|² + (λ_5/2) [(H†Φ)² + h.c.].

Clearly, all terms are even under the Z_2 symmetry (Φ → −Φ). Experimental bounds on the inert doublet scalars and extra leptons derive mainly from LEP, while the extra quarks are required to be very heavy by direct searches at the Tevatron and the LHC. We summarize the bounds on new fermions and inert scalars in Table I, and refer the reader for details to [3] and references therein.

D. Implications for the SM Higgs search

An extension with a second Higgs doublet and extra families changes both the production and decays of the SM Higgs boson. We calculate the branching ratios using HDECAY [17], as shown in Fig. 4, with possible new decay channels into SS, NN̄ and EĒ. Meanwhile, the existence of extra quarks will enhance H → gg, which dominates the branching ratios before the SS decay channel opens. Both extra quarks and leptons interfere destructively [18] with the W-boson contribution to the branching ratio of H → γγ. We find that such destructive interference is most complete for two extra families. For one or three extra families, the suppression of the diphoton branching ratio is similar, to about 0.01−0.1 of the SM value, for a light Higgs. We also notice that the charged scalar C from the inert doublet makes a negligible contribution [19] to the H → γγ branching ratio. On the other hand, the Higgs production cross section via gluon fusion also receives an enhancement due to the presence of heavy chiral quarks (a numerical sketch is given below). Combining these effects, we use the most recent results on Higgs searches from D0 and CDF [22] to evaluate the exclusion window on the Higgs boson mass. With the presence of a fourth family, the enhancement factor is roughly a factor of 9, for m_h < 200 GeV. This has been used by [22] to claim the exclusion region between 131−204 GeV. However, this does not hold for a light extra neutrino N, as argued in [24], because the "invisible" Higgs decay significantly reduces the branching ratio of the WW channel used for the identification of the Higgs. Similarly, if the scalar S is sufficiently light (even for heavy N), i.e., m_S ≈ 50 GeV, the Tevatron exclusion window on the Higgs boson mass shrinks to ∼ 150-200 GeV.
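The gluon-fusion enhancement factors invoked here and below (roughly 9 for a fourth family and 49 for three mirror families, at light Higgs masses) follow from the heavy-quark limit of the loop, in which every heavy quark contributes the same amplitude as the top. A back-of-the-envelope sketch, ignoring the residual mass dependence of the loop functions:

```python
# Heavy-quark limit of gg -> h: the amplitude is proportional to the
# number of heavy quarks running in the loop (top + extra quarks),
# so the production cross section scales as the square of that number.
def ggf_enhancement(n_extra_quarks: int) -> int:
    amplitude_ratio = 1 + n_extra_quarks   # relative to the SM top loop
    return amplitude_ratio ** 2

print(ggf_enhancement(2))  # fourth family (U, D): 9
print(ggf_enhancement(6))  # three mirror families: 49
```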
At the same time, the H → SS channel dominates for all the light Higgs mass values, as can be seen in Fig. 4. In any case, there is still the bound from LEP: m_h ≳ 114 GeV [20]. For the mirror families case, the Higgs production cross section gets enhanced 49 times for m_h < 200 GeV [20], and this factor gets reduced to as much as ∼ 20 for a heavier Higgs near the tt̄ threshold [21]. In this case, the Tevatron direct search excludes the Higgs mass window between ∼ 160-250 GeV for a light m_S ≈ 50 GeV and moderate λ_L ≈ 0.3. Taking a slightly heavier m_S ≈ 70 GeV, the exclusion window extends to ∼ 130-250 GeV. It is useful to recall, though, that vacuum stability with mirror families excludes the light Higgs regime anyway, and further imposes the lower bound m_h ≳ 400 GeV (Fig. 1).

III. INERT DOUBLET AS DARK MATTER

As a convention, we take S to be the lightest component of Φ and therefore the DM candidate (assuming A to be DM is physically equivalent, since a simple redefinition interchanges them). Of course, if extra families are included, the new neutrinos are available as an additional component of DM. This depends on the nature and mass spectrum of the neutrinos. In case they are Dirac particles, their contribution to the relic density is negligibly small, less than 0.3%. An appreciable contribution can be obtained by a judicious choice of their Majorana masses [24]. We do not pursue this option here, so that S from the inert doublet by itself accounts for the DM. The relevant interactions of S which govern the relic density and the direct detection are its interactions with W and Z, fixed by the gauge group representation, and the following interaction with the SM Higgs boson:

L ⊃ −λ_L v h S² − (λ_L/2) h² S², with λ_L ≡ (λ_3 + λ_4 + λ_5)/2.

Throughout the analysis, we will take m_A = m_C ≈ 450 GeV, a consequence of the strong hierarchy between A, C and S demanded in the case of three/mirror extra families. Therefore, the co-annihilation effects are safely neglected.

A. Relic Density

To determine the relic density, we employ the MicrOMEGAs package [26], which includes all the relevant two-body annihilation final states. The relic density is constrained by the WMAP five year data [27] to be 0.092 < Ωh² < 0.128, where Ω is the dark matter density and h is the normalized Hubble expansion rate. The main processes controlling the thermal freeze-out of dark matter include the usual annihilation to weak gauge bosons, as well as the annihilation through the SM Higgs boson into f f̄ (predominantly bb̄) [16]. Thus, the relic density of the dark matter depends not only on m_S, but also on m_h and λ_L. Roughly, the viable parameter space can be divided into the following relevant regions, which are depicted in Fig. 5.

• First, SS → h* → bb̄ (denoted A in Fig. 5) dominates the annihilations. This can happen for a light m_S < 75 GeV (this is when the WW channel takes over) and large enough λ_L/m_h². Alternatively, the same happens for smaller λ_L/m_h² but when the center of mass energy is near the Higgs pole (denoted B in Fig. 5). The latter case corresponds to m_S ≈ m_h/2, as long as m_h < 150 GeV.

• Second, SS → WW dominates. This can happen either predominantly through the direct SSWW coupling (denoted C), in which case, in order to give the correct relic density, m_S is forced to lie around 75 GeV; or through both the direct and Higgs-mediated SSWW couplings (denoted D).
In this case, for proper values of λ_L/m_h², one may obtain the correct relic density through a judicious cancellation of the two contributions [28], and the mass of S extends from 75 to ∼ 110 GeV. In principle, the two-body final state can be considered just as a subset of a more general annihilation channel SS → WW*. As was noticed in [29], the three-body process becomes more relevant for m_S ≲ 75 GeV, when the SS → bb̄ annihilation rate is low. The three-body annihilations have not yet been included in MicrOMEGAs, therefore the relic density Ωh² provided by MicrOMEGAs has to be rescaled to properly account for such an effect. In practice, we calculate the thermally averaged annihilation cross sections for both SS → WW and SS → WW*. Then, the correct relic density Ωh² is suppressed by the factor

⟨σv⟩_WW / (⟨σv⟩_WW + ⟨σv⟩_WW*),

where the thermally averaged cross sections are evaluated at T_f = m_S/25. It is useful to note that the branching ratios depicted in Fig. 4 help to properly determine the SM Higgs propagator when the annihilation happens at the resonance. An important point to note is the possibility of annihilation of S into neutrinos from the extra families, which have large couplings to the Higgs boson. If such an annihilation channel is open, because of the large Yukawa couplings of N, the SSh coupling must be dramatically reduced in order to keep the relic density intact. In this case, the direct detection cross section is accordingly reduced, and will typically end up being below the Xenon sensitivity. There is also an intermediate scenario with m_N just slightly below m_S, where direct detection may still be possible. We will comment on this possibility below. Let us first focus on the scenario where all the extra neutrinos are heavier than S.

B. Direct Detection with Extra Families

Direct detection of the inert dark matter is mediated by the exchange of the SM Higgs boson with nucleons at tree level [12]. The effective matrix element for the Higgs interaction with the nucleon is given in [31]; there the sum over q runs over all the quarks, m_N is the nucleon mass, n_h is the number of heavy quarks, and ⟨N| m_q q̄q |N⟩ ≡ m_N f_Tq^(N) is the nucleon sigma term for the light flavors. Clearly, the strength of the effective interaction depends on the number of heavy quarks, which contribute democratically. In the following analysis, we use the central values of f_Tq in [31] from the lattice results and get f(n_h = 3) = 0.375 for the SM, which is close to the central value used in [30]. The extra family extension will boost such interactions, yielding f(5) = 0.542 for the fourth family case and f(9) = 0.875 for three extra families. The main uncertainty in the matrix element comes from the strange quark contribution [32]. The direct detection cross section is thus

σ_SI = λ_L² f² μ² m_N² / (π m_h⁴ m_S²),

where μ = m_S m_N/(m_S + m_N) is the reduced mass (see the numerical sketch below). In the case of mirror families there are 6 new heavy quarks and the direct detection cross section gets enhanced by a factor of 9. This facilitates the direct detection of this dark matter candidate. It is then important to compare these predictions with the bound resulting from the recently released Xenon100 data [1]. For a realistic comparison, one has to also take into consideration the large uncertainty in the local DM density ρ. This quantity, which is necessary in converting the Xenon expected rate to the excluded cross section, is inferred only from very indirect and uncertain measurements, and can at best be constrained to lie in the fairly broad range ρ = 0.4 ± 0.2 GeV/cm³ [33].
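To make these numbers concrete before turning to how the local-density uncertainty shifts the experimental bound, here is a numerical sketch of the tree-level Higgs-portal cross section written above, with the effective couplings f quoted in the text. The values of λ_L and the masses are illustrative inputs, not fitted results.

```python
import math

GEV2_TO_CM2 = 3.894e-28  # 1 GeV^-2 = 3.894e-28 cm^2

def sigma_si(lam_L, m_S, m_h, f, m_N=0.939):
    """Tree-level spin-independent S-nucleon cross section,
    sigma = lam_L^2 f^2 mu^2 m_N^2 / (pi m_h^4 m_S^2), masses in GeV."""
    mu = m_S * m_N / (m_S + m_N)   # reduced mass
    sigma = lam_L**2 * f**2 * mu**2 * m_N**2 / (math.pi * m_h**4 * m_S**2)
    return sigma * GEV2_TO_CM2     # in cm^2

# Effective Higgs-nucleon couplings quoted in the text:
# f = 0.375 (SM), 0.542 (fourth family), 0.875 (three mirror families).
for f, label in [(0.375, "SM"), (0.542, "3+1"), (0.875, "3+3")]:
    print(label, sigma_si(lam_L=0.1, m_S=75.0, m_h=400.0, f=f))
```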
This uncertainty then shifts the bound on the cross section, and is of relevance for the model under consideration. In Fig. 6 we present the results of the comparison, for 3+1 families (left plot), which favors a light Higgs, and 3+3 families (right plot), which favors a heavy Higgs boson. These represent the main results of our work. As can be seen, the upper bound set by Xenon100 narrows down the allowed region for m_S, even in the hypothesis of low DM density, to one or two narrow windows depending on m_h.

• Focusing first on the 3+1 case, if the SM Higgs is light, the DM mass is practically fixed by the legs of the "giraffe". For example, for m_h = 120 GeV, the mass lies in the window between m_h/2 and 76 GeV. The former value corresponds to the annihilation through the SM Higgs boson to bb̄ final states (with minor corrections from WW*). The second value corresponds to annihilations to WW. On the other hand, when the SM Higgs is heavy, the DM mass is confined to a particular value around 75 GeV.

• The 3+3 case is represented by the red region in the right panel of Fig. 6. The allowed region has only one "leg", because the Higgs has to be heavier than 400 GeV (for vacuum stability). Therefore, m_S must lie very near 75 GeV. Note that this value is due only to the WW annihilation channel and thus it is independent of the Higgs mass. Actually, the exclusion of the rightmost part (in contrast to the zero extra family case) is quite insensitive to the particular choice of Higgs mass (as explained for case D in Fig. 5).

The Xenon collaboration also claimed mild positive evidence for dark matter, with a cross section just below the current bound. If true, it can be easily accommodated for the values of m_S discussed above. Let us finally comment on the other possible scenario mentioned above, where some of the heavy neutrinos are lighter than S, allowing the annihilation SS → NN̄. In this case, to maintain the correct relic density, the hSS coupling λ_L is reduced by a factor of at most ∼ 10, and the direct detection regions shown in Fig. 6 are shifted downwards by a factor of 10⁻². This reduces the predictivity of this scenario in terms of the S mass, but implies in turn that m_N lies in a narrow region, [45 GeV, m_S], which is important for the detection of N at colliders. From the point of view of the EWPT, both scenarios, with N either lighter or heavier than S, are equally allowed.

C. Indirect Detection with a Gamma-ray Line

As we saw above, the 100 live-day Xenon100 results restrict the DM mass to a narrow region, especially in the presence of mirror families. The main annihilation channel during freeze-out is to gauge bosons, while today, since the temperature in the galactic halo is low, the annihilation through the SM Higgs to bb̄ could be important. A spectacular signature of the inert doublet DM would be the observation of a monochromatic gamma-ray line from its annihilation in our galactic halo. This could serve as a promising signal of indirect detection to determine the mass of the DM. In this model, such a process goes through a dimension-six operator SS F_μν F^μν with a loop suppression (mainly the W loop). The DM initial states SS can either couple to the W loop directly, or through the SM Higgs to both W and heavy fermion loops. In fact, the associated loop functions are the same as those in the h → γγ process.
The implication of Xenon100 is the suppression of the SSh coupling and thus of the Higgs-mediated annihilation to two photons, which eliminates the possibility of any destructive cancellation. Therefore, there is a robust prediction of a lower bound on the gamma-ray line flux. The Fermi LAT experiment has put constraints on DM annihilation into gamma-ray lines in the energy range 30-200 GeV [35]. In Fig. 7, we show the cross section as a function of the DM mass for m_h = 400 GeV and different values of λ_L, as well as the experimental bound assuming different halo density functions. For the mirror case, where the DM mass is restricted to 74-76 GeV by WMAP and Xenon100, we find that the predicted annihilation cross section σv(SS → γγ) lies only a factor of 4-5 below the current Fermi LAT bound. If the future data release can push the limit down by a further order of magnitude, one will be able to verify or exclude the possibility of the inert doublet being the DM candidate.

Decaying DM: approximate Z_2 symmetry? Up to now, as in previous studies, we have assumed the dark matter to be absolutely stable. If it were to be so, it would imply the existence of an exact Z_2 symmetry. Observationally, dark matter does not have to be absolutely stable, and therefore one should be open-minded and consider the possibility that the Z_2 symmetry is only approximate. The point is that approximate global symmetries are just as useful in guaranteeing the naturalness of small couplings as unbroken ones. This is the essential criterion of naturalness. Needless to say, an unstable DM still has to be cosmologically long-lived. A decaying scalar dark matter could also lead to monochromatic gamma-rays carrying energy equal to half of its mass. This process could go through the effective dimension-five operator (ε/v) S F_μν F^μν, where ε breaks the Z_2 symmetry explicitly. Fermi LAT imposes a stringent upper bound ε ≲ 10⁻²⁶.

IV. LHC PROSPECTS

Finally, we comment on the LHC prospects of discovering or falsifying this theoretical setup.

Heavy quark states. The most obvious way to verify or falsify the above framework is to search for the heavy quarks of the extra families. Being colored states, they have large cross sections at hadron colliders. As mentioned in Sec. II, the current limits on the heavy quarks are around 350 GeV, mainly from the Tevatron. With higher energy and luminosity, the LHC can soon push the mass limits into the non-perturbative regime, where new bound states could emerge whose properties largely depend on the corresponding Yukawa couplings [36].

Inert dark matter signatures. The signatures of the inert doublet model have been studied in [37,38], focusing on the multi-lepton final states. One should keep in mind that a typical mass spectrum of the inert doublet is quite hierarchical in the mirror scenario under consideration. In particular, we find A to be the heaviest inert scalar, 450 GeV < m_A < 600 GeV, the mass of C lies in the intermediate range 250 GeV < m_C < 500 GeV, and S is very light, 50 GeV < m_S < 150 GeV. The resulting signatures after their pair production at the LHC differ slightly from the previous analyses, due to the large mass hierarchy and potential cascade decays of both A and C. Therefore, the final state leptons and jets possess large transverse momentum. Meanwhile, due to the fact that any such final state always contains a pair of S, the resulting missing energy will also typically be larger than 100 GeV. These characteristics can be fully tested when the energy of the LHC reaches 14 TeV.
Are mirror neutrinos Majorana? A priori, just as in the SM, we cannot know the nature of neutrinos. The dominant view today is the Majorana picture which, if true, would have particularly exciting consequences for the neutrinos belonging to the extra generations. Particularly interesting is the mirror case, which forces the three mirror neutrinos to be heavy neutral leptons with masses around 50-100 GeV. They could even be the source [39] of the seesaw mechanism, in which case the mirror and ordinary families would be forced to mix by a tiny amount. Although this is not mandatory, these mixings are plausible and are naturally small enough (technically, the mirror symmetry preserves their smallness) to evade the Z width constraint. In all honesty, this appealing, simple and testable seesaw picture may not be very convincing. After all, one needs new physics to generate the Majorana masses of the mirror neutrinos, and there is no reason that the new physics does not generate Majorana masses for the ordinary neutrinos, too. What about the seesaw paradigm with only one or two extra families? The former case is immediately ruled out, since one predicts only one massive ordinary neutrino. In the latter case, one has an interesting prediction of maximally hierarchical neutrinos, since only two of them are massive. This fits nicely with cosmological considerations, which keep lowering the sum of light neutrino masses [40]. Moreover, the decays of the heavy extra neutrinos N are governed by the Dirac mass terms, which are functions of the leptonic mixing matrix, the masses of the N's, and only one complex parameter [41]. This case can definitely be tested by measuring the different flavor combinations of the dilepton final states, similar to the minimal case of the type I+III seesaw [42]. This could be an example of a testable seesaw mechanism at the LHC. If one gives up the dark matter candidate, the EWPT can be satisfied in the minimal setup with only the standard Higgs doublet, in which case the masses of the N's again lie between 50 GeV and 150 GeV. In any case, irrespective of the seesaw, it is worth considering the mirror symmetry not to be exact. Once N and E are produced pairwise, their decays can lead to interesting events with two leptons and six jets and no missing energy [43]. If N is of Majorana type, the characteristic feature is an equal decay rate into leptons and antileptons [44].

V. CONCLUSION AND OUTLOOK

In recent years, the inert scalar doublet model has become one of the popular extensions of the SM, whose lightest component can play the role of the DM. Whereas its simplicity may be appealing, the inert nature is postulated by hand, which makes it more a model for, rather than of, dark matter. On the other hand, the inert nature is a natural scenario in the presence of mirror families, due to the electroweak precision constraints [3]. Mirror fermions were suggested more than 50 years ago as a way of restoring parity, and are well motivated by a number of respected theoretical frameworks: KK compactification, N = 2 supersymmetry, family unification based on large orthogonal groups and some unified models of gravity. This has encouraged us to carefully study the issue of dark matter in the context of the inert scalar doublet and mirror fermions. Since nothing in particular depends on the chirality of the extra families, we have broadened our study by including the cases of only one and two extra families.
The fourth generation has recently been the focus of a large body of research and as such deserves special merit, in spite of the scalar's inertness not being called for. The case of two extra families does not possess any special features, and thus we commented on it only in passing. We now summarize the essential features case by case.

One extra family. It is not surprising that this case passes the EWPT, since it works even with only one Higgs doublet as in the SM. For a sufficiently light S and/or extra neutrino, the Higgs mass is excluded from the window 150-200 GeV by Tevatron data; otherwise the exclusion window is larger, 131-204 GeV [22]. As far as direct DM detection is concerned, we find that the Xenon100 result restricts the DM mass to lie in the window m_h/2-76 GeV if the SM Higgs is light, and almost fixed at 74-76 GeV if the Higgs is heavy.

More extra families. Let us recall that, since the extra quarks have to lie above the direct limits, their large Yukawa couplings rule out the light Higgs window because of vacuum stability [3]. It is then enough to consider a heavy Higgs, m_h ≳ 300-400 GeV. This is just above the Tevatron exclusion region, which extends, in the case of three families, up to m_h ≈ 240 GeV. In terms of direct DM detection, the Xenon100 experiment together with the WMAP relic density constraint makes this scenario very predictive. In fact, the hadronic uncertainty in the h-nucleon coupling barely plays a role here, and the DM mass turns out to lie necessarily at 74-76 GeV. This could also lead to a characteristic signature in indirect DM searches, in terms of the spectrum of particles resulting from either annihilation or decay in the galactic halo.

Appendix A: Explicit formulae for electroweak oblique parameters: Higgs sector

In the calculation of the electroweak oblique parameters S, T and U for the Higgs sector, we apply the generic formulas given in [34] to the case with two Higgs doublets. The SM Higgs mass is denoted as m_h, while the reference point mass (corresponding to S = T = U = 0) is m_r, which is taken to be 120 GeV throughout the paper.
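Appendix A concerns assembling the Higgs-sector contributions to S, T and U; the fit qualities quoted earlier (χ² ≈ 9.0-13.5) come from comparing such predictions with the measured ellipses. A generic sketch of that comparison follows; the central values, errors, and correlation matrix are placeholders, not the actual inputs of [13].

```python
import numpy as np

def chi2_stu(pred, central, errors, corr):
    """Generic chi^2 for oblique parameters: d^T C^{-1} d with
    C_ij = err_i * err_j * corr_ij. All inputs are length 3 (S, T, U)."""
    d = np.asarray(pred) - np.asarray(central)
    cov = np.outer(errors, errors) * np.asarray(corr)
    return float(d @ np.linalg.inv(cov) @ d)

# Placeholder experimental inputs (illustrative only, not from [13]):
central = [0.0, 0.0, 0.0]
errors  = [0.10, 0.11, 0.12]
corr    = [[1.0, 0.85, -0.5],
           [0.85, 1.0, -0.5],
           [-0.5, -0.5, 1.0]]

# Example model point: the extra-family and inert-doublet contributions
# would be summed into these three numbers before evaluating the fit.
print(chi2_stu([0.3, 0.2, 0.1], central, errors, corr))
```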
2011-05-27T21:45:20.000Z
2011-05-23T00:00:00.000
{ "year": 2011, "sha1": "caad7cbd6e428ee4ec86202366912c4b75aab46d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1105.4611", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "caad7cbd6e428ee4ec86202366912c4b75aab46d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225117725
pes2o/s2orc
v3-fos-license
Waardenburg Syndrome Type-II in Twin Siblings: An Unusual Audio-Pigmentary Disorder

Waardenburg syndrome (WS) is an interesting inherited audio-pigmentary disorder. The syndrome shows no gender, racial, or ethnic predilection. This unique disorder is characterized by pigmentary abnormalities, deafness, and neural crest-derived tissue defects. WS can be recognized by some specific clinical features that appear after birth; not all affected individuals possess all the clinical features. It has four clinical subtypes based on the mutant gene and characteristic morphology. These morphological features are a broad nasal root, a white forelock, a difference in the colour of the eyes, congenital leukoderma, and sensorineural deafness. We report an interesting case of WS in twin boys who fulfill the criteria of WS-II. Our cases have four major criteria (white forelock, heterochromia, sensorineural hearing loss, first-degree relative with WS) and one minor criterion to establish the diagnosis of WS-II. Most clinical features of WS-II, except sensorineural deafness, are benign and do not need any intervention, but severe deafness can be a serious problem. The current report is unique and presents a rare case of WS in twin infants. We present this case for its rarity, the relative paucity of literature, and also to emphasize the clinical presentation of this extremely rare disease in twins.

Introduction

Waardenburg syndrome (WS) is an unusual audio-pigmentary genetic disorder [1]. The syndrome is named after the Dutch ophthalmologist P. J. Waardenburg, who described a rare syndrome comprising six unique clinical features: a broad nasal root, hypertrichosis of the medial side of the eyebrows, lateral displacement of the medial canthus, a white forelock, partial or total heterochromia iridis, and sensorineural deafness [2]. WS shows no gender, racial, or ethnic predilection. There are four clinical variants, and WS-I and WS-II are the most common types [3]. Type I is clinically manifested as congenital deafness (sensorineural), dystopia canthorum (lateral displacement of the medial eye corners), neural tube defects, and cleft palate and lip, with patchy depigmentation of the hair and skin [1,2]. These symptoms are associated with pigmentary abnormalities of the eyes. In Type II, the inner canthi of both eyes are normal, but patients have other features similar to those of type I. Type-III WS is an extreme presentation of type I with upper limb abnormalities; it is a very rare clinical form of WS [4]. Type IV WS has, in addition to the symptoms of type I, features of Hirschsprung disease [2]. As it is a genetic disease, there is no definitive treatment for WS, but supportive treatment with cochlear implants, and surgery in cases associated with Hirschsprung disease, can be offered [5]. Genetic counseling is necessary. We report here a unique case of WS in twin infants, for its rarity, the relative paucity of reported literature, and also to emphasize the clinical presentation of this extremely rare disease in twins.

Case Presentation

Twin infant boys were referred from the otolaryngology outpatient clinic (ENT) for the complaint of white patches of hair and pigmentary skin changes since birth. These symptoms were also associated with sensorineural deafness. They were born of consanguineous parents through normal vaginal delivery. Family history revealed the presence of similar skin symptoms and deafness in their paternal uncle (Figure 1). Based on the history, clinical examinations, and audiometry, they were diagnosed as a case of WS.
They were categorized as type II WS because they had a positive family history, sensorineural deafness, a white forelock, and differently coloured irises. Systemic examination and routine laboratory workup were normal in both infants. Topical emollients were given. The parents were counseled about the prognosis and about the association of the syndrome with the complications of consanguineous marriages, because only one affected gene needs to be passed on for a child to have the disease. We advised the parents to follow up regularly at the ENT outpatient clinic for further management.

Discussion

WS is a rare disease of great importance, particularly in the pediatric population [6]. This unique inherited disorder is characterized by pigmentary abnormalities, deafness, and neural crest-derived tissue defects [7]. Several different gene mutations (insertions, deletions, frameshifts, missense, and nonsense mutations) can cause WS [8]. During embryogenesis, there is an abnormal distribution of melanocytes, which results in patchy areas of depigmentation [8]. In types 1 and 3, WS is mostly due to point mutations in specific genes, which can be detected by the use of multiplex ligation-dependent probe amplification [9]. WS can be recognized by some specific clinical features that appear after birth; not all affected individuals possess all the clinical features [2]. It has four clinical subtypes based on the mutant gene and characteristic morphology [10]. These morphological features are a broad nasal root, a white forelock, a difference in the colour of the eyes, congenital leukoderma, and sensorineural deafness [10]. It is a clinical diagnosis, and the Waardenburg Consortium has proposed diagnostic criteria. The major criteria are heterochromia of the iris, sensorineural deafness, a white forelock, lateral displacement of the eye canthi, and the presence of WS in a first-degree relative [1]. The minor diagnostic criteria are a broad nasal root, white macules/patches on the skin, synophrys, premature greying of the scalp, and hypoplasia of the nasal alae [1]. Two major criteria, or one major and two minor criteria, are needed to diagnose WS type I [2]. WS-II is characterized by sensorineural deafness and heterochromia of the irides [11]. For diagnosing WS-II, Liu et al. proposed that two major criteria be fulfilled, with premature greying replacing dystopia canthorum [11] (a schematic implementation of these counting rules is sketched below). WS-III is similar to WS-I with the addition of a few musculoskeletal abnormalities, such as upper limb contractures and hypoplastic muscles [9]. WS type IV has a strong association with Hirschsprung disease [12]. We report an interesting case of WS in twin boys who fulfill the criteria of WS-II. Our cases had four major criteria (white forelock, heterochromia, sensorineural hearing loss, first-degree relative with WS) and one minor criterion to establish the diagnosis of WS-II. Most clinical features of WS-II, except sensorineural deafness, are benign in nature and do not need active intervention, but severe deafness can be a serious problem [10]. Early diagnosis, with social and vocational training and rehabilitation depending upon the extent of involvement, remains the only recourse for these patients [5]. Areas of hypopigmentation may diminish in size or even disappear with time [2]. Most of the clinical types of WS have a good prognosis [1]. Morbidity is related to defects of neural crest-derived tissues, including mental disability, deafness, ocular disorders (cataracts), and skeletal anomalies [5].
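To make the Consortium counting rules concrete, a schematic sketch follows; the criterion labels are paraphrases of the lists above, and the WS-II rule encodes the Liu et al. modification (premature greying counted as major, dystopia canthorum dropped).

```python
# Major and minor criteria as listed in the text (paraphrased labels).
MAJOR = {"heterochromia_iridis", "sensorineural_deafness", "white_forelock",
         "dystopia_canthorum", "first_degree_relative_with_WS"}
MINOR = {"broad_nasal_root", "skin_hypopigmentation", "synophrys",
         "premature_greying", "hypoplastic_alae_nasi"}

def meets_ws1(findings: set) -> bool:
    """WS-I: two major criteria, or one major plus two minor."""
    major = len(findings & MAJOR)
    minor = len(findings & MINOR)
    return major >= 2 or (major >= 1 and minor >= 2)

def meets_ws2(findings: set) -> bool:
    """WS-II (after Liu et al.): two major criteria, with premature
    greying counted as major and dystopia canthorum excluded."""
    major2 = (MAJOR - {"dystopia_canthorum"}) | {"premature_greying"}
    return len(findings & major2) >= 2

# The twins described here: four major criteria and one minor criterion.
twins = {"white_forelock", "heterochromia_iridis", "sensorineural_deafness",
         "first_degree_relative_with_WS", "broad_nasal_root"}
print(meets_ws2(twins))  # True
```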
We report a rare inherited disorder that, to the best of our knowledge, had not previously been reported in twins. Through this unique case report, we recommend that early diagnosis and prenatal genetic identification are essential. Spina bifida is a rare manifestation of WS, but its severity can make it a relatively important element of the disease [13]. Prenatal screening through transvaginal ultrasound is advisable, following the guidelines for high-risk pregnancies to monitor for neural tube defects [13].

Conclusions

We report an unusual and interesting rare genetic disorder. The treatment approach should be multidisciplinary. Genetic counseling is necessary because a single affected gene can pass the syndrome to the next generation. As there is no definitive treatment, the family's and the patient's awareness regarding symptomatic treatment is also essential. In such rare cases, early diagnosis and prenatal genetic identification are essential.

Additional Information

Disclosures

Human subjects: Consent was obtained from all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Valorization of Peels of Eight Peach Varieties: GC-MS Profile, Free and Bound Phenolics and Corresponding Biological Activities

Sustainability, now essential for food processing and technology, sets goals for the characterization of resources considered food waste. In this work, information about the GC-MS metabolites of peach peels is provided as a tool that can shed more light on the studied biological activities. In addition, the distribution patterns of free and bound phenolic compounds in peach peels of different varieties of Bulgarian origin, and the contribution of the chemical profile to their antioxidant, antimicrobial, and enzyme-inhibitory activities, were studied. A comparison of the two techniques applied for releasing the bound phenolics (alkaline and acid hydrolysis) reveals that alkaline hydrolysis is the better extraction approach. Still, the results indicate the prevalence of free phenolics in the studied peach peel varieties. The total phenolics of the peach wastes were positively correlated with their antioxidant activity. The antioxidant activity results clearly defined the need for an individual interpretation for each variety, but the free phenolics fractions could be singled out as having the strongest potential. The limited ability of the peel extracts to inhibit α-amylase and acetylcholinesterase, and their moderate antimicrobial activity, on the other hand, indicate that the potential of peach peels is still sufficient to seek ways to valorize this waste. Indeed, this new information about peach peels can be used to characterize peach fruits from different countries and/or different food processes, as well as to promote the use of this fruit waste in food preparation.

Introduction

Traditionally, plants are known to be abundant in secondary metabolites [1]. Several studies have designated the use of plants (including fruits and vegetables) rich in bioactive compounds in the management of non-communicable diseases [2,3]. Some of these compounds are also produced in the fruits [4]. Knowing the distribution of these metabolites in different parts of the fruit is valuable information, especially in the light of a circular economy and waste recovery. That is why research is no longer limited to the edible part of the fruit, but also covers the by-products generated by fruit processing [5,6]. Moreover, fruit processing generates a large amount of waste; in fact, according to the FAO, fruits and vegetables account for up to 45% of food waste in general [7]. Recently, many articles have emphasized fruit waste as a source of health-promoting biologically active substances that could be identified, extracted, and further used [8-11]. This is a sustainable approach targeting the global goal of "zero waste" in the environment. Phenolic compounds are common in a number of natural raw materials: fruits, vegetables, cereals, herbs, etc. [12,13]. They draw scientific interest due to their biological activities.

The upper phase was used to analyze the polar metabolites (amino and organic acids, carbohydrates), whereas the lower phase was used for the non-polar metabolites (saturated and unsaturated fatty acids). The two phases obtained were vacuum-dried at 40 °C in a centrifugal vacuum concentrator (Labconco Centrivap, Hampton, NH, USA). In order to extract the saturated and unsaturated fatty acids, 1.0 mL of 2% H2SO4 in methanol was added to the dried residue of the "non-polar metabolites" fraction, and the mixture was then heated for 1 h at 96 °C/300 rpm on a Thermo-Shaker TS-100.
After cooling to room temperature, the resulting solution was extracted with 3 × 10.0 mL of n-hexane. The combined organic layers were vacuum-dried at 40 °C in a centrifugal vacuum concentrator (Labconco Centrivap). Prior to analysis by GC-MS, the samples were derivatized by the following two procedures. Firstly, 300.0 µL of a methoxyamine hydrochloride solution (20.0 mg/mL in pyridine) was added to the "polar metabolites" fraction, and the mixture was heated on a thermo-shaker for 1 h at 70 °C/300 rpm. After cooling, 100.0 µL of N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) was added to the mixture, which was then heated on the thermo-shaker for 40 min at 70 °C/300 rpm. Lastly, 1.0 µL of the solution was injected into the GC-MS. Secondly, 100.0 µL of pyridine and 100.0 µL of BSTFA were added to the "non-polar metabolites" fraction and heated on the thermo-shaker for 45 min at 70 °C/300 rpm. Then, 1.0 µL of the solution was injected into the GC-MS.

GC-MS analysis was carried out using a 7890A gas chromatograph (Agilent) coupled to a 5975C mass selective detector (Agilent) and an HP-5ms fused-silica capillary column, 30 m × 0.25 mm (i.d.), coated with a 0.25 µm film of poly(dimethylsiloxane) as the stationary phase (Agilent). The oven temperature program was as follows: initial temperature 100 °C for 2 min, then 15 °C/min to 180 °C for 2 min, and after that 5 °C/min to 300 °C for 10 min; run time 42 min. The flow rate of the carrier gas (helium) was maintained at 1.2 mL/min. The injector and the transfer line temperature were kept at 250 °C; EI energy: 70 eV; mass range: 50 to 550 m/z at 1.0 s/decade. The temperature of the MS source was 230 °C. The injections were carried out in split mode (10:1); the injection volume was 1 µL. Version 2.64 of the AMDIS software (Automated Mass Spectral Deconvolution and Identification System, NIST, Gaithersburg, MD, USA) was used to deconvolute the obtained mass spectra and recognize the metabolites. The GC-MS spectra were compared with reference compounds in the Golm Metabolome Database (http://csbdb.mpimpgolm.mpg.de/csbdb/gmd/gmd.html, accessed on 25 March 2022) and the NIST'08 database (NIST Mass Spectral Database, PC-Version 5.0, 2008, from the National Institute of Standards and Technology, Gaithersburg, MD, USA), using the Kovats retention index (RI). The AMDIS 2.64 software verified the RIs of the compounds against a standard n-hydrocarbon calibration mixture (C8-C36, Restek, Teknokroma, Spain).
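For temperature-programmed runs such as the one described above, retention indices are conventionally computed by linear interpolation between the bracketing n-alkanes (the van den Dool-Kratz convention). A minimal sketch of that calculation follows; the alkane retention times are invented placeholders, not values from this study:

```python
def linear_retention_index(rt, alkane_rts):
    """Linear (van den Dool-Kratz) retention index for a
    temperature-programmed GC run.

    rt         -- retention time of the analyte (min)
    alkane_rts -- {carbon_number: retention_time} for the n-alkane
                  calibration mixture (e.g., C8-C36)
    """
    carbons = sorted(alkane_rts)
    for n, n_next in zip(carbons, carbons[1:]):
        t_n, t_next = alkane_rts[n], alkane_rts[n_next]
        if t_n <= rt <= t_next:
            return 100 * (n + (rt - t_n) / (t_next - t_n))
    raise ValueError("retention time outside the calibrated alkane range")

# Hypothetical example: analyte eluting between C10 (6.2 min) and C11 (7.9 min)
print(linear_retention_index(7.0, {10: 6.2, 11: 7.9}))  # ~1047
```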
Extraction of Free Phenolic Compounds

Free phenolic compounds of each variety's peels were extracted in triplicate as follows: 0.5 g of sample was mixed with 10 mL of 80% (80:16, v/v) ethanol, extracted at 50 °C for 30 min under ultrasound (UST 5.7150, Siel, Gabrovo, Bulgaria), and centrifuged at 10,000× g for 20 min. The phenolic extracts were filtered using filter paper (Whatman No. 1) and evaporated until near dryness (RV 10, Ika, Staufen, Germany). The final volume of the extracts was adjusted by adding 10 mL of 85% methanol (85:15, v/v), and the extracts were stored at −20 °C until further analysis.

Extraction of Bound Phenolic Compounds

Alkaline Hydrolysis Method

Bound phenolic extracts were obtained according to the method described by Ding et al. [27], with modifications. The residues of the extraction process for free phenolics were subjected to 18 h of digestion with shaking at 30 °C, using 2 M sodium hydroxide (25 mL) and under a stream of nitrogen gas. The samples were acidified to pH 1.5-2.0 with 6 M hydrochloric acid and then extracted six times with 25 mL of ethyl acetate. The upper layer was collected each time, and the collections were combined. The ethyl acetate extracts were dried to complete dryness using a rotary evaporator at 38 °C (RV 10, Ika, Staufen, Germany). The dried bound extracts were reconstituted in 10 mL of 85% HPLC-grade methanol (85:15, v/v) and stored protected from light at −20 °C until analysis.

Acid Hydrolysis Method

Bound phenolic compounds of each variety were extracted using the method reported previously [28]. The residues of the extraction process for free phenolics were treated with 25 mL of methanol/H2SO4 (90:10, v/v) at 70 °C for 24 h, with sonication during the first hour, and the resulting mixtures were neutralized with 10 M sodium hydroxide to pH 12.0 before being extracted six times with ethyl acetate. The supernatants were combined and vacuum-evaporated to dryness at 38 °C (RV 10, Ika, Staufen, Germany) before being reconstituted with 10 mL of methanol/water (85:15, v/v) and stored protected from light at −20 °C until analysis.

Determination of Total Phenolic Contents (TPC)

A modified method of Kujala et al. [29] was used to analyze the TPC, as described by Mihaylova et al. [30].

Determination of Total Flavonoid Content (TFC)

The total flavonoid content was assessed following the description of Kivrak et al. [31]. The results are expressed as µg quercetin equivalents (QE)/g dw, as quercetin was used as the standard.

Determination of Total Monomeric Anthocyanins (TMA)

The TMA content was determined using the pH differential method [32]. The results are expressed as µg cyanidin-3-glucoside equivalents (C3GE)/g dw.

DPPH Radical Scavenging Assay

The slightly modified method of Brand-Williams et al. [33], as described by Mihaylova et al. [30], was used to determine the capability of the extracts to donate an electron and scavenge 2,2-diphenyl-1-picrylhydrazyl (DPPH) radicals. The antioxidant activity is presented as a function of the concentration of Trolox with equivalent antioxidant activity, expressed as µM TE/g dw.

ABTS•+ Radical Scavenging Assay

The method of Re et al. [34] was used to estimate the extracts' ABTS•+ radical scavenging activity. The results are expressed as µM TE/g dw, with Trolox as the standard.

Ferric-Reducing Antioxidant Power (FRAP) Assay

The procedure of Benzie and Strain [35], with slight modification as described by Mihaylova et al. [30], was carried out in the FRAP assay, recording the absorbance at 593 nm and expressing the results as µM TE/g dw, with Trolox as the standard.

Cupric Ion-Reducing Antioxidant Capacity (CUPRAC) Assay

The CUPRAC assay followed the procedure of Apak et al. [36]. The results are expressed as µM TE/g dw, with Trolox as the standard.

Enzyme-Inhibitory Activities

The Sigma-Aldrich method [37], as specified by Mihaylova et al. [24], was used to carry out the α-amylase (AM)-inhibitory assay. The α-glucosidase (AG)-inhibitory assay was completed as described by Mihaylova et al. [26]. The in vitro pancreatic-lipase-inhibitory activity was assessed as described by Saifuddin et al. [38] and Dobrev et al. [39], with modifications explained in previous research [26]. The experimental conditions of the in vitro AChE-inhibitory assay were based on the method defined by Lobbens et al. [40], with modifications as reported by Mihaylova et al. [26]. All results are expressed as the concentration of extract (IC50) in mg/mL that inhibits 50% of the activity of the respective enzyme (α-amylase, α-glucosidase, lipase, or acetylcholinesterase).
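Reporting the enzyme-inhibitory results as IC50 values presupposes fitting a dose-response curve to measured percent-inhibition data. A minimal illustration using a four-parameter logistic fit follows; the concentrations and responses below are invented placeholders, not data from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(c, bottom, top, ic50, hill):
    """Percent inhibition as a function of extract concentration c (mg/mL)."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

# Hypothetical dose-response data: concentration (mg/mL) vs. % inhibition
conc = np.array([1, 2, 5, 10, 20, 40], dtype=float)
inhib = np.array([8, 15, 34, 52, 71, 83], dtype=float)

params, _ = curve_fit(four_param_logistic, conc, inhib,
                      p0=[0, 100, 10, 1], maxfev=10000)
print(f"estimated IC50 ≈ {params[2]:.1f} mg/mL")
```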
Culture Media

Luria-Bertani agar medium supplemented with glucose (LBG) was prepared as prescribed by the manufacturer (Laboratorios Conda S.A.): 50 g of the LBG solid-substance mixture was dissolved in 1 L of deionized water. The final pH was adjusted to 7.5, and the medium was then autoclaved at 121 °C for 20 min. Malt extract agar (MEA) was prepared as suggested by the manufacturer (HiMedia®, Thane, India): 50 g of the MEA solid-substance mixture was dissolved in 1 L of deionized water. The pH was corrected to 5.4 ± 0.2, and the medium was then autoclaved at 115 °C for 10 min.

Antimicrobial Activity Assay

The agar well diffusion method [41] was used to determine the antimicrobial activity. The test bacterium B. subtilis was cultured on LBG agar at 30 °C; S. aureus, L. monocytogenes, E. faecalis, S. enteritidis, E. coli, P. vulgaris, and P. aeruginosa were cultured on LBG agar at 37 °C for 24 h. The yeast C. albicans was cultured on MEA at 37 °C, while S. cerevisiae was cultured at 30 °C for 24 h. The fungi A. niger, A. flavus, Penicillium sp., Rhizopus sp., Mucor sp., and F. moniliforme were grown on MEA at 30 °C for 7 days or until sporulation. To prepare the inocula of the test bacteria/yeasts, a small amount of biomass was homogenized in 5 mL of sterile 0.5% NaCl, while for the test fungi, 5 mL of sterile 0.5% NaCl was placed into the tubes. After stirring by vortex (V-1 plus, Biosan, Riga, Latvia), the suspensions were filtered and transferred into other tubes prior to use. The number of viable cells and fungal spores was established with a Thoma counting chamber (Poly-Optik, Germany). The final concentrations were adjusted to 10⁸ cfu/mL for bacterial/yeast cells and 10⁵ cfu/mL for fungal spores, which were then inoculated into agar media that had been preliminarily melted and tempered at 45-48 °C. The inoculated media were subsequently transferred, in a quantity of 18 mL, into sterile Petri plates (d = 90 mm) (Gosselin™) and allowed to harden. After that, six wells (d = 6 mm) per plate were cut, and triplicates of 60 µL of the extracts were pipetted into the agar wells. The Petri plates were incubated under identical conditions. The diameters of the inhibition zones around the wells were measured twice, at the 24th and 48th hour of incubation, to establish the antimicrobial activity. Test microorganisms with inhibition zones ≥ 18 mm were regarded as sensitive; those with zones ranging from 12 to 18 mm were considered moderately sensitive; and those with zones ≤ 12 mm were considered resistant.

Statistical Analyses

Each sample was analyzed in triplicate, and the results are expressed as the mean ± SD. The impact of the peach variety and extraction type on the TPC, TFC, TMA, and AOA was evaluated using a two-factor analysis of variance [42]. The Tukey-Kramer post hoc test (α = 0.05) [42] was used to statistically compare the data. The web-based MetaboAnalyst platform (www.metaboanalyst.ca, accessed on 27 June 2022) [43] was used to conduct the PCA and HCA of the GC-MS data, as previously described by Mihaylova et al. [26].
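The two-factor design described above (variety × extraction type) can be reproduced with standard statistical tooling. A minimal sketch using pandas and statsmodels follows, assuming a long-format table of triplicate measurements; the values below are invented placeholders, not the study's data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical triplicate TPC measurements (mgGAE/g dw) for two varieties
# and two extraction types, mimicking the study's factorial design.
df = pd.DataFrame({
    "variety":    ["Gergana"] * 6 + ["Filina"] * 6,
    "extraction": (["free"] * 3 + ["alkaline"] * 3) * 2,
    "tpc":        [9.1, 9.3, 9.0, 3.2, 3.4, 3.1,
                   7.8, 8.0, 7.9, 2.9, 3.0, 2.8],
})

# Two-way ANOVA with interaction: tpc ~ variety * extraction
model = ols("tpc ~ C(variety) * C(extraction)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```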
GC-MS Volatile Profile Characterization of Analyzed Peach Peels

Due to the beneficial properties of metabolites, the interest in metabolite profiling is constantly growing. Information about the metabolite profile of different parts of the fruit is useful knowledge, especially in the light of waste recovery and resource scarcity. That is why research is no longer limited to the plant's edible part, but also includes the by-products generated by fruit processing, such as peels, stones, and pressed pulp, among others. In view of the above, a semi-quantitative GC-MS profile aids in the characterization of the peels of the eight peach varieties. The metabolites identified in the current study (Table 1) are divided into five groups: sugars and sugar alcohols, organic acids, amino acids, phenolic acids, and fatty acids. It is important to highlight the availability of potentially active substances in the peels, as they are often regarded as food waste. Similar to the whole fruit [26], the peach peel is most abundant in sugars. Organic acids, being intermediates in the degradation pathways of amino acids, fats, and carbohydrates, also affect properties such as the fruit's color, flavor, and aroma [44]. The organic acid content in the studied peaches, flat peaches, and nectarines is relatively similar. The documented contents of organic acids confirm that the skin contributes to flavor and might influence overall consumer acceptance, since skin color serves as a visual cue of ripeness. The amino acid content is insufficient for human daily needs, but it is apparent that early-ripening varieties have peels richer in amino acids than late-ripening varieties. Shikimic acid, one of the major organic acids in all samples, has been recognized for its neuraminidase-inhibition potential [45]; shikimic acid has also shown an anti-inflammatory effect [46]. The predominant fatty acids in the studied peels are the saturated palmitic, stearic, and behenic acids, as well as the unsaturated linoleic, oleic, and arachidic acids. Some researchers [47] point out that oleic and linoleic acids possess powerful inhibitory effects on α-glucosidase activity; they act as competitive inhibitors, and their interactions with α-glucosidase show the character of static quenching, indicating that they bind to α-glucosidase to form a complex. Recent research [48] highlights palmitic acid as a strong α-amylase inhibitor. Fatty acids also have high potency in the therapy of Alzheimer's disease due to their inhibition of cholinesterases (AChE and BChE) [49]. Chlorogenic acid is a main phenolic compound frequently present in plants [50]. Expectedly, chlorogenic acid is a major compound identified in the extracts. It results from the esterification of caffeic acid with quinic acid. The production of chlorogenic acid reduces the ability of caffeic and quinic acids to inhibit α-amylase and α-glucosidase [51].

(Table 1 notes: ND, not detected in the sample; RI, retention index; G, "Gergana"; F, "Filina"; U, "Ufo 4"; JL, "July Lady"; L, "Laskava"; FQ, "Flat Queen"; Evm, "Evmolpiya"; M, "Morsiani 90". Essential amino acids are marked in blue.)

Total Phenolic, Flavonoid and Total Monomeric Anthocyanins Contents of Free and Bound Insoluble Fractions

Phenolic compounds are responsible for both the desirable and undesirable qualities of the peach fruit [52]. The presented GC-MS profile of the peels revealed the free phenolic acids (Table 1). Although present in relatively small amounts, it is important to clarify their distribution in parts of the fruit that are often left unconsumed. Phenolic compounds are one of the main classes contributing to the biological activity of plant matrices, and of fruits in particular [53].
Traditional solvent extraction usually misses high quantities of bound phenolics that play an essential role in human health benefits. Due to their beneficial effects, the use of the full spectrum of the polyphenolic potential has attracted considerable attention, and it seems reasonable to apply different recovery techniques. The peach fruit itself is reported to be an excellent source of phenolic components [54]. Peach peels, on the other hand, are yet to be thoroughly characterized. With the above in mind, and aiming to reveal the biological potential of the peach peels, the phenolic compound profile was assessed. Based on existing reports [55,56] about the richness of stone fruits in phenolic compounds, the current study focused on evaluating their distribution in both soluble and non-soluble forms. Various techniques, including alkaline, acid, enzymatic, and ultrasound-assisted hydrolyses, can be applied in order to release the bound insoluble phenolic fractions from the cell wall [18]. A two-way ANOVA was used to evaluate the effects of peach variety and extraction type on the TPC, TFC, and TMA. The results showed that the single effects of both factors, as well as their combination, were influential (p < 0.05) for the TPC, TFC, and TMA. The highest TPC was found in the soluble phenolics extracts, showing the relationship of the total phenolic content in peach fruits with the extractable free polyphenols (Figure 1). The total TPC of the samples varied between 15.56 and 20.49 mgGAE/g dw, accounted for mainly by extractable polyphenols (from 42 to 76%). The TPC of the free soluble polyphenols was in the range of 6.82 ± 0.13 to 13.12 ± 0.09 mgGAE/g dw. Previous research also points out that free phenolic compounds are predominant in plant-based extracts [57,58]. The alkaline-hydrolyzed non-extractable polyphenols account for 16 to 31%. Other authors have reported alkaline treatment as such to be of low yield [6]. Depending on the fruit variety, acid- and alkaline-hydrolyzed polyphenols contribute in different manners to the fruit's total TPC. Furthermore, the total phenolic content of the extractable polyphenols was statistically superior to that of the non-extractable polyphenols within the same variety, a trend valid for all eight varieties. The samples with the highest TPC were the free fractions of the "Flat Queen" and "Laskava" varieties, and the lowest was that of the "Evmolpiya" variety. In terms of total TPC, the highest values were established in the "Flat Queen" variety, followed by "Laskava," confirming the contribution of the free soluble polyphenols to the total TPC. The contribution of bound phenolics amounted to a range of 7 to 31% of the total phenolics. The current results suggest that the alkaline hydrolysis method was more effective than the acid one and efficiently liberated bound phenolic compounds (Figure 1), which is comparable to other research targeting by-products, i.e., fruit peels [59,60]. Alkaline hydrolysis successfully breaks the ether and ester bonds which link phenolic compounds to the cell wall and which are common in fruit peels [18]. This might explain the trend established in the current study. Flavonoids are the largest group of polyphenols. With respect to the total flavonoid content, the free phenolics extracts had the predominant content in most of the varieties, ranging from 164.14 ± 4.72 ("Ufo 4") to 515.83 ± 30.59 µgQE/g dw ("Flat Queen").
The content of flavonoids was higher when alkaline hydrolysis was applied (Figure 1) for "Ufo 4" and "July Lady" compared with the soluble phenolics extracts. The contribution of acid-hydrolyzed flavonoids could be neglected, being below the limit of detection for all the varieties. The TFC of the peach peels varied from 380.58 to 999.38 µgQE/g dw. The alkaline-hydrolyzed fraction displayed between 0 and 76.93% of the total TFC of the samples. The established results follow the same trend as the total phenolic content. Other researchers also acknowledge that flavonoids in free form are predominant in plant-based matrices [61]. The total monomeric anthocyanins content was mainly due to the free extractable polyphenolics fraction, and the total content was in the range of 327.84 to 1246.77 µgCya-3-glu/g dw ("Gergana"). The distribution between soluble and insoluble phenolics (Figure 1) showed no or limited contribution of the acid-hydrolyzed phenolics and a relatively low input of the alkaline-hydrolyzed ones, which is not surprising given the degradation of anthocyanins with increased temperature and pH [62]. About 1%, 7%, 31%, 17%, 40%, 49%, 11%, and 0% of the TAC was present in bound form in the peels of the investigated varieties, respectively. The uneven distribution of TAC among the soluble and insoluble phenolics fractions confirms the need for an individual evaluation of the potential of peel waste as a source of biologically active substances. In brief, most of the analyses agree on the predominance of the potential of the free phenolic extracts, revealing the effectiveness of the commonly applied extraction techniques. In the current study, alkaline hydrolysis released more bound phenolics, flavonoids, and monomeric anthocyanins than acid hydrolysis.
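The fraction shares quoted throughout this section (e.g., extractable polyphenols accounting for 42-76% of the total TPC) are each fraction's percentage of the summed free and bound contents. A small illustrative computation follows, with invented placeholder values:

```python
# Share of the total phenolic content contributed by each fraction,
# as percentages of the sum free + alkaline-bound + acid-bound.
# The numbers below are invented for illustration (mgGAE/g dw).
fractions = {"free": 13.1, "bound_alkaline": 4.2, "bound_acid": 1.3}

total = sum(fractions.values())
for name, value in fractions.items():
    print(f"{name}: {100 * value / total:.0f}% of total TPC")
```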
Earlier studies have established that alkaline hydrolysis is a more comprehensive technique for extracting bound phenolics than acid hydrolysis [60,63,64]. This might be attributable to the fact that alkaline hydrolysis is able to split the ester bonds between phenolic acids and polysaccharides and to decrease phenolic acid losses [65,66], whereas acid hydrolysis mainly breaks glycosidic bonds. Contrary to the above, far more bound phenolic compounds were released by acid hydrolysis from litchi pulp extracts [67] and from apple and peach [68], which clearly indicates the need for optimal extraction conditions in each particular study. Variation in the food matrices, as well as differences in the bond types of the bound phenolics, should affect the extraction process. Additionally, Verma et al. [69] stated that elevated-temperature acid hydrolysis resulted in the loss of some phenolics. This may explain the better efficacy of alkaline hydrolysis, compared with acid hydrolysis, in releasing bound phenolics from the evaluated peach peels.

Antioxidant Activity (AOA)

The potential to recover antioxidant activity from peach peels has been reported for locally grown peach varieties [70,71]. Moreover, the authors revealed peach peels with higher antioxidant activity than the pulp, suggesting that the fruit peel is a nutrient carrier [70]. In the current study, the peach variety, the type of extraction, and their combination had an impact (p < 0.05) on the AOA (DPPH, ABTS, FRAP, and CUPRAC assays), as studied by two-way ANOVA. The assessed in vitro antioxidant potential of the investigated peach peels is strongest with regard to the free phenolics fractions (Figure 2). According to the DPPH and FRAP assays, the most potent extract was the free phenolics one, and that of the late-season "Morsiani 90" variety in particular. The free phenolic fraction of "Morsiani 90" was also the most active one according to the CUPRAC assay, although several alkaline extracts showed good prospects (Figure 2D). The antioxidant potential towards the DPPH free radical was in the range of 1.94 ± 0.04 to 43.43 ± 0.28 µMTE/g dw (Figure 2A). The FRAP assay showed values from 5.14 ± 0.18 to 123.08 ± 4.0 µMTE/g dw (Figure 2C). According to the CUPRAC assay, the results varied from 23.13 ± 0.84 to 122.97 ± 4.71 µMTE/g dw (Figure 2D). When these three in vitro assays are applied, the free phenolics extracts show the greatest potential, significantly different from the bound phenolic fractions. In the ABTS assay, for example, the most active samples were the free phenolics fractions of the early-season varieties "Gergana," "Filina," and "Ufo 4".
The results for the bound phenolic compounds released by acid hydrolysis were substantially lower than those obtained by alkaline hydrolysis for most of the varieties. The discrepancy in the antioxidant activity results confirms the need for more than one assay to be applied, bearing in mind the different aspects of the antioxidant activity mechanisms and the contributing compounds in particular. The bound phenolic fractions retrieved by alkaline and acid hydrolysis showed lower antioxidant activity in most of the peach peel extracts. Furthermore, some authors correlate a decrease in total phenolic content and antioxidant potential with ripening. Nonetheless, more than 50% of the total antioxidant activity is contributed by the extractable polyphenols [54]. However, the assessed activity is an important contribution to the overall antioxidant potential of the whole peach fruit, and of the peels in particular. In general, the alkaline- and acid-hydrolyzed fractions showed moderate activity, and no clear trend could be pointed out. This certainly underlines the need for an individual interpretation of the results for each particular variety. Tang et al. [59] reported a better potential of the bound phenolics extracts of pitahaya peel when the alkaline hydrolysis method was applied. More specifically, the authors established a good correlation between the antioxidant activity and the highest phenolic content achieved. Furthermore, the paper revealed that the hydrolysis method had a significant effect on the release of phenolics, in favor of the alkaline method. The present study validates the potential of fruit wastes, and of fruit peels in particular. Therefore, the question "to peel or not to peel" [72] seems to have an answer: peach peels are worth researching and using, as they contribute to a more beneficial uptake of compounds when the unpeeled fruit is consumed.

Inhibitory Potential towards α-Glucosidase, α-Amylase, Lipase, and Acetylcholinesterase of Analyzed Prunus persica Peels

Based on the results presented in Table 1 (the GC-MS profile of the peels), indicating a possible inhibitory potential, extracts from the peels were analyzed for inhibitory activity against α-glucosidase, lipase, α-amylase, and acetylcholinesterase (Table 2). The results are expressed as the concentration in mg/mL that inhibits 50% of the corresponding enzyme activity (IC50). The current findings may be explained by the specific metabolite profile of the peels (Table 1); for example, the fatty acid content, which is relatively high in the studied peel extracts, may be the reason for the inhibitory potential towards acetylcholinesterase.
(Table 2 notes: "-", not active, or IC50 impossible to calculate; different letters in the same column indicate statistically significant differences (p < 0.05), according to one-way ANOVA and the Tukey test (n = 3); *, a concentration that inhibits 30% of the corresponding enzyme under the described conditions. G, "Gergana"; F, "Filina"; U, "Ufo 4"; JL, "July Lady"; L, "Laskava"; FQ, "Flat Queen"; Evm, "Evmolpiya"; M, "Morsiani 90".)

None of the extracts were able to inhibit the action of lipase. The action of α-glucosidase was suppressed by most of the samples. All extracts of the "Ufo 4" and "Laskava" peels were active, with IC50 concentrations in the range of 5.9 to 39.7 mg/mL. The alkaline hydrolysate of the bound phenolics of the "Morsiani 90" sample possessed the best inhibitory activity towards α-glucosidase (IC50 of 2.6 mg/mL). The alkaline hydrolysates of the "Filina" and "July Lady" samples were both able to inhibit α-amylase, with IC50 values of 20.55 ± 0.51 and 17.38 ± 0.11 mg/mL, respectively, and AChE, with IC50 values of 31.1 ± 0.22 and 17.8 ± 0.52 mg/mL, respectively.

Antimicrobial Activity of Peach Peel Extracts

It has been proposed that phenolic compounds (i.e., flavonoids and phenolic acids) can exhibit antimicrobial properties [73]. Thus, the antimicrobial activity of the peach peel extracts was evaluated (Table 3) as an indicator of the biological potential of the peach peels. No inhibition was high enough to allow calculation of a minimal inhibitory concentration (MIC). However, antibacterial potential is valuable for fruits, helping injuries to heal and/or prolonging shelf life [74]. Against both Gram-positive and Gram-negative bacteria, as well as yeasts, the free phenolics fraction showed no activity. The inhibitory effect of the bound phenolics was more pronounced against Bacillus subtilis ATCC 6633, Listeria monocytogenes NBIMCC 8632, Pseudomonas aeruginosa ATCC 9027, and Saccharomyces cerevisiae ATCC 9763. Other authors have reported Staphylococcus aureus and Listeria monocytogenes to be more sensitive to extracts of Prunus persica varieties [75]. This is consistent with the current findings, as well as with their relation to the polyphenolic content and antioxidant activity. Koyu et al. [76] likewise found only minimal inhibitory activity of Prunus leaves against Escherichia coli, Staphylococcus aureus, Staphylococcus epidermidis, Enterococcus faecalis, Enterococcus faecium, and Candida albicans. In line with the reports of Mocanu et al. [77], the better response against B. subtilis may be a result of elevated flavonoid concentrations. Molecular weight, polarity, and side groups govern the specific inhibitory effect of each phenolic compound [78].
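When reading Table 3, the sensitivity categories follow the zone-diameter thresholds defined in the methods section (≥18 mm sensitive, 12-18 mm moderately sensitive, ≤12 mm resistant). Encoded directly as a small Python helper, with invented example readings:

```python
def classify_inhibition_zone(diameter_mm):
    """Map an inhibition-zone diameter to the sensitivity category
    used in this study (agar well diffusion, well d = 6 mm)."""
    if diameter_mm >= 18:
        return "sensitive"
    if diameter_mm > 12:
        return "moderately sensitive"
    return "resistant"

# Hypothetical readings at 24 h for three test organisms (mm)
for organism, zone in [("B. subtilis", 9), ("Rhizopus sp.", 14),
                       ("Mucor sp.", 0)]:
    print(organism, "->", classify_inhibition_zone(zone))
```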
Phenolic compounds such as acids (ferulic acid and p-coumaric acid, among others), alcohols (guaiacol, catechol, vanillyl alcohol), and aldehydes (vanillin, syringaldehyde) are regarded as the most potent inhibitors of microbial growth [79]. Ferulic and p-coumaric acids are the second and third most abundant phenolic acids in the studied samples after chlorogenic acid (Table 1). Organic acids may be responsible for the activity against Gram-positive and Gram-negative bacteria, due not only to their quantity and diverse biochemical nature but also to their ability to lower the pH [80]. Regarding the antifungal activity of the peach extracts, the results show limited activity (Table 3). The bound phenolics fractions of all varieties possess better activity than the free phenolics fractions. The peach peels show better activity towards Rhizopus sp. and Fusarium moniliforme. None of the samples inhibited the growth of the fungus Mucor sp., and against A. flavus, only the bound phenolic extracts obtained under alkaline conditions exhibited activity (inhibition zones of 8 mm). As in other reports [81], the current findings may suggest a link between the antimicrobial activity and the fatty acid and flavonoid contents of the studied extracts.

Correlation between Phenolic Compounds Content and Antioxidant Activity

Several components, usually of phenolic nature, contribute to the antioxidant activity. The correlation analysis carried out is based on the Pearson correlation coefficient, also referred to as Pearson's r, which expresses the strength and direction of a linear relationship. The total phenolic compound content was positively and significantly correlated with all of the antioxidant assays (r = 0.5059-0.8856, p ≤ 0.01). A positive relationship between total phenolic content and antioxidant activity has also been reported previously [59,82]. No significant correlation between TPC and TFC was observed (r = 0.3249, p > 0.05). No significant correlation was established between TFC and TMA, or between TFC and ABTS either, pointing out that the contribution of the TFC to the established activities is relatively low. Previous reports [59] have also stated no significant correlation between TFC and antioxidant activities in fruit peel extracts. The relatively low TFC in the free and bound fractions possibly contributes to the weak or non-existent correlation between TFC and the other assays. Among the antioxidant activity assays, the ABTS assay has a moderate correlation, significant at p ≤ 0.05, with FRAP and CUPRAC. The ABTS assay was also significantly correlated with another antiradical assay (the DPPH assay) at p ≤ 0.01. This points to the ABTS assay as a less appropriate method for evaluating the potential of the particular free and bound phenolics in peach peels. The strongest positive correlations were observed between the DPPH and FRAP assays, and between TPC and DPPH (Table 4).
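Pearson's r values such as those summarized in Table 4 can be reproduced in a few lines. The sketch below computes the coefficient and its significance for one assay pair; the per-sample values are invented placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical per-sample assay results for eight peel extracts:
# TPC (mgGAE/g dw) and DPPH (µMTE/g dw)
tpc  = np.array([6.8, 13.1, 9.4, 11.7, 12.0, 13.0, 7.5, 10.2])
dpph = np.array([4.1, 40.2, 18.5, 30.7, 33.9, 38.0, 6.3, 43.4])

r, p = stats.pearsonr(tpc, dpph)
print(f"Pearson r = {r:.4f}, p = {p:.4f}")  # strength/direction of linear link
```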
Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) of GC-MS and Phenolic Compound and AOA Assays Data

In order to verify the sample differences or resemblances, principal component analysis (PCA) and hierarchical cluster analysis (HCA) of the identified volatile compounds were utilized. According to the PCA plot (Figure 3), the first two principal components, PC1 (33.1%) and PC2 (18.6%), summed up 51.7% of the total variance of all identified volatile compounds in the analyzed peach peels. When the total flavonoid content, the total phenolic content, the total monomeric anthocyanins, and the applied antioxidant assays are taken into account, the PCA plot explains 50% of the total variance in the analyzed samples, namely 27.5% for PC1 and 22.5% for PC2. High positive loading scores in PC1 (Figure 3), which distinguish "Laskava" from the other studied peels, are shown by quinic acid, oleic acid, and fructose isomer 2. The high negative scores in PC1 clearly differentiate the "Filina" peels from the others. The "Gergana" peels stood out from the rest through the high negative scores of a number of amino acids in PC2. Figure 4A,B reveal the PCA score plots of the TPC, TMA, TFC, and AOA assays of the peach (Prunus persica L.) peels from the studied samples. The total monomeric anthocyanins content, as well as the FRAP antioxidant assay, can be characterized as important for the varieties "Laskava" and "July Lady," due to their high values. The total flavonoid content, on the other hand, is less decisive for the "Morsiani 90" peels. The ABTS values are not defining for the peels of the "Flat Queen" variety. The peels of the two nectarine varieties ("Gergana" and "Morsiani 90") were grouped in the same cluster due to their phytochemical similarity, while "July Lady" and "Laskava" were clustered in another. The results from the HCA show significant differences from those established for the whole fruit of the same varieties [83]. A clear distinction of the nectarines compared with the peach and flat peach peels is shown in the current results (Figure 5A). When the total flavonoid content, the total phenolic content, the total monomeric anthocyanins, and the applied antioxidant assays are taken into account, the heatmap (Figure 5B) shows that peaches from the same ripening period cluster together. The statistically independent variables (assays or compounds) can be read from the heatmaps (Figures 4 and 5). The peels of each variety are characterized by different linear relationships. Considering the clade arrangement in the figures, it can be assumed that the phenolic compounds and antioxidant assays lead to more distinct differences between the established clades.
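Explained-variance figures such as the 33.1%/18.6% quoted for PC1/PC2 come directly from a standard PCA on the autoscaled compound table. A hedged scikit-learn sketch follows, using random stand-in data rather than the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in data: 8 peel samples x 20 semi-quantified GC-MS metabolites
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 20))

# Autoscale each metabolite, then project onto principal components
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print("explained variance per PC:",
      np.round(100 * pca.explained_variance_ratio_, 1), "%")
print("PC1/PC2 scores of sample 1:", np.round(scores[0], 2))
```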
Conclusions

This study presents new information concerning the phytochemical peculiarities of peels from four native Bulgarian peach varieties and four varieties introduced into the geographical region of the Thracian valley. Information about the GC-MS metabolites can shed more light on the studied biological activities. The current findings constitute one of the few reports on the topic of free and bound phenolics in several peach peel varieties, including flat peaches, peaches, and nectarines. The results are consistent with the existing literature stating that alkaline hydrolysis is the better extraction approach for liberating bound phenolics. The results also indicate the prevalence of free phenolics in the studied peach peel varieties. The present findings confirm, yet again, the well-known health-promoting properties of polyphenols and the fact that fruit by-products can provide potential accessible sources of antioxidants for direct consumption. Furthermore, peach peels could be considered useful natural sources of bioactive compounds with prospective activities. In any case, they are worth contemplating for waste recovery. This study can be seen as a stepping stone in the context of functional foods enriched with natural extracts obtained through effective extraction.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Cancellation of finite-dimensional Noetherian modules

The Module Cancellation Problem solicits hypotheses that, when imposed on modules $K$, $L$, and $M$ over a ring $S$, afford the implication $K\oplus L\cong K\oplus M\Longrightarrow L\cong M$. In a well-known paper on basic element theory from 1973, Eisenbud and Evans lament the "great scarcity of strong results" in module cancellation research, expressing the wish that, "under some general hypothesis" on finitely generated modules over a commutative Noetherian ring, cancellation could be demonstrated. Singling out cancellation theorems by Bass and Dress that feature "large" projective modules, Eisenbud and Evans contend further that, although "[s]ome criteria of 'largeness' is certainly necessary in general [. . . ,] the need for projectivity is not clear." In this paper, we prove that cancellation holds if $K$, $L$, and $M$ are finitely generated modules over a commutative Noetherian ring $S$ such that $K_\mathfrak{p}^{\oplus (1+\text{dim}(S/\mathfrak{p}))}$ is a direct summand of $M_{\mathfrak{p}}$ over $S_{\mathfrak{p}}$ for every prime ideal $\mathfrak{p}$ of $S$. We also weaken projectivity conditions in the cancellation theorems of Bass and Dress and a newer theorem by De Stefani-Polstra-Yao; in fact, we obtain a statement that unifies all three of these theorems while obviating a projectivity constraint in each one. To illustrate the scope of our work, we construct a cancellation example that simultaneously eludes the three theorems just mentioned as well as many other observations from the module cancellation literature.

Introduction

The Module Cancellation Problem solicits hypotheses that, when imposed on modules K, L, and M over a ring S, afford the implication $K \oplus L \cong K \oplus M \Longrightarrow L \cong M$. The problem traces back to a theorem of Frobenius-Stickelberger from 1879 asserting that, up to isomorphism, a finite abelian group is completely determined by a list of its elementary divisors [19]. The Module Cancellation Problem therefore predates the axiomatic definitions of module and ring, which first appear in a work by Dedekind from 1893 [11, page 255]. There are essentially four areas of module cancellation research now. Theorems from one area abide by the premise that S is a module-finite algebra over a commutative ring of dimension at most 2. From the members of this family, we learn, for instance, that cancellation holds for finitely generated modules over a 0-dimensional commutative ring (Goodearl-Warfield, Theorem 18); finitely generated modules over a Dedekind domain (Hsü [34, Theorem 1]); and finitely generated torsion-free modules over a 2-dimensional regular affine C-domain, where C denotes the field of all complex numbers (Wiegand [57, Theorem 1.2]). Additional statements of this type abound in the following articles: [6] [24] [27] [28] [29] [36] [39] [41] [46] [57] [58]. Accomplishments of another strain require K, L, or M to be a direct sum of indecomposable modules. The Krull-Schmidt Theorem [38, Theorem X.7.5], for example, guarantees that cancellation holds for finite-length modules over an associative ring, and a series of observations due to Matlis [42, Theorems 2.4 and 2.5 and Propositions 2.7 and 3.1] ensures that cancellation holds for injective modules with finite Bass numbers over a commutative Noetherian ring.
A more delicate result of Vasconcelos [53, Corollary] attests that cancellation holds if S is a Noetherian normal domain with a torsion-free Picard group, K is a finitely generated S-module, and $L := I^{\oplus n}$ and $M := J^{\oplus n}$ for some ideals I and J of S and some nonnegative integer n. More information on direct sums of indecomposable modules can be found in the following works: [18] [20] [26] [40]. Advances of a third variety impose finiteness on $\operatorname{sr}(\operatorname{End}_S(K))$, the stable rank of the ring $\operatorname{End}_S(K)$ of all S-linear endomorphisms of K. (See Section 2, Subsection "Stable rank".) One such contribution by Evans [17, Theorem 2] certifies that cancellation holds as long as $\operatorname{sr}(\operatorname{End}_S(K)) = 1$; another result due to Warfield (as a solo author) [56, Theorems 1.2 and 1.6] avouches that cancellation holds if $\operatorname{sr}(\operatorname{End}_S(K))$ is finite and $K^{\oplus \operatorname{sr}(\operatorname{End}_S(K))}$ is a direct summand of M over S. Suslin [49, Corollary 8.4] supplies a large collection of examples to which we may apply the cancellation theorems of Evans and Warfield: for every nonnegative integer d, every d-dimensional affine C-algebra has stable rank $1 + d$. The abovementioned theorems of Goodearl-Warfield and Krull-Schmidt embody two additional examples of Evans's Cancellation Theorem. We direct the reader to the following sources for further information on stable rank: [4] [7] [8] [16] [21] [30] [32] [33] [37] [47] [50] [54] [55]. Findings from a fourth clade of cancellation theorems feature projective modules; here, we focus on three such achievements due to Bass [4, Theorem 9.3], Dress [12, Theorem 2], and De Stefani-Polstra-Yao [9, Theorem 3.14]. These three theorems all begin with the supposition that S is an algebra over a commutative ring R with a finite-dimensional Noetherian maximal spectrum Max(R) or, equivalently ([52, Corollary following Proposition 1]), a finite-dimensional Noetherian j-spectrum j-Spec(R). (See Section 2, Subsection "Topology".) With this hypothesis in place, Bass declares that cancellation holds if S is a module-finite R-algebra, K is a finitely generated projective S-module, and M is an S-module admitting a projective direct summand P over S with $S_{\mathfrak{m}}^{\oplus(1+\dim(\operatorname{Max}(R)))}$ a direct summand of $P_{\mathfrak{m}}$ over $S_{\mathfrak{m}}$ for every $\mathfrak{m} \in \operatorname{Max}(R)$. Dress extends Bass by first introducing two new S-modules N and P: Dress takes N to be a finitely presented S-module such that $\operatorname{End}_S(N)$ is a module-finite R-algebra, and Dress assumes that P is a direct summand of M over S such that $N_{\mathfrak{m}}^{\oplus(1+\dim(\operatorname{Max}(R)))}$ is a direct summand of $P_{\mathfrak{m}}$ over $S_{\mathfrak{m}}$ for every $\mathfrak{m} \in \operatorname{Max}(R)$. Dress also supposes that M is a direct summand of a direct sum of finitely presented S-modules and that K and P are direct summands of a direct sum of finitely many copies of N over S; the last condition on K and P is a projectivity hypothesis since it implies that $\operatorname{Hom}_S(N, K)$ and $\operatorname{Hom}_S(N, P)$ are projective right modules over $\operatorname{End}_S(N)$. Using all of these constraints, Dress proves the implication $K \oplus L \cong K \oplus M \Longrightarrow L \cong M$. De Stefani, Polstra, and Yao extend Bass in a different direction, verifying that cancellation holds if R and S are the same ring, K is a finitely generated projective S-module, and M is a finitely generated S-module with $S_{\mathfrak{p}}^{\oplus(1+\dim_X(\mathfrak{p}))}$ a direct summand of $M_{\mathfrak{p}}$ over $S_{\mathfrak{p}}$ for every $\mathfrak{p} \in X := \text{j-Spec}(S)$. The following articles offer more information on the cancellation properties of various projective modules: [10] [13] [23] [33] [35] [43] [44] [45] [47] [48].
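For ease of comparison, the local "largeness" hypotheses of the three theorems just described can be placed side by side; the displays below merely restate the conditions quoted above, writing $A \mid B$ as an abbreviation for "$A$ is a direct summand of $B$":
$$\text{Bass:}\quad S_{\mathfrak{m}}^{\oplus(1+\dim(\operatorname{Max}(R)))} \mid P_{\mathfrak{m}} \ \text{over } S_{\mathfrak{m}} \ \text{for every } \mathfrak{m} \in \operatorname{Max}(R);$$
$$\text{Dress:}\quad N_{\mathfrak{m}}^{\oplus(1+\dim(\operatorname{Max}(R)))} \mid P_{\mathfrak{m}} \ \text{over } S_{\mathfrak{m}} \ \text{for every } \mathfrak{m} \in \operatorname{Max}(R);$$
$$\text{De Stefani-Polstra-Yao:}\quad S_{\mathfrak{p}}^{\oplus(1+\dim_X(\mathfrak{p}))} \mid M_{\mathfrak{p}} \ \text{over } S_{\mathfrak{p}} \ \text{for every } \mathfrak{p} \in X := \text{j-Spec}(S).$$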
Many cancellation counterexamples complement the preceding results. One collection of counterexamples due to Kaplansky [51, Theorem 3] demonstrates the sharpness of the local hypotheses on M in Bass, Dress, and De Stefani-Polstra-Yao: for every positive integer d different from 1, 3, and 7, if S = K is the coordinate ring of the real d-sphere, L is a free S-module of rank d, and M is the S-module corresponding to the tangent bundle of the real d-sphere, then $K \oplus L \cong K \oplus M$, but $L \not\cong M$. The reader can find additional instances of the failure of cancellation in the following works: [3] [6] [25] [29] [41] [57] [58]. On one hand, the numerous sources referenced here illustrate that module cancellation theory has grown a significant amount since its origins. On the other hand, the trends in module cancellation research described above have existed since 1973, suggesting that the terrain of this discipline has not changed dramatically over the last forty years or so. Accordingly, critical remarks made by Eisenbud and Evans in 1973 on the nature of this field are still relevant today: in a well-known paper on basic element theory from 1973, Eisenbud and Evans lament the "great scarcity of strong results" in module cancellation research, expressing the wish that, "under some general hypothesis" on finitely generated modules over a commutative Noetherian ring, cancellation could be demonstrated [15, page 302]. Acknowledging that Kaplansky's counterexamples contextualize the "large" projective modules in Bass and Dress, Eisenbud and Evans nevertheless contend that, although "[s]ome criteria of 'largeness' is certainly necessary in general [. . . ,] the need for projectivity is not clear" [15, page 302]. By subsequently citing cancellation theorems by Vasconcelos [53, Corollary] and Chase [6, Theorem 3.7] that already avoid projectivity conditions [15, page 302], Eisenbud and Evans intimate, moreover, that they envision a different theorem still: a cancellation theorem that not only obviates projectivity hypotheses but also accommodates a substantial class of finitely generated modules over every commutative Noetherian ring. In response to the foregoing entreaty of Eisenbud and Evans, we offer three cancellation results in this paper. Theorem 1.1 features a "general hypothesis" that affords cancellation for finitely generated modules K, L, and M over a commutative Noetherian ring S. As the reader can verify, our hypothesis weakens the local constraints on M in Bass, Dress, and De Stefani-Polstra-Yao, consequently boasting sharpness in light of Kaplansky's counterexamples. The title of our paper honors the fact that, when S is a Jacobson ring, the premises of the following theorem imply that K is a finite-dimensional Noetherian S-module.

Theorem 1.1 (cf. Corollary 6.5). Let K, L, and M be finitely generated modules over a commutative Noetherian ring S such that $K_{\mathfrak{p}}^{\oplus(1+\dim_X(\mathfrak{p}))}$ is a direct summand of $M_{\mathfrak{p}}$ over $S_{\mathfrak{p}}$ for every prime ideal $\mathfrak{p} \in X := \text{j-Spec}(S) \cap \operatorname{Supp}_S(K)$. Then $K \oplus L \cong K \oplus M$ implies $L \cong M$.

Theorem 1.2, our main theorem, unifies the cancellation results of Bass, Dress, and De Stefani-Polstra-Yao while relaxing a projectivity criterion in each. Since this fact may not be apparent from a simple glance at our main theorem, we prove this point later in the paper by marshalling two corollaries of our main theorem (Corollary 6.4 and Corollary 6.5).
These corollaries reveal, among other things, that we can remove the projectivity constraint on K in Bass and De Stefani-Polstra-Yao, delete the requirement in Dress that P be a direct summand of a direct sum of finitely many copies of N over S, and replace the projectivity hypothesis on P in Bass with the milder requirement that P be a direct summand of a direct sum of finitely presented S-modules. We do not attempt to weaken Dress's requirement that K be a direct summand of a direct sum of finitely many copies of N over S since we see this hypothesis as an embellishment rather than as a restriction; the chief case of interest is when K = N, and our main theorem easily reduces to this case. Theorem 1.2 (Main Theorem, cf. Theorem 6.1). Let K, L, M, and N be right modules over a ring S that is an algebra over a commutative ring R, and let E := End_S(N) denote the R-algebra of all S-linear endomorphisms of N. Assume the following: (1) X := j-Spec(R) ∩ Supp_R(N) is a Noetherian subspace of the Zariski space Spec(R). (2) N is a finitely presented S-module, and E is a module-finite R-algebra. (3) There is a finitely generated left E-submodule F of Hom_S(M, N) such that, for every p ∈ X, a split surjection from M_p onto N_p^{⊕(1+dim_X(p))} can be found in F_p^{⊕(1+dim_X(p))}. Then N ⊕ L ≅ N ⊕ M =⇒ L ≅ M. More generally, if K is a direct summand of a direct sum of finitely many copies of N over S, then K ⊕ L ≅ K ⊕ M =⇒ L ≅ M. To illustrate the scope of our work, we include a cancellation example that follows from Theorems 1.1 and 1.2 but eludes every other cancellation theorem cited above. In preparation for this example, we recall the Jacobian criterion for regularity [14, Corollary 16.20], which implies the following fact: if S is an affine C-domain of dimension d ≥ 1, then, for every h ∈ {1, . . . , d}, there are infinitely many height-h prime ideals q of S such that S_q is factorial. Example 1.3. Assume the following: (1) S is an affine C-domain of dimension d ≥ 3. (2) K := q is a prime ideal of S such that S_q is factorial and 2 ≤ height(q) ≤ d − 1. (3) M := q^{⊕d} ⊕ S. Then, for every S-module L, we have K ⊕ L ≅ K ⊕ M =⇒ L ≅ M. Toward the end of the paper, we explain why this example falls outside the purview of previously published results. Before then, we establish our main theorem and discuss two of its corollaries. Section 2 lays the foundation for our work, covering conventions, definitions, and facts employed in subsequent sections. The proof of our main theorem begins in Section 3 with a study of modules with endomorphism rings of stable rank 1. In Section 4, we strengthen our hypotheses slightly, treating the case of a module with an endomorphism ring that is unit-regular modulo its Jacobson radical. Section 5 contains our main lemma (Lemma 5.4), the statement that constitutes the most difficult step in the proof of our main theorem. In Section 6, we formulate and certify our main theorem along with two immediate ramifications. The first of these is Corollary 6.4, which generalizes the cancellation theorems of Bass and Dress; the second is Corollary 6.5, which generalizes Theorem 1.1 and the De Stefani-Polstra-Yao Cancellation Theorem. The last section of our paper (Section 7) addresses why Example 1.3 evades prior responses to the Module Cancellation Problem. Foundations The purpose of this section is to collect conventions, definitions, and facts hailed throughout the rest of the paper. General conventions. Every ring is assumed to be associative with unity; every left and right module is taken to be unital; and every module over a commutative ring is presumed to be standard. The center Z(S) of a ring S is the commutative ring of all a ∈ S such that ab = ba for every b ∈ S.
A ring S is an algebra over a commutative ring R with structure map σ : R → S if σ is a ring homomorphism with σ(1_R) = 1_S and σ(R) ⊆ Z(S). An algebra S over a commutative ring R with structure map σ is module-finite over R if S is finitely generated as an R-module, relative to σ. If N is a right module over a ring S, then the ring End_S(N) of all right S-linear endomorphisms of N is understood to act on the left of N. If a ring S is a subset of a ring E, with the operations of S induced by those of E and with 1_S = 1_E, then S is a subring of E. Associative rings. Let E be a ring. The opposite ring E^opp of E is the underlying abelian group of E equipped with the reversed multiplication operation of E. The ring E is Noetherian if every ascending chain of distinct right ideals in E has finite length and every ascending chain of distinct left ideals in E also has finite length; the ring E is Artinian if it satisfies the same property as above except with the word ascending replaced by the word descending. An element e of E is (an) idempotent if e = e². An element u of E is a unit if there is a (necessarily unique) element v of E with uv = vu = 1, in which case we call v the inverse u^{−1} of u. The ring E is Dedekind-finite if, for every element u of E, the existence of an element v of E with uv = 1 implies that vu = 1. A unit of E in Z(E) is a central unit. The Jacobson radical Jac(E) of E is the intersection of the maximal right ideals of E; it coincides with the intersection of the maximal left ideals of E. The ring E is unit-regular if, for every a ∈ E, there is a unit u of E with a = aua. Topology. Let R be a commutative ring. The prime spectrum Spec(R) of R is the set of all prime ideals of R equipped with the Zariski topology; the maximal spectrum Max(R) of R is the subspace of Spec(R) consisting of the maximal ideals of R; and the j-spectrum j-Spec(R) of R is the subspace of Spec(R) composed of the prime ideals of R that are intersections of maximal ideals of R. A nonempty subset of a topological space X is irreducible if it is not the union of two of its proper closed subsets; the Krull dimension dim(X) of X is the supremum of the lengths of chains of distinct closed irreducible sets in the space; for every p ∈ X, the symbol dim_X(p) refers to the dimension of the closure of {p} in X; and X is Noetherian if every descending chain of distinct closed sets in X has finite length. By Swan [52, Corollary following Proposition 1], Max(R) and j-Spec(R) have the same dimension, and one of these spaces is Noetherian if and only if the other is. For every closed set X in j-Spec(R), we let Min(X) denote the collection of the minimal members of X with respect to set inclusion in R. If j-Spec(R) is Noetherian, then Min(X) is finite for every closed set X in j-Spec(R) [16, page 344]. If N is an R-module, then the support of N over R, denoted Supp_R(N), is the set of all p ∈ Spec(R) with N_p ≠ 0. If N is a right module over an R-algebra S such that E := End_S(N) is a module-finite R-algebra, then Supp_R(N) = Supp_R(E) is closed in Spec(R), and consequently j-Spec(R) ∩ Supp_R(N) is closed in j-Spec(R). The δ operator. Let M and N be right modules over a ring S that is an algebra over a commutative ring R, and let F be a left submodule of Hom_S(M, N) over E := End_S(N). The symbol δ(F) signifies the supremum of the nonnegative integers m such that F^{⊕m}, when viewed as a subset of Hom_S(M, N^{⊕m}), harbors a split surjection. For every p ∈ Spec(R), the symbol δ(F_p) denotes the supremum of the nonnegative integers m for which a split surjection from M_p onto N_p^{⊕m} can be found in F_p^{⊕m}. Suppose that F := Ef_1 + · · · + Ef_n, where f := (f_1, . . . , f_n)^⊤ ∈ Hom_S(M, N^{⊕n}) for some positive integer n, and let q ∈ W ⊆ X := j-Spec(R) ∩ Supp_R(N). We call the map f q-split if δ(F_q) ≥ 1 + dim_X(q), and we call f W-split if f is q-split for every q ∈ W.
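In symbols, assuming only the conventions above, the two suprema just defined read as follows.

\[
\delta(F) := \sup\{\, m \in \mathbb{Z}_{\ge 0} : F^{\oplus m} \subseteq \operatorname{Hom}_S(M, N^{\oplus m})\ \text{contains a split surjection} \,\},
\]
\[
\delta(F_{\mathfrak{p}}) := \sup\{\, m \in \mathbb{Z}_{\ge 0} : F_{\mathfrak{p}}^{\oplus m}\ \text{contains a split surjection from}\ M_{\mathfrak{p}}\ \text{onto}\ N_{\mathfrak{p}}^{\oplus m} \,\}.
\]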
Commutative rings. The symbol Z stands for the ring of all rational integers, and C signifies the field of all complex numbers. A Laurent monomial over a field k is a formal expression x_1^{g_1} · · · x_m^{g_m} f, where x_1, . . . , x_m are variables; g_1, . . . , g_m are integers; m is a positive integer; and f is an element of k. A Laurent polynomial over a field k is a sum of finitely many Laurent monomials over k. An affine algebra over a field k is a ring of the form k[x_1, . . . , x_m]/I, where I is an ideal of the polynomial ring k[x_1, . . . , x_m] in m variables x_1, . . . , x_m over k and where m is a positive integer. The height of a prime ideal p in a commutative ring R is the supremum of the lengths of chains of distinct prime ideals in R with maximal member p. A domain is a commutative ring whose zero ideal is prime. If p is a prime ideal of a d-dimensional affine domain S over a field, then height(p) + dim_X(p) = dim(X), where X := j-Spec(S) = Spec(S). A Noetherian domain is factorial if every height-1 prime ideal in the ring is principal. A domain is Bézout if every finitely generated ideal of the ring is principal. A domain is a principal ideal domain if every ideal of the ring is principal. Letting N denote the set of all positive integers, we deem a domain R to be Euclidean if there is a function ν : R \ {0} → N such that, for all a, b ∈ R with b ≠ 0, there are q, r ∈ R such that a = bq + r and such that either r = 0 or ν(r) < ν(b). Stable rank. Let E be a ring. The stable rank sr(E) of E is the infimum of the positive integers m such that, for all integers n > m and elements a_1, . . . , a_n of E with Ea_1 + · · · + Ea_n = E, there are elements b_1, . . . , b_{n−1} of E with E(a_1 + b_1a_n) + · · · + E(a_{n−1} + b_{n−1}a_n) = E. If S is a subring of E, then sr(S) and sr(E) can be compared under suitable finiteness hypotheses; moreover, sr(E) = sr(E^opp) = sr(E/Jac(E)) by results of Vaserstein. Grade and depth. Let M be a finitely generated module over a commutative Noetherian ring S with an ideal I such that IM ≠ M. An element a of I is a nonzerodivisor on M if, for every x ∈ M, the equation ax = 0 implies that x = 0. An M-sequence in I is a sequence a_1, . . . , a_n of elements of I (for some positive integer n) such that, for every m ∈ {0, . . . , n−1}, the element a_{m+1} of I is a nonzerodivisor on M/(a_1M + · · · + a_mM). We begin working toward a proof of our main lemma (Lemma 5.4) in the next section. The case of a module with an endomorphism ring of stable rank 1 Our sole objectives in Sections 3-5 are to prove Lemma 3.1 and to prove our main lemma (Lemma 5.4); these observations are the only nonstandard results on which our main theorem (Theorem 6.1) depends. The proof of Lemma 3.1 is quick, essentially relying on nothing more than the fact that a ring of stable rank 1 is Dedekind-finite [37, Lemma 1.7]. The proof of our main lemma, on the other hand, is quite complicated, resting on a large collection of statements that populate the remainder of this section and the next two sections. We split this gallery of lemmas into three sections to help the reader keep track of the hypotheses active at various points in our discussion. Throughout this section, M and N stand for right modules over a ring S, and the ring E := End_S(N) is assumed to have stable rank 1. We direct the reader to Section 2 for basic information on stable rank. Lemma 3.1. A map f ∈ Hom_S(M, N) is split surjective if and only if Ef contains a split surjection. Proof. Suppose first that Ef contains a split surjection, say ef with e ∈ E. Then there is g ∈ Hom_S(N, M) with e(fg) = 1_N. Since a ring of stable rank 1 is Dedekind-finite, fg is a unit of E with inverse e, and so f(ge) = (fg)e = 1_N. Thus f is split surjective, proving the forward implication. The reverse implication is clear.
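Since the stable rank 1 hypothesis drives everything in this section, we record the reduction it affords in its simplest form; this is a standard consequence of the definition above together with Dedekind-finiteness, written out in LaTeX for reference.

% Iterating the defining property of sr(E) = 1 shortens a unimodular
% column to a single entry, which is then a unit by Dedekind-finiteness:
\[
Ea_1 + \cdots + Ea_n = E \ (n \ge 2),\ \operatorname{sr}(E) = 1
\;\Longrightarrow\;
\exists\, c_2, \ldots, c_n \in E:\quad a_1 + c_2 a_2 + \cdots + c_n a_n\ \text{is a unit of}\ E.
\]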
The proof of our main lemma begins with the next result, which establishes a fundamental connection between δ(F ) and µ E (F ) for a given finitely generated left E-submodule F of Hom S (M, N). The δ operator is defined in Section 2. The next lemma reveals that a version of Gaussian elimination holds for matrices with entries in E. We use this statement to certify Lemma 3.6 as well as two results (Lemmas 3.4 and 4.4) directly invoked in the proof of our main lemma. . . , g n ) ⊤ ∈ Hom S (M, N ⊕n ) for some integer n 2, and suppose that δ(Eg 1 + · · · + Eg n ) m for some m ∈ {1, . . . , n}. Then there exists A ∈ M m×n (E) with A(g 1 , . . . , g n ) ⊤ ∈ Hom S (M, N ⊕m ) split surjective and with each column of the m × m identity matrix I m occupying a predetermined column of A of our choice. From the special case in which the first m columns of A form I m , we learn that there is a split surjection in the set . . , g n ) ⊤ split surjective and with k columns of I m appearing in B at desired locations. To simplify notation, assume also that (1, 0, . . . , 0) ⊤ is not the leftmost column of B but that we wish for this to be the leftmost column of A. (The general case is similar in spirit but more cumbersome in notation.) Let (z 1 , . . . , z m ) ∈ Hom S (N ⊕m , M) be a section of B(g 1 , . . . , g n ) ⊤ . Since b 1,1 (g 1 z 1 ) + n j=2 b 1,j (g j z 1 ) = 1 and since sr(E) = 1, there are d, u ∈ E with u a unit such that We can now right-multiply by the inverse of the right side of the equation to produce I m , and we can then conjugate by a matrix in GL m (E) to clear b 2,1 , . . . , b m,1 while fixing I m on the right side of the equation. This process yields a matrix . . , g n ) ⊤ is split surjective. If k = 0, then our inductive step is complete; otherwise, let p ∈ {2, . . . , m} and q ∈ {2, . . . , n} be such that the pth column of I m is the qth column of B. Then b 1,q = 0, and so inspection reveals that the qth column of C is the qth column of B. Therefore, at least k + 1 columns of I m appear in C at desired positions. This completes our inductive step and our proof of the first statement of the lemma. The second statement of the lemma is an easy corollary of the first. The following lemma helps us compare lower bounds for the δ-values of certain left Esubmodules of Hom S (M, N). We appeal to this result explicitly in the proof of our main lemma. The only purpose of our next observation is to assist us in proving Lemma 4.3, where we construct a split surjection in Hom S (M, N) exhibiting a special form. Proof. Let p ∈ E and q, r ∈ Hom S (N, M) be such that We conclude this section with a statement that allows us, during our proof of Lemma 4.4, to initiate a row reduction process stronger than the one that yields Lemma 3.3. In the next section, we continue working toward a proof of our main lemma by specializing to the case in which E/ Jac(E) is a unit-regular ring. 4. The case of a module with an endomorphism ring that is unit-regular modulo its Jacobson radical In the present section of the paper, M and N stand for right modules over a ring S with E := End S (N) such that E/ Jac(E) is a unit-regular ring. We direct the reader to Section 2, Subsections "Associative rings" and "Stable rank", for background on unit-regular rings and their relatives. Our primary goal here is to certify Lemma 4.4, the only statement from this section directly cited in the proof of our main lemma (Lemma 5.4); the two other lemmas in this section (Lemmas 4. 1 (2) of our main theorem (Theorem 6.1). 
On the other hand, Example 4.2 below illustrates that, in Lemma 4.1, the ring E cannot be replaced by an arbitrary ring of stable rank 1. Lemma 4.1. Let a, b, c ∈ E be such that aE + bE = E = Eb + Ec. Then there is d ∈ E such that, for every central unit s of E, the element b + adsc is a unit of E. Proof. Every central unit of E represents a central unit in E/ Jac(E), and every unit of E/ Jac(E) can be represented by a unit of E. Therefore, we may assume that E is unitregular. Insofar as E is unit-regular, there is a unit v of E with b = bvb. Moreover, since aE + bE = E = Eb + Ec, there are e, f, g, h ∈ E with ae + bf = 1 = gb + hc. We now claim that we may take d := e(v −1 − b)h. To prove this, let s be a central unit of E. Since b = bvb, the elements bv and vb are idempotents with (1 − bv) As a result, x := (tu)(v −1 w) is a unit of E. We claim that x = b + adsc. We can verify this as follows. First, since ae + bf = 1 = gb + hc, we have hc. Using our new expressions for t and w, we get As promised, we now demonstrate that the preceding lemma fails if we replace the ring E with an arbitrary ring of stable rank 1. The particular ring that we construct in the following example is a Euclidean domain with a countably infinite maximal spectrum. This example is minimal in two ways: First, a commutative ring of stable rank 1 is unit-regular if and only if it is 0-dimensional reduced, and our example shows that the previous lemma does not hold for an arbitrary 1-dimensional domain of stable rank 1. Second, every commutative ring with a finite maximal spectrum is unit-regular modulo its Jacobson radical, and our example indicates that a commutative ring of stable rank 1 need not have an uncountable maximal spectrum in order to violate Lemma 4.1. Another notable property exhibited by our example is that, although our ring contains a field, neither the cardinality nor the characteristic of the field plays a role in our argument. Example 4.2. There is a Euclidean domain R of stable rank 1 with a countably infinite maximal spectrum such that the following statement holds: There are elements a, b, c of R with aR + bR = R = Rb + Rc such that, for every d ∈ R, there is a (necessarily central) unit s of R with b + adsc residing in a maximal ideal of R. Proof. Let G denote a free Z-module of countably infinite rank with standard basis elements e 1 , e 2 , e 3 , . . . . Define a partial order on G by letting ∞ m=1 e m g m ∞ m=1 e m h m if and only if g m h m for every positive integer m. It is easily verified that every nonempty finite subset of G has an infimum and a supremum with respect to . Let P := k x ±1 1 , x ±1 2 , x ±1 3 , . . . denote the ring of all Laurent polynomials in a countably infinite number of variables x 1 , x 2 , x 3 , . . . over a field k. Define γ (x g 1 1 · · · x gm m f ) := e 1 g 1 + · · · + e m g m ∈ G for every nonzero Laurent monomial x g 1 1 · · · x gm m f ∈ P , where m is a positive integer; g 1 , . . . , g m are integers; and f is a nonzero element of k. Next, define γ(p 1 + · · · + p n ) := inf {γ(p 1 ), . . . , γ(p n )} ∈ G for all positive integers n and nonzero nonassociate Laurent monomials p 1 , . . . , p n ∈ P . Let Q denote the field of fractions of P , and let Q * denote the group of units of Q. Define for all nonzero Laurent polynomials p, q ∈ P . 
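In symbols, the extension of γ from nonzero Laurent polynomials to all of Q* is the standard multiplicative one; we record the formula for definiteness (it is the only extension compatible with γ being a group homomorphism on Q*, as invoked next):

\[
\gamma\!\left(\frac{p}{q}\right) := \gamma(p) - \gamma(q) \qquad \text{for all nonzero Laurent polynomials } p, q \in P.
\]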
By Heinzer [30], the map γ : Q * → G thus defined is a surjective group homomorphism with kernel equal to the group of units of the ring R := {r ∈ Q * : 0 G γ(r)}∪{0 Q }, and R is a one-dimensional Bézout domain of stable rank 1 with Max(R) = {x 1 R, x 2 R, x 3 R, . . .} and with x 1 R, x 2 R, x 3 R, . . . distinct. Since every prime ideal of R is principal, R is Noetherian by a well-known theorem of Cohen. Since R is Noetherian Bézout, R is a principal ideal domain. Since R is a principal ideal domain of stable rank 1, the ring R is a Euclidean domain by Estes-Ohm [16, Theorem 5.3]. Let (a, b, c) := (x 1 , x 2 , 1) so that aR + bR = R = Rb + Rc. We must show that, for every d ∈ R, there is a unit s of R with b + adsc residing in a maximal ideal of R. If d = 0, then we may let s be an arbitrary unit of R to finish. Suppose that d = 0. Let γ(d) := e 1 g 1 + · · · + e m g m for some positive integer m and nonnegative integers g 1 , . . . , g m . Then d = x g 1 1 · · · x gm m u for some unit u of R. Let t := x 1 du −1 , and note that Hence, The next lemma supplies a hypothesis guaranteeing the existence of a particular kind of split surjection in Hom S (M, N). Let (e, f ) be a split surjection in Hom S (N ⊕ M, N). Let g ∈ Hom S (M, N), and suppose that Ef + Eg contains a split surjection. Then there are maps d ∈ E and z ∈ Hom S (N, M) such that, for every central unit s of E, the map (f + edsg)z is a unit of E with (f + edg)z = 1 in particular. Proof. Let h be a split surjection in Ef + Eg. By Lemma 3.5, there is z ∈ Hom S (N, M) such that Now, by Lemma 4.1, there is d ∈ E such that, for every central unit s of E, the element (f + edsg)z is a unit of E. If u := (f + edg)z = 1, then we may replace z with zu −1 . The last result of this section affords a major inductive step in the proof of our main lemma. We establish our main lemma in the next section after adapting three results from a previous paper by the author [2] to suit our purposes here. Our main lemma In this section, we assume that M and N are right modules over a ring S that is an algebra over a commutative ring R, and we let E := End S (N) denote the R-algebra of all S-linear endomorphisms of N. We also fix the following hypotheses, which manifest as Conditions (1) and (2) in our main theorem (Theorem 6.1): (1) X := j-Spec(R) ∩ Supp R (N) is a Noetherian subspace of the Zariski space Spec(R). (2) N is a finitely presented S-module, and E is a module-finite R-algebra. Proofs of the first three lemmas of this section are similar to proofs of three analogous lemmas from a previous paper by the author [2]. Accordingly, rather than provide proofs of the next three lemmas, we simply point out the results on which they depend and the lemmas from the earlier paper [2] to which they correspond. We remind the reader that the set Λ appearing in Lemmas 5.2 and 5.3 is defined in Section 2. Our main lemma rounds out this section. N). Then the set Λ of test points of δ(F − ) in X is finite. On top of that, for every p ∈ X \ Λ, there is q ∈ Λ with q p and δ(F q ) = δ(F p ). Proof. The proof of [2, Lemma 5.7] can serve as a guide to the reader, although here we must appeal to Lemmas 3.4 and 5.2 from this paper. Once again, the ring S need not be a module-finite R-algebra here. On the other hand, the requirement in [2, Lemma 5.7] that X have finite dimension is harmless here since it is implied by the conditions of the present lemma; see Remark 6.2. We are now ready to establish our main lemma. 
This statement confirms and generalizes Conjecture 3.2.9 from the author's PhD dissertation [1]. Our main lemma also confirms [1, Conjecture 3.2.8] and affirmatively answers [2, Question 8.23] if R = S = End S (N) and if, in our definition of a q-split map from Section 2, we replace the number 1 + dim X (q) with t + dim X (q) for some fixed positive integer t. a map (f 1 , . . . , f n ) ⊤ ∈ Hom S (M, N ⊕n ) that is X-split for some integer n 2, and let e ∈ E be such that (e, f 1 ) ∈ Hom S (N ⊕ M, N) is split surjective. Then there are d 1 , . . . , d n−1 ∈ E such that Necessarily, will be split surjective. Proof. First of all, the conditions listed at the beginning of this section imply that E p ∼ = (End S (N)) p is a ring of stable rank 1 and that E p / Jac(E p ) is a unit-regular ring for every p ∈ X. Hence, whenever we localize at a member p of X, we may apply the lemmas from Sections 3 and 4 to the right S p -modules M p and N p and the R p -algebra E p . We will use this fact tacitly at various points in the proof at hand. Now, let Λ denote the set of test points of δ(F − ) in X. By Lemma 5.2, the set Λ is finite. List the members of Λ so that no member contains any of its predecessors. Let q ∈ Λ, and suppose inductively that there are a 1 , . . . , a n−1 ∈ E such that (g 1 , . . . , g n−1 ) ⊤ := (f 1 + ea 1 f n , f 2 + a 2 f n , . . . , f n−1 + a n−1 f n ) ⊤ is p-split for every predecessor p of q. Then, necessarily, (e, g 1 ) will be split surjective. Let J be the intersection of the predecessors of q, and let g n := f n . To complete our inductive step, it suffices to find b 1 , . . . , b n−1 ∈ E and r 1 , . . . , r n−1 ∈ J \ q such that (h 1 , . . . , h n−1 ) ⊤ := (g 1 + eb 1 r 1 g n , g 2 + b 2 r 2 g n , . . . , g n−1 + b n−1 r n−1 g n ) ⊤ is q-split: Since r 1 , . . . , r n−1 ∈ J \ q, Nakayama's Lemma will imply that (h 1 , . . . , h n−1 ) ⊤ is also p-split for every predecessor p of q, and, necessarily, (e, h 1 ) will be split surjective. Suppose first that m = n. Then m 2, and so δ(G q ) n − 1 by Lemma 3.4. Now, (g 1 , . . . , g n−1 ) ⊤ is q-split, which implies that we may take b 1 := · · · := b n−1 := 0 E and that we may take r 1 , . . . , r n−1 to be arbitrary members of J \ q to complete our inductive step. For the remainder of the proof, then, suppose that m n − 1. Since (f 1 , . . . , f n ) ⊤ is q-split, we must have m 1 + dim X (q). (We use this fact near the end of the proof.) By Lemma 4.4, there is c 1 ∈ E q such that, for every central unit s 1 of E q , there is c 2 ∈ E q such that, for every central unit Let k ∈ {0, . . . , m − 1}, and suppose inductively that we have chosen appropriate elements c 1 , . . . , c k of E q and central units s 1 , . . . , s k of E q relative to the last display. Based on these choices, select c k+1 ∈ E q appropriately; find b k+1 ∈ E and r k+1 ∈ J \ q such that c k+1 = b k+1 /r k+1 ∈ E q ; and let s k+1 := r 2 k+1 /1 ∈ E q so that c k+1 s k+1 = b k+1 r k+1 /1 ∈ E q . By induction, there are b 1 , . . . , b m ∈ E and r 1 , . . . , r m ∈ J \ q such that, if (h 1 , . . . , h n−1 ) ⊤ := (g 1 + eb 1 r 1 g n , g 2 + b 2 r 2 g n , . . . , g m + b m r m g n , g m+1 , . . . , g n−1 ) ⊤ and H := Eh 1 + · · · + Eh n−1 , then δ(H q ) m 1 + dim X (q). Hence (h 1 , . . . , h n−1 ) ⊤ is q-split. So we may complete our inductive step by setting b m+1 := · · · := b n−1 := 0 E and by letting r m+1 , . . . , r n−1 be arbitrary members of J \ q. Now, by induction on the members of Λ, there are d 1 , . . . 
, d_{n−1} ∈ E such that (f_1 + ed_1f_n, f_2 + d_2f_n, . . . , f_{n−1} + d_{n−1}f_n)^⊤ is Λ-split and, therefore, X-split by Lemma 5.3. This certifies the first statement of the lemma, and the second statement of the lemma is evident, given the first. With the proof of our main lemma now complete, we are poised to establish our main theorem. We accomplish this goal in the next section. Our main theorem and two corollaries In this section, we prove our main theorem (Theorem 6.1) and two corollaries (Corollaries 6.4 and 6.5). As promised in our introduction, Corollary 6.4 contains the cancellation theorems of Bass [4, Theorem 9.3] and Dress [12, Theorem 2], and Corollary 6.5 recovers Theorem 1.1 from this paper as well as the De Stefani-Polstra-Yao Cancellation Theorem [9, Theorem 3.14]. We rehash our main theorem here to aid the reader. The statement below is identical to Theorem 1.2 with the exception of Condition (3), which we rephrase here using the δ operator. Theorem 6.1 (Main Theorem, cf. Theorem 1.2). Let K, L, M, and N be right modules over a ring S that is an algebra over a commutative ring R, and let E := End_S(N) denote the R-algebra of all S-linear endomorphisms of N. Assume the following: (1) X := j-Spec(R) ∩ Supp_R(N) is a Noetherian subspace of the Zariski space Spec(R). (2) N is a finitely presented S-module, and E is a module-finite R-algebra. (3) There is a finitely generated left E-submodule F of Hom_S(M, N) such that, for every p ∈ X, we have δ(F_p) ≥ 1 + dim_X(p). Then N ⊕ L ≅ N ⊕ M =⇒ L ≅ M. More generally, if K is a direct summand of a direct sum of finitely many copies of N over S, then K ⊕ L ≅ K ⊕ M =⇒ L ≅ M. Proof. There is a right S-module Q with Q ⊕ K ≅ N^{⊕m} for some positive integer m. Hence N^{⊕m} ⊕ L ≅ N^{⊕m} ⊕ M, and so, by induction on m, we may assume that ϕ : N ⊕ L → N ⊕ M is an isomorphism. Write the composite of ϕ^{−1} with the projection of N ⊕ L onto N as (e, f_1) ∈ Hom_S(N ⊕ M, N), a split surjection with e ∈ E and f_1 ∈ Hom_S(M, N). By Condition (3), there are f_2, . . . , f_n ∈ F (for some integer n ≥ 2) such that F = Ef_2 + · · · + Ef_n, and so (f_1, . . . , f_n)^⊤ ∈ Hom_S(M, N^{⊕n}) is X-split. Hence, by Conditions (1) and (2), we may apply our main lemma (Lemma 5.4) iteratively n − 1 times to obtain f_0 ∈ F such that f_1 + ef_0 is X-split. Lemma 3.1, combined with Conditions (1) and (2), then implies that f_1 + ef_0 is split surjective over S. Let g ∈ Hom_S(N, M) be a section of f_1 + ef_0, and let U be the automorphism of N ⊕ M that these data determine. Then U is a unit of End_S(N ⊕ M). Hence the resulting composite N ⊕ L → N ⊕ M is an isomorphism. By the Five Lemma, L ≅ M. Remark 6.2. The conditions of our main theorem collectively imply that dim(X) is finite: if not, then there is q ∈ Min(X) with δ(F_q) ≥ 1 + dim_X(q) = ∞, and so Lemma 3.2 tells us that N_q = 0, contrary to the hypothesis that q ∈ X ⊆ Supp_R(N). By the same reasoning, we do not need to assume in Corollary 6.5 below that dim(X) is finite since this property is forced by the constraints there. However, we do assume that the dimension of Y := Max(R) ∩ Supp_R(N) is finite in Lemma 6.3 and Corollary 6.4 below; the reason is that, in each of these findings, we must account for the possibility that Hom_S(P, N) is not finitely generated as a left module over End_S(N). Lemma 6.3. Let N and P be right modules over a ring S that is an algebra over a commutative ring R, and let E := End_S(N) denote the R-algebra of all S-linear endomorphisms of N. Assume the following: (1) Y := Max(R) ∩ Supp_R(N) is a finite-dimensional Noetherian subspace of the Zariski space Spec(R). (2) N is a finitely presented S-module. (3) P is a direct summand of a direct sum of finitely presented right S-modules, and N_m^{⊕(1+dim(Y))} is a direct summand of P_m over S_m for every m ∈ Y.
Then there is a finitely generated left E-submodule F of Hom_S(P, N) such that, for every p in X := j-Spec(R) ∩ Supp_R(N), we have δ(F_p) ≥ 1 + dim_X(p). Proof. The proof is similar to that of [2, Lemma 4.2] and does not rely on the requirement in [2] that S be a module-finite R-algebra. The proof here depends only on Lemma 5.1 from this paper. We can now state and prove our joint generalization of Bass [4, Theorem 9.3] and Dress [12, Theorem 2]. The following corollary of our main theorem recovers Bass when we assume that P is a projective S-module and that N = S = E. Corollary 6.4 reduces to Dress when we require M to be a direct summand of a direct sum of finitely presented S-modules and we require P to be a direct summand of a direct sum of finitely many copies of N over S. Corollary 6.4. Let K, L, M, N, and P be right modules over a ring S that is an algebra over a commutative ring R, let E := End_S(N), and assume the following: (1) Y := Max(R) ∩ Supp_R(N) is a finite-dimensional Noetherian subspace of the Zariski space Spec(R). (2) N is a finitely presented S-module, and E is a module-finite R-algebra. (3) P is a direct summand of M over S and of a direct sum of finitely presented right S-modules, and N_m^{⊕(1+dim(Y))} is a direct summand of P_m over S_m for every m ∈ Y. Then, whenever K is a direct summand of a direct sum of finitely many copies of N over S, we have K ⊕ L ≅ K ⊕ M =⇒ L ≅ M. Proof. From Condition (1), we glean that X := j-Spec(R) ∩ Supp_R(N) is a Noetherian space. Condition (2) here coincides with Condition (2) of our main theorem. Conditions (1)-(3), in tandem with Lemma 6.3, indicate that Hom_S(P, N) contains a finitely generated left E-submodule F of Hom_S(M, N) such that δ(F_p) ≥ 1 + dim_X(p) for every p ∈ X. To finish our proof, we now simply appeal to our main theorem. We close this section with a joint generalization of Theorem 1.1 and the De Stefani-Polstra-Yao Cancellation Theorem. Corollary 6.5. Let K, L, M, N, and P be right modules over a ring S that is an algebra over a commutative ring R, and assume the following: (1) R is a Noetherian ring. (2) N is a finitely generated S-module, and S is a module-finite R-algebra. (3) P is a finitely generated direct summand of M over S, and N_p^{⊕(1+dim_X(p))} is a direct summand of P_p over S_p for every p ∈ X := j-Spec(R) ∩ Supp_R(N). Then, whenever K is a direct summand of a direct sum of finitely many copies of N over S, we have K ⊕ L ≅ K ⊕ M =⇒ L ≅ M. Proof. Condition (1) guarantees that X is a Noetherian space. Conditions (1) and (2) work together to ensure that N is a finitely presented S-module and that E := End_S(N) is a module-finite R-algebra. Conditions (1)-(3) collectively imply that F := Hom_S(P, N) is a finitely generated left E-submodule of Hom_S(M, N) such that δ(F_p) ≥ 1 + dim_X(p) for every p ∈ X. An application of our main theorem completes the proof. Note that the preceding corollary covers Example 1.3. In the next section, we demonstrate that this example dodges many previously published cancellation theorems, including all those cited in our introduction. A cancellation example In our introductory section, we claim that Example 1.3 is not covered by any of the cancellation theorems preceding Theorem 1.1. We prove this assertion in the present section of the paper after establishing the two lemmas below. Lemma 7.1. Assume the following: (1) S is a Noetherian local factorial domain of dimension at least 2 with maximal ideal q. (2) M := q^{⊕g} ⊕ S^{⊕h} for some positive integers g and h. (3) N is an S-module such that N^{⊕(g+h)} is a direct summand of M over S. Then N = 0. Proof. Suppose, by way of contradiction, that N ≠ 0. Then N is a rank-1 torsion-free S-module and can, therefore, be identified with a nonzero ideal of S. Moreover, letting f := g + h, we find that N^{⊕f} ≅ q^{⊕g} ⊕ S^{⊕h}. By Heitmann-Wiegand [31, Theorem 8], the display implies that the ideals N^f and q^g are isomorphic. Since q^g has height at least 2 in the Noetherian factorial domain S, the ideal q^g has grade at least 2 on S, and so the natural map S → Hom_S(q, S) is an isomorphism. Thus, there is a ∈ S such that N^f = aq^g. Now, for every height-1 prime ideal p of S, the equation N^f = aq^g implies that N^f_p = aS_p. This observation, combined with our assumption that S is a factorial domain, ensures that there is an element b of S (possibly a unit of S) with b^fS = aS.
Hence, N^f = b^fq^g. Letting F := b^{−1}N, we may write F^f = q^g, which implies that F ⊆ q. Thus, F^f ⊆ q^f ⊆ q^g = F^f, forcing q^f = q^g. Since f − g = h ≥ 1, Nakayama's Lemma then implies that q = 0, a contradiction. Lemma 7.2. Assume the following: (1) R is a commutative ring with a Noetherian maximal spectrum of finite dimension e. (2) S is a d-dimensional affine C-algebra that is also an R-algebra. (3) N is a faithful S-module such that E := End_S(N) is a module-finite R-algebra. Then 1 + d ≤ sr(E) ≤ 1 + e. Proof. Since S is a d-dimensional affine C-algebra, we have sr(S) = 1 + d. Recall the data of Example 1.3: (1) S is an affine C-domain of dimension d ≥ 3. (2) K := q is a prime ideal of S such that S_q is factorial and 2 ≤ height(q) ≤ d − 1. (3) M := q^{⊕d} ⊕ S. Then, for every S-module L, we have K ⊕ L ≅ K ⊕ M =⇒ L ≅ M. Some cancellation theorems are easy to rule out as potential precedents for this example. For instance, in Goodearl-Warfield [22, Theorem 18], Krull-Schmidt [38, Theorem X.7.5], and Evans [17, Theorem 2], it is the case that sr(End_S(K)) = 1, but here K is a proper ideal of an affine C-algebra S of dimension d ≥ 3 with grade_S(K) ≥ 2, so sr(End_S(K)) = sr(S) = 1 + d ≥ 4 by Suslin [48]. In Bass and De Stefani-Polstra-Yao, it is assumed that K, L, or M is a projective S-module, but here K is an ideal of height at least 2 in the commutative Noetherian ring S, and K is a direct summand of L and M over S, so none of the three S-modules in question is projective. Dismissing Vasconcelos [53, Corollary] takes slightly more effort. Recall that, in Vasconcelos, M ≅ J^{⊕n} for some ideal J of S and some nonnegative integer n. Suppose, by way of contradiction, that this is true in our example. Then, by rank considerations, n = 1 + d. Applying Lemma 7.1, we find that J_q = 0. As a result, M_q = 0, contrary to our hypothesis that the domain S embeds into M. So Vasconcelos does not apply to our example. Warfield ensures that cancellation holds if sr(End_S(K)) is finite and K^{⊕sr(End_S(K))} is a direct summand of M over S. Of course, by induction, we can replace Warfield's specifications with the requirement that K be a direct summand of a direct sum of finitely many copies of some module N over S for which sr(End_S(N)) is finite and N^{⊕sr(End_S(N))} is a direct summand of M over S. Suppose, by way of contradiction, that this more general hypothesis holds in our example. Since K is a faithful direct summand of a direct sum of finitely many copies of N over S, we see that N is a faithful S-module. Since N is a submodule of the Noetherian S-module M by hypothesis, we see that N is a Noetherian S-module and that, consequently, End_S(N) is a module-finite S-algebra. Combining the last two observations with our assumption that S is a d-dimensional affine C-algebra, we can apply Lemma 7.2 with R := S to conclude that sr(End_S(N)) = 1 + d. Applying this to our hypothesis that N^{⊕sr(End_S(N))} is a direct summand of M over S, we find that N_q^{⊕(1+d)} is a direct summand of M_q over S_q. Lemma 7.1 then implies that N_q = 0, contrary to the claim that N is a faithful Noetherian S-module. This contradiction certifies that even the more general version of Warfield described above does not cover our example. In Dress, S is an algebra over a commutative ring R with a finite-dimensional Noetherian maximal spectrum; N is an S-module such that End_S(N) is a module-finite R-algebra and such that N_m^{⊕(1+dim(Max(R)))} is a direct summand of M_m over S_m for every m ∈ Max(R); and K is a direct summand of a direct sum of finitely many copies of N over S. Suppose, by way of contradiction, that these three conditions hold in our example.
As in our treatment of Warfield above, since K is a faithful direct summand of a direct sum of finitely many copies of N over S, we see that N is a faithful S-module. Since S is a d-dimensional affine C-algebra by hypothesis, we may set e := dim(Max(R)) and apply Lemma 7.2 to conclude that 1 + d ≤ 1 + e. Let σ : R → S be the structure map of the R-algebra S. Since q is a prime ideal of S, the set p := σ^{−1}(q) is a prime ideal of R, and σ(R \ p) is a multiplicatively closed subset of S \ q. By Dress's local condition, N_p^{⊕(1+e)} is a faithful direct summand of M_p over S_p. Combining two previous observations with this fact, we find that N_q^{⊕(1+d)} is a nonzero direct summand of M_q over S_q, contrary to Lemma 7.1. Dress, therefore, cannot yield our example. To close, we would like to return to the issue of why neither Bass nor De Stefani-Polstra-Yao applies to our example. Above, we simply point to the fact that K is not a projective S-module, and indeed this shows that we cannot apply Bass or De Stefani-Polstra-Yao directly. However, the reader might wonder whether there is a way to reduce our example to one that is covered by Bass or De Stefani-Polstra-Yao. Below, we entertain three ruminations of this type and arrive at a dead end in each situation. One approach would be to take the isomorphism K ⊕ L ≅ K ⊕ M and apply Hom_S(K, −) or Hom_S(−, S) to it. In the first case, we would find that S ⊕ Hom_S(K, L) ≅ S ⊕ S^{⊕(1+d)}, and then we would infer from Bass or De Stefani-Polstra-Yao that Hom_S(K, L) ≅ S^{⊕(1+d)}; however, with the last isomorphism, we would fail to recover M = K^{⊕d} ⊕ S. In the second case, we would initially discover that S ⊕ Hom_S(L, S) ≅ S ⊕ S^{⊕(1+d)} and subsequently marshal Bass or De Stefani-Polstra-Yao to conclude that Hom_S(L, S) ≅ S^{⊕(1+d)}, but by then M would have once again escaped us. Another approach would be to apply Hom_S(−, K) to the isomorphism K ⊕ L ≅ K ⊕ M. In this approach, we would learn that S ⊕ Hom_S(L, K) ≅ S ⊕ S^{⊕d} ⊕ K and then gather from Bass or De Stefani-Polstra-Yao that Hom_S(L, K) ≅ S^{⊕d} ⊕ K. Upon applying Hom_S(−, K) to the last isomorphism, we would ascertain further that Hom_S(Hom_S(L, K), K) ≅ M. However, at that point, we would need to know that L ≅ Hom_S(Hom_S(L, K), K) in order to prove that L ≅ M. In contrast, our main theorem reveals that L ≅ Hom_S(Hom_S(L, K), K) as a corollary of the fact that L ≅ M. A third approach would be to try to deduce, without using our main theorem, that the isomorphism K ⊕ L ≅ K ⊕ M implies the isomorphism S ⊕ L ≅ S ⊕ M in our example; this approach is inspired by Chase [6, Theorem 3.6]. The hope underlying this approach is that, upon procuring the second isomorphism, we would be able to apply Bass or De Stefani-Polstra-Yao. To dismiss this approach, it suffices to show that M satisfies neither the local condition in Bass nor the local condition in De Stefani-Polstra-Yao. We can settle the case of Bass by mimicking our treatment of Dress above with K and N redefined as S and with M still equal to q^{⊕d} ⊕ S for some prime ideal q of S such that S_q is factorial and 2 ≤ height(q) ≤ d − 1. For the other case, first recall that, in De Stefani-Polstra-Yao, S is a commutative ring such that S_q^{⊕(1+dim_X(q))} is a direct summand of M_q over S_q. Suppose, by way of contradiction, that our example satisfies this condition. Let c := dim_X(q). Then M_q ≅ G_q ⊕ S_q^{⊕(1+c)} for some S-module G.
Since M_q = q_q^{⊕d} ⊕ S_q by hypothesis and since sr(S_q) = 1, Evans's Cancellation Theorem [17, Theorem 2] implies that G_q ⊕ S_q^{⊕c} ≅ q_q^{⊕d}. Applying Hom_{S_q}(−, q_q) to the last isomorphism, we find that Hom_{S_q}(G_q, q_q) ⊕ q_q^{⊕c} ≅ S_q^{⊕d}. Since S_q is a Noetherian local factorial domain of dimension at least 2 with maximal ideal q_q, the grade of q_q on the module q_q equals 1, while the grade of q_q on S_q is at least 2. Hence, regarding the last display, the grade of q_q on the left side is at most 1 whereas the grade of q_q on the right side is at least 2, a contradiction. So, even if there is a way to prove that S ⊕ L ≅ S ⊕ M without our main theorem, neither Bass nor De Stefani-Polstra-Yao can be applied to this isomorphism to yield the conclusion that L ≅ M. In summary, our main theorem reveals new information relative to more than eight cancellation results spanning four schools of module cancellation research. Each school exploits one of the following types of mathematical objects: module-finite algebras over commutative rings of dimension at most 2, direct sums of indecomposable modules, modules with endomorphism rings of finite stable rank, and projective modules. Our main theorem responds to an adjuration of Eisenbud and Evans from 1973 calling for a unified cancellation theorem that, on one hand, circumvents projectivity conditions and, on the other hand, covers a robust collection of finitely generated modules over every commutative Noetherian ring. Our main theorem fulfills this request by generalizing three established cancellation results while weakening a projectivity hypothesis in each one. The cancellation example from this section provides one concrete way to distinguish our main theorem from its many predecessors.
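For concreteness, the computation behind the first reduction above can be written out; it uses only the identifications End_S(q) ≅ S and Hom_S(q, S) ≅ S, both of which follow from grade_S(q) ≥ 2, as noted above.

\[
\operatorname{Hom}_S(K, K \oplus M) \;\cong\; \operatorname{End}_S(q) \oplus \operatorname{End}_S(q)^{\oplus d} \oplus \operatorname{Hom}_S(q, S) \;\cong\; S \oplus S^{\oplus(1+d)},
\]

so applying Hom_S(K, −) to K ⊕ L ≅ K ⊕ M produces S ⊕ Hom_S(K, L) ≅ S ⊕ S^{⊕(1+d)}, which is the isomorphism used in the first rumination.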
2021-08-10T01:16:15.331Z
2021-08-08T00:00:00.000
{ "year": 2021, "sha1": "796691d308e5e9c4cd3c3afab8c9221275eb4f74", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "796691d308e5e9c4cd3c3afab8c9221275eb4f74", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
8589129
pes2o/s2orc
v3-fos-license
Distal amputations for the diabetic foot Minor amputations in diabetic patients with foot complications have been well studied in the literature but controversy still remains as to what constitutes successful or non-successful limb salvage. In addition, there is a lack of consensus on the definition of a minor or distal amputation and a major or proximal amputation for the diabetic population. In this article, the authors review the existing literature to evaluate the efficacy of minor amputations in this selected group of patients in terms of diabetic limb salvage and also propose several definitions regarding diabetic foot amputations. One of the most valuable strategies for managing the diabetic foot is to prevent the development of foot complications, since neuropathic foot ulceration can often lead to loss of a limb due to a major amputation, i.e. a below-the-knee amputation (BKA). For this to be achieved, patients diagnosed with diabetes mellitus must be subjected to early annual foot screening programs (1). Once a diabetic foot complication has developed, the next best strategy is to treat this complication early in a hospital setting by a multidisciplinary diabetic foot team (2,3). The objective of early and efficacious treatment is to achieve limb salvage in order to avoid the loss of a limb from a major amputation. However, controversy still remains in the existing literature as to what constitutes successful or non-successful diabetic limb salvage as well as a definition of a minor or distal amputation and a major or proximal amputation. Izumi et al. (4) referred to a Syme amputation as a major amputation, and in a study by Evans et al. (5), no Syme amputation was included where the authors compared forefoot and midfoot amputations to BKA. Svensson et al. (6) compared a BKA group with an above-the-ankle amputation group. Based on the current literature, the authors propose the following definitions regarding the terms of a distal or minor amputation versus a proximal or major amputation (Table 1). In addition, the terms successful versus non-successful amputations for diabetic limb salvage still need to be defined based on the amputation level, functionality, recurrence of ulceration and/or amputation, and in larger cohort studies. The ASEAN Plus Expert Group Forum for the Management of Diabetic Foot Wounds that was held in Singapore on November 10, 2002, developed clinical practice guidelines for the management of diabetic foot wounds and has adopted these criteria. The senior author of this article is the chairman of this forum, and the workgroup includes two experts each from Indonesia, Malaysia, Philippines, Singapore, Sri Lanka, and Thailand. The first amputation for patients with diabetic foot complications should preferably be a minor (distal) amputation. When a major (proximal) amputation, such as a BKA, is performed, the mortality rate is significantly higher than when a minor amputation (such as a ray amputation) is performed (4,5). Izumi et al. (4) reported a significant difference in mortality, with the hazard rate in major amputees being 1.6 times that in ray amputees. Evans et al. (5) found that 80% of minor amputees were still alive after 2 years; 73% of minor amputees preserved their lower limb; and 64% were still fully ambulatory. In the BKA group, 52% died within 2 years and only 64% of patients ambulated with a prosthetic limb (5). Svensson et al. (6), in a study of 410 patients undergoing minor amputations, found that limb salvage could be achieved in almost two-thirds of patients.
Distal or minor amputations for the diabetic foot Ray amputation, which involves the excision of the toe and part of the metatarsal, provides a more viable option of ensuring an adequate surgical debridement of the septic margins. Indications may include wet or dry gangrene of a toe, osteomyelitis of the metatarsal head and/or proximal phalanx, septic arthritis of the metatarsophalangeal joint (MTPJ), and gross infection of the toe. Suggested inclusion criteria for this type of amputation may include one or two palpable pedal pulses, an ankle brachial index (ABI) ≥ 0.8, and a toe brachial index ≥ 0.7. Borkosky et al. (8) reported a 19.8% incidence of reamputation in patients with diabetes and peripheral sensory neuropathy undergoing partial first ray resection. Wong et al. (9) reported a 70% success rate of ray amputation in a cohort of 150 patients with diabetic foot problems. Absence of pulses, delayed capillary filling, a high erythrocyte sedimentation rate, high creatinine, and high neutrophil counts were found to be predictive factors for a poor clinical outcome (9). Indications for transmetatarsal amputation (TMA) may include wet or dry gangrene involving only the forefoot and/or infection involving the forefoot, while the inclusion criteria are the same as those mentioned above for a ray amputation. Brown et al. (10), in a retrospective study of 21 patients, reported a high functioning level and durability of the stump in patients undergoing TMA and concluded that it provides an ambulatory advantage. However, TMA has been reported to give significant complication and failure rates. Anthony et al. (11) reported 82% of patients requiring further surgery, while Pollard et al. (12) reported a wound-healing rate of only 54%. Amputation at the metatarsal level causes a muscular imbalance due to the resultant equinovarus deformity from the unopposed action of the gastrocnemius, tibialis anterior, and tibialis posterior tendons, which is coupled with the deficiency of the muscular tension of the extensor tendons (12). Adjunctive soft tissue procedures such as tendo-Achilles lengthening and split tibialis anterior tendon transfer for muscular imbalance are needed to correct the equinovarus deformity. In addition, special footwear modifications are needed to reduce complication rates (13). Midfoot amputation Lisfranc's disarticulation is a disarticulation through the tarsometatarsal joint, while Chopart's disarticulation is a disarticulation through the talonavicular and calcaneocuboid joints, leaving only the hindfoot (talus and calcaneum) behind (Fig. 1). These amputations are rarely performed in diabetic foot infections due to the high failure rate and the proximity of infected tissue to the heel pad. However, Brown et al. (10) reported high ambulatory levels for Chopart's disarticulation in their series of 10 patients. This suggests a favorable advantage for patients to ambulate if peri-operative and post-operative complications can be avoided. Elsharawy (14) studied the outcome of midfoot amputations in diabetic gangrene in his cohort study of 32 patients. There were wound-healing complications in eight patients (27%), which necessitated a BKA. Successful limb salvage, which was defined as a stump with functional ambulation, was seen in only 30 patients (67%) (14). A systematic review of the existing literature was conducted by Schade et al. to identify any factors that may be associated with a successful Chopart amputation in diabetic foot problems (15).
The efficacy of tendinous and/or osseous balancing could not be assessed from the available evidence (15). Hindfoot amputation This category includes the amputations shown in Table 1 and has the same indications and inclusion criteria as those mentioned in the forefoot amputation category. Syme's amputation has been advocated for trauma cases (16); however, with strict selection criteria, Syme's amputation can give good results in patients with diabetic foot infections (17). It is well known that Syme's amputation should be reserved for patients with at least a palpable posterior tibial pulse and an ankle-brachial index of more than 0.5 (17–19). There are several disadvantages to performing a Syme's amputation. These include instability of the calcaneal flap due to poor adherence of the soft tissue of the calcaneal flap to the tibial surface. Also, the dissection of the calcaneum from the underlying flap in a Syme's amputation may lead to devascularization of the flap (20). A third disadvantage is that the Syme's amputation with excision of the calcaneum leads to a shorter stump. This causes significant limb length discrepancy, which makes walking barefoot difficult (21). The Boyd and Pirogoff amputations are designed to give better results than the Syme's amputation (21–24). In the Boyd and Pirogoff amputations, the tibio-calcaneal bony fusion gives added stability to the flap. There is also reduced devascularization of the flap since the calcaneum is not dissected. Limb length discrepancy is also minimized. Along with a stable full weight-bearing stump due to the tibio-calcaneal fusion, the additional length makes it easier for the patient to walk without a prosthesis (23). In addition, the part of the medial and lateral malleolus preserved in these amputations makes it easier for a prosthesis to be fitted. The prosthesis can be worn with less friction and is more rotationally stable compared to a Syme's prosthesis. Nather et al. (25) reported good outcomes in all six patients undergoing Pirogoff's amputation (Figs. 2 and 3) followed up over a minimum of 1 year. Strict selection criteria included a palpable posterior tibial pulse, an ABI of more than 0.7, a hemoglobin level of more than 10 g/dL, and a serum albumin level of more than 30 g/L (25). The outcome of Pirogoff's amputation is still controversial. The cost of the prosthesis for Pirogoff's amputation is similar to that of a BKA prosthesis. In terms of function, the Pirogoff's amputation gives a weight-bearing stump. This has many advantages, including load sharing, which reduces the friction between the stump and the prosthesis, and patients are able to ambulate short distances without wearing their prostheses. However, as the supramalleolar stump is bulbous in shape, it is difficult to fit a prosthesis for the Pirogoff's amputation. Discussion Further sub-categorization of operative methods is useful for greater accuracy in selecting the level of amputation in diabetic limb salvage surgery. Different levels of amputation pose unique problems in function and prosthesis fitting due to the anatomical differences in each region. Problems in the midfoot will require muscle tendon transfers to ensure a well-functioning stump. Even at times when a BKA stump is a better option for ambulation, patients may still choose a limb salvage procedure. Conclusion Minor amputations in patients with diabetic foot problems have been shown to be effective in limb salvage and in reducing morbidity and mortality in patients.
The authors have proposed several definitions regarding diabetic foot amputations while further studies are needed for a consensus on the definition on a successful versus nonsuccessful diabetic limb salvage surgery.
2018-04-03T00:06:11.398Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "40885b50ad8fe70ae9db62d7d97b8d463afd5490", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc3714676?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "40885b50ad8fe70ae9db62d7d97b8d463afd5490", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15727591
pes2o/s2orc
v3-fos-license
Making Anti-de Sitter Black Holes It is known from the work of Bañados et al. that a space-time with event horizons (much like the Schwarzschild black hole) can be obtained from 2+1 dimensional anti-de Sitter space through a suitable identification of points. We point out that this can be done in 3+1 dimensions as well. In this way we obtain black holes with event horizons that are tori or Riemann surfaces of genus higher than one. They can have either one or two asymptotic regions. Locally, the space-time is isometric to anti-de Sitter space. It came as a surprise when Bañados et al. produced a "black hole" solution of Einstein's equations, with a negative cosmological constant, in 2+1 dimensions [1]. This was unexpected because all such solutions are locally isometric to anti-de Sitter space, which has constant curvature. Indeed the solution can be obtained by a suitable identification of points in anti-de Sitter space. The original papers spawned a rather large literature (reviewed recently by Carlip and by Mann [2]), but it appears to have gone unnoticed that the construction can be generalized to higher dimensions, in particular to four dimensions. Our purpose here is to remedy this deficiency. It will become evident as we proceed that the essential ingredient which makes the construction possible is the peculiar asymptotic structure of anti-de Sitter space, which has a timelike boundary at spatial infinity. The dimension of space-time is not essential. We wish to acknowledge the work of Brill and Steif [3], who stressed that it is helpful to look at the BHTZ construction from an initial data point of view. Before we begin our construction we give a thumbnail sketch of anti-de Sitter space. It is defined as the surface −U² − V² + X² + Y² + Z² = −1 embedded in a five dimensional flat space with the metric ds² = −dU² − dV² + dX² + dY² + dZ². This is a solution of Einstein's equations with the cosmological constant Λ = −3. Its intrinsic curvature is constant and negative. We find it helpful to think sometimes in terms of the coordinates in the embedding space, and sometimes in terms of the intrinsic coordinates (t, ρ, θ, φ), where [4] U = ((1 + ρ²)/(1 − ρ²)) cos t, V = ((1 + ρ²)/(1 − ρ²)) sin t, X = (2ρ/(1 − ρ²)) sin θ cos φ, Y = (2ρ/(1 − ρ²)) sin θ sin φ, and Z = (2ρ/(1 − ρ²)) cos θ, with 0 ≤ ρ < 1. Most of our reasoning will employ the embedding coordinates, but we will use the intrinsic coordinates for drawing the pictures. The intrinsic coordinates cover all of space-time, and in terms of them the intrinsic metric is ds² = −((1 + ρ²)/(1 − ρ²))² dt² + dl², where dl² = (4/(1 − ρ²)²)(dρ² + ρ²(dθ² + sin² θ dφ²)). The metric dl² is the metric on hyperbolic three-space represented as the interior of a unit ball. We refer to this ball as the Poincaré ball, since it is the generalization to three dimensions of the Poincaré disk as a model for Lobachevskian geometry (which was reviewed for physicists by Balasz and Voros [5]). Hyperbolic three-space can be defined as one sheet of the hyperboloid X² + Y² + Z² − U² = −1 embedded in flat Minkowski space. In our coordinate system anti-de Sitter space has been foliated with Poincaré balls having zero extrinsic curvature. Some elementary facts about hyperbolic three-space will be used below. Its geodesics are segments of circles orthogonal to the boundary of the Poincaré ball. Its isometries are elements of SO(3, 1), which we call rotations and boosts, using a terminology familiar from the study of the Lorentz group. A boost can be characterized as an isometry that has two fixed points, both situated on the boundary of the ball. Some elementary facts about anti-de Sitter space will also be needed, in particular the fact that a light ray that passes the spatial origin at time t = 0 will strike spatial infinity at t = π/2.
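As a quick check of this last fact in the intrinsic coordinates above (the computation relies on the form of the metric just written): a radial null ray obeys

\[
\frac{1+\rho^2}{1-\rho^2}\,dt \;=\; \frac{2\,d\rho}{1-\rho^2},
\qquad\text{i.e.}\qquad
dt \;=\; \frac{2\,d\rho}{1+\rho^2},
\]

so a ray leaving ρ = 0 at t = 0 reaches the boundary ρ = 1 at t = 2 arctan(1) = π/2.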
Conversely, the future domain of dependence of the hypersurface t = 0 ends at t = π/2, because of information leaking in from infinity. We now turn to a description of the black hole found by Bañados et al. [1]. They work in 2+1 dimensional anti-de Sitter space, which can of course be obtained as the intersection of the hypersurface Z = 0 with the four dimensional space-time given above. To obtain their black hole (more precisely what they call their spinless black hole, and what we will call the BHTZ space-time) one will have to identify points that can be connected with each other by an isometry generated by the boost Killing vector (7). The Killing vector field is time-like in a part of space-time. The "identification surfaces" are chosen in such a way that there are no closed time-like curves in the solution, which means that they should lie entirely within the region where the Killing vector field is space-like. This we will call the "allowed region" (the covering manifold of the BHTZ space-time). It is given by (3) (where θ = π/2 since Z = 0). Figure 1: All points inside the cylinder belong to anti-de Sitter space, its surface (ρ = 1) representing spatial infinity. The BHTZ space-time lies between the two surfaces inside the cylinder, which are identified under an isometry generated by (7). Since this isometry has fixpoints at φ = π/2, t = ±π/2, the surfaces merge and we have singularities there. The future singularity is hidden by an event horizon which "splits up" at t = 0. This horizon is indicated by the dashed lines in the constant time slices to the right. In figure 1 we have depicted a pair of suitable surfaces. They can be obtained by moving the "vertical" hypersurface X = 0 backwards or forwards along the Killing vector field, and are given by equation (9) for suitable values of the constant u. Identifying corresponding points on these surfaces gives us the BHTZ space-time. The region bounded by the identification surfaces is a regular solution of Einstein's equations which is locally isometric to anti-de Sitter space. However, when the surfaces merge (at t = ±π/2) the quotient space becomes singular and the BHTZ space-time ends there. The singularities are of the "Misner type" [6]; they are clearly not curvature singularities. There are two asymptotic regions in the directions of positive and negative Y, and the space-time topology is R² ⊗ S¹. To see why this is a black hole, consider a light ray that starts out from the origin at time t = 0. As we observed above, this light ray will strike spatial infinity at t = π/2. If we look into the BHTZ space-time from the asymptotic region lying in the positive Y direction, nothing that passes the t = 0 hypersurface with a negative Y value can be seen; we have an event horizon and therefore a black hole. The location of the event horizon at three different times is shown by the dashed lines on the spatial slices depicted in figure 1. Evidently, the Penrose diagram is that drawn in figure 2. Note that this is the same Penrose diagram as that of the Schwarzschild-anti-de Sitter solution. The Penrose diagram of anti-de Sitter space is also shown, for comparison. Figure 2: The Penrose diagrams of anti-de Sitter space and of the BHTZ space-time, respectively. Note that the latter has two disconnected infinities. We are now ready to study the situation in four dimensions. In our first construction we simply rotate the BHTZ space-time around the X-axis. The identification surfaces are still given by eq. (9). Again the two surfaces merge at t = ±π/2, so our space-time begins and ends in singularities at these times.
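For definiteness, here is one choice of the boost (7) and of the surfaces (9) that is consistent with everything stated above, namely with the fixpoints at φ = ±π/2, t = ±π/2 and with the surfaces arising from the hypersurface X = 0 under the flow; the normalization is our assumption:

% A boost in the (X, U) plane; its norm changes sign where X^2 = U^2.
\[
\xi \;=\; X\,\partial_U \,+\, U\,\partial_X, \qquad \xi\cdot\xi \;=\; U^2 - X^2 ,
\]
% Its flow, X(s) = X cosh s + U sinh s and U(s) = U cosh s + X sinh s,
% drags the hypersurface X = 0 through the one-parameter family
\[
X \;=\; u\,U, \qquad u = \tanh s \ \ \text{constant on each surface},
\]

whose members merge exactly on the set X = U = 0, that is, at φ = ±π/2 and t = ±π/2 in the intrinsic coordinates.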
To visualize the resulting space-time, figure 3 may be helpful. It shows the location of the identification surfaces in the Poincaré balls at three different values of t (the first picture is taken at t = 0). The main new feature compared to 2+1 dimensions is that spatial infinity is connected - there is only one asymptotic region in 3+1 dimensions. Now watch the dashed line - actually it is a circle - that connects the identification surfaces at t = 0. Photons emitted from this line will reach spatial infinity at t = π/2, which is the time when space-time ends in a singularity. It is clear that a toroidal event horizon will grow out of this line, as shown in the two subsequent Poincaré balls in figure 3. This is quite similar to the 2+1 dimensional solution, but due to the fact that there is now only one asymptotic region there is also an important difference. The Penrose diagram is given in figure 4. Unlike its 2+1 dimensional counterpart this is not an eternal black hole.

Our next example is less trivial. We will identify points that can be connected by a discrete subgroup Γ of the SO(2,1) group of isometries generated by the Killing vectors J_XU, J_YU and J_XY. The "identification surfaces" must lie in the region where the three Killing vector fields are spacelike, that is to say that the allowed region is defined by

X² + Y² − U² < 0.    (10)

It is clear that the hypersurface defined by eq. (9), and hypersurfaces obtained from it by performing rotations generated by the Killing vector J_XY, are suitable choices. The solution will then necessarily be singular at t = ±π/2, because at those times every identification surface will form a plane containing the Z-axis and going straight through the middle of the Poincaré ball. It is necessary to exercise some care to ensure that these are the only singularities that arise.

Consider first a simpler case, that of a hyperbolic plane H² defined by

X̂² + Ŷ² − Û² = −1.    (11)

It is well known [5] that one can select a discrete subgroup Γ of boosts in SO(2,1) such that the quotient space Σ = H²/Γ becomes a compact Riemann surface of genus greater than one. The idea is to choose a polygon bounded by geodesics as the fundamental region for the discrete group Γ, whose generators exchange pairs of edges of the polygon. To prevent conical singularities from arising in the quotient space, the sum of the angles of the polygon has to be equal to 2π. The simplest candidate for a polygon - a square - is ruled out because in the hyperbolic plane the sum of its angles is less than 2π. To do the trick one needs a polygon with 4g sides, where g ≥ 2. The sum of the angles can then always be set equal to 2π by adjusting the size of the polygon, since the angles shrink as the area of the polygon increases. The regular surface that arises when the edges have been identified is then a compact Riemann surface of genus g. The simplest possible case is that of a regular octagon with opposing edges identified, as illustrated in figure 5. An elementary calculation shows that the Euclidean coordinate distance d between the origin and the symmetrically placed edges has to be

d = √(√2 − 1)    (12)

(in coordinates where the Euclidean coordinate radius of the disk is unity). So what we intend to do is to define a two-parameter family of Poincaré disks which is such that every point in the allowed region - defined by eq. (10) - lies on a unique disk, and such that each disk is mapped into itself by the SO(2,1) group of isometries generated by J_XU, J_YU and J_XY.
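Before carrying this out, the value of d quoted in eq. (12) can be recovered from standard hyperbolic trigonometry. The sketch below is our own check: the right-triangle relation cosh r = cos(α/2)/sin(π/p) for a regular p-gon with vertex angle α is textbook material, not taken from the paper.

```python
import numpy as np

# For a regular hyperbolic octagon whose vertex angles sum to 2*pi (each angle
# pi/4), the hyperbolic distance r from the center to an edge midpoint obeys
# cosh(r) = cos(alpha/2)/sin(pi/p); in the unit Poincare disk the corresponding
# Euclidean coordinate distance is d = tanh(r/2).

p, alpha = 8, np.pi / 4                 # octagon, angle sum 8 * (pi/4) = 2*pi
r = np.arccosh(np.cos(alpha / 2) / np.sin(np.pi / p))
d = np.tanh(r / 2)                      # Euclidean coordinate distance
print(d, np.sqrt(np.sqrt(2) - 1))       # both ~0.6436, matching eq. (12)
```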
Then we select a discrete subgroup Γ of SO(2,1) and use it to compactify all the disks at one stroke. We have to check that all the compactified disks are smooth manifolds, so that the resulting solution will have the topology R² ⊗ Σ, where Σ is a Riemann surface of genus higher than one. This is easier than it sounds. First we rewrite the equation that defines anti-de Sitter space as

X² + Y² − U² − V² = −(1 + Z²).

We see that Z parametrizes a family of three-dimensional anti-de Sitter spaces that foliate the four-dimensional space. A coordinate system that takes advantage of this situation is one in which

(X, Y, U) = R (X̂, Ŷ, Û),

where the scale factor R depends on two coordinates (z, v) in the (Z, V) plane and (X̂, Ŷ, Û) obey eq. (11). This coordinate system covers all of the allowed region (where the Killing vector fields we will use for identification are spacelike). In this region the space-time metric then takes a warped product form, with a two-dimensional (z, v) part plus R² dσ², where dσ² is the metric on the hyperbolic plane defined by eq. (11). Thus every point in the allowed region lies on a unique disk with a radius of curvature R that depends on z and v. Our SO(2,1) Killing vectors are

J_XU = U∂_X + X∂_U,

and so on. Since they do not act on Z and V they lie in the disks, and the intersections of any disk with the level surfaces of the Killing vector fields are geodesics on the disk. Indeed figure 5 applies to all the disks if it applies to one of them, since the radius of curvature does not affect the coordinate distance d. This is all we need to see that our construction works; in particular conditions such as the condition on d given in eq. (12) will be fulfilled on all the disks if it is fulfilled on one.

To visualize the solution consider figure 6, which shows the Poincaré ball defined by t = 0. It lies entirely within the allowed region. The identification surfaces are seen as segments of spheres going through the ball. Figure 6 illustrates the simplest case, where the identification surfaces have been chosen so that their intersections with the hyperbolic planes that foliate the ball form regular octagons. The compactified planes will then be compact surfaces of genus two. If we - mentally - add the fourth dimension to the picture, every such compact surface can be thought of as an initial data surface for a solution of Einstein's equations in 2+1 dimensions, giving rise to a locally anti-de Sitter space-time which begins and ends in a singularity.

It remains to locate the event horizons. There are two asymptotic regions in the ±Z directions. A light ray from the origin has just enough time to reach infinity before the singularity at t = π/2 terminates the solution, and therefore the origin lies on an event horizon. It is obvious on symmetry grounds that the event horizon at t = 0 is precisely the Riemann surface depicted in figure 6 as the octagon lying on the plane through the middle of the ball. This event horizon then splits in two and moves outwards; the Penrose diagram for our solution is the same as the Penrose diagram for the BHTZ solution in figure 2.

We have now completed the constructions that were announced in the abstract of our paper. Our first construction gave non-eternal black holes with toroidal event horizons and one asymptotic region, and the second gave eternal black holes with event horizons of genus higher than one and two asymptotic regions. (Actually a black hole with a toroidal event horizon can be constructed along the lines of the second construction as well, but it is an extremal black hole with one of the asymptotic regions "replaced" by a singularity.)
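The foliation argument can also be checked numerically. Below is a minimal sketch (our own code, with names of our choosing): it builds an anti-de Sitter point inside the allowed region from a point on the unit hyperbolic plane of eq. (11) and a scale factor R, and confirms that the rescaling recovers eq. (11).

```python
import numpy as np

# Check the foliation used above.  Writing the adS constraint as
# X^2 + Y^2 - U^2 - V^2 = -(1 + Z^2), any point of the allowed region
# (X^2 + Y^2 - U^2 < 0) can be rescaled by R = sqrt(U^2 - X^2 - Y^2) so
# that (X, Y, U)/R lands on the hyperbolic plane of eq. (11).

def split(point):
    X, Y, Z, U, V = point
    R = np.sqrt(U**2 - X**2 - Y**2)       # scale factor of the disk
    return R, np.array([X, Y, U]) / R     # (Xh, Yh, Uh) on the unit H^2

# Build an adS point inside the allowed region: pick (Xh, Yh, Uh) on H^2 and
# (Z, V) with V^2 < 1 + Z^2, then set R^2 = 1 + Z^2 - V^2.
Xh, Yh = 0.5, -0.2
Uh = np.sqrt(1 + Xh**2 + Yh**2)
Z, V = 0.8, 0.6
R = np.sqrt(1 + Z**2 - V**2)
point = np.array([R * Xh, R * Yh, Z, R * Uh, V])

X, Y, Zc, U, Vc = point
print(X**2 + Y**2 + Zc**2 - U**2 - Vc**2)     # -> -1 (the point is on adS)
Rb, hat = split(point)
print(Rb, hat[0]**2 + hat[1]**2 - hat[2]**2)  # recovers R and -> -1 (eq. 11)
```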
We end with some comments about the possible significance of the black holes that we have made. First of all we see no way to obtain a black hole solution with the topology R² ⊗ S² through identifications in anti-de Sitter space, and so we are unable to produce a constantly curved black hole with a physically sensible asymptotic behaviour. Therefore (and also because of the sign of the cosmological constant) we conclude that our constructions are of little direct relevance for physics. In particular they are of little relevance to the question of whether - and if so, for how long - the event horizons of real black holes can be toroidal (see ref. [7] for recent contributions). On the other hand our black holes appear to be close relatives of the one constructed by Lemos [8], which also requires a space-time with the kind of asymptotic behaviour found in anti-de Sitter space. Whatever their direct relevance may be, we do believe that our constructions deserve attention as an amusing and perhaps instructive footnote. Moreover, the occurrence of event horizons in hyperbolic three-spaces is a subject of potential relevance for cosmology.
2014-10-01T00:00:00.000Z
1996-04-02T00:00:00.000
{ "year": 1996, "sha1": "2029dda66f5b8e960dfba5d30d44dd32267b6d38", "oa_license": null, "oa_url": "http://arxiv.org/pdf/gr-qc/9604005", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2029dda66f5b8e960dfba5d30d44dd32267b6d38", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
264607388
pes2o/s2orc
v3-fos-license
Interfacial growth of large-area single-layer metal-organic framework nanosheets

The air/liquid interface is an excellent platform to assemble two-dimensional (2D) sheets of materials by enhancing spontaneous organizational features of the building components and encouraging large length scale in-plane growth. We have grown 2D molecularly-thin crystalline metal-organic-framework (MOF) nanosheets composed of porphyrin building units and metal-ion joints (NAFS-13) under operationally simple ambient conditions at the air/liquid interface. In-situ synchrotron X-ray diffraction studies of the formation process performed directly at the interface were employed to optimize the NAFS-13 growth protocol, leading to the development of a post-injection method - post-injection of the metal connectors into the water subphase on whose surface the molecular building blocks are pre-oriented - which allowed us to achieve the formation of large-surface-area, morphologically-uniform, preferentially-oriented single-layer nanosheets. The growth of such large-size high-quality sheets is of interest for the understanding of the fundamental physical/chemical properties associated with ultra-thin sheet-shaped materials and the realization of their use in applications.

MOFs are typically obtained as insoluble and unprocessable powders by conventional synthetic techniques such as solvothermal/hydrothermal methods. Therefore, aligning them on various surfaces in desired ways and controlling their size and dimensionality at the nanoscale have remained important challenges. In fact, reports on various attempts to process bulk polycrystalline MOFs and to fabricate nanoscaled ones by developing new methodologies, such as microwave synthesis, the inverse emulsion technique, and liquid phase epitaxy, have been increasing in recent years [29-41]. In particular, 2D sheet assemblies are necessary when considering the use of such coordination materials, which frequently incorporate functional π-electron components, in nanotechnological thin film devices. We have ourselves recently succeeded in preparing a family of crystalline perfectly-oriented MOF thin films with nanometer scale thickness derived from metalloporphyrin building units and metal ion connectors (NAFS: nanofilms of metal-organic frameworks on surfaces) using a bottom-up solution-based method, which combines the Langmuir-Blodgett and the layer-by-layer (LB-LbL) methodologies at ambient conditions [36-38]. The NAFS family of materials have layered structures, comprising 2D molecular networks, which are first assembled at the air/liquid interface in a Langmuir trough (Langmuir-Blodgett (LB) films) [42], then transferred to a solid surface by horizontal dipping, and finally stacked sequentially following a layer-by-layer (LbL) film deposition protocol [36-38]. Applying the LB technique to hard molecular components without any soft alkyl chains had been reported before using π-electron molecular units with a flat shape, which float on the liquid surface of the Langmuir trough and form LB film assemblies [43-47]. However, since such π-electron flat molecules easily self-stack, the obtained films are invariably characterized by densely packed structural motifs - their formation driven by the stabilization of the intermolecular interactions - of individual molecules standing at an angle to the air/liquid interface.
Employing functionalized molecules as the spreading substance and aqueous solutions of metal salts as a subphase to supply the connecting units was expected to allow the formation of metal-mediated molecular arrays [45-47]. However, no direct structural evidence at the molecular level for such an alignment pattern had ever been reported before. Our strategy for obtaining MOF sheets was to devise chemical reactions at the 2D interface by aligning the molecular entities two dimensionally in such a way that each component adopted ideal topological proximity and separation distances that could lead to the creation of cavities and the development of porosity. The coordinative architecture of the NAFS family of nanofilms was directly proven by synchrotron X-ray crystallography - the films were endowed with highly crystalline order not only in the out-of-plane but also in the in-plane orientation, with the flat molecular building units aligned with their planar parts parallel to the surface. Thus employing air/liquid interfaces on which to assemble MOF nanosheets is a highly promising technique, as the 2D interface encourages in-plane growth over a large length scale and the liquid substrate enhances spontaneous organizational features of the building components [10,48,49]. In order to develop the strategy of growing 2D molecularly-thin morphologically-uniform nanoassemblies of hard molecules with fine control of domain size and molecular arrangement, in-depth understanding of the in-situ sheet formation mechanism associated with the coordinative bonding linkages between the building blocks and the connecting joints is essential. Preliminary insight into the formation process of MOF nanofilms prepared at the air/liquid interface and transferred onto solid substrates in a sequential layer-by-layer manner has been obtained by ex-situ characterization techniques [50]. Herein we make use of in-situ synchrotron X-ray crystallographic techniques, together with complementary Brewster angle microscopy and X-ray reflectivity measurements, to probe directly the lateral assembly of single-molecule-thin nanonetworks of a new member of the NAFS family (NAFS-13, vide infra) at the air/liquid interface. We demonstrate that crystalline NAFS-13 nanosheets, perfectly oriented with their planar parts parallel to the air/liquid interface, form under operationally simple ambient conditions on the liquid surface of a Langmuir trough. We then establish a growth protocol - post-injection of the metal connector units into the water subphase on the surface of which the molecular building units are pre-oriented - which achieves excellent crystallinity, uniform film morphology, and significant enlargement of the average crystalline sheet domain size. Creation of such highly-regulated uniform nanosheets, large enough to connect patterned electrodes or to obtain as free-standing films, offers the possibility of their use in applications such as ultra-thin electronic devices [51,52] and molecular/ion filters [53,54].

Results

Nanosheet growth. Our procedure for preparing molecule-thin crystalline MOF sheets starts with the spreading of a solution of molecular building units (metalloporphyrin, 5,10,15,20-tetrakis(4-carboxyphenyl)porphyrinato-palladium(II) (PdTCPP, 1) in chloroform/methanol) onto a subphase of metal ion joints (aqueous solution of Cu(NO₃)₂·3H₂O, 2) at the air/water interface of a Langmuir trough (Fig. 1 and Supporting Information).
The formation of the two-dimensional (2D) copper-mediated PdTCPP arrays (NAFS-13) is followed by measuring the surface pressure - mean molecular area (π-A) isotherm as the surface is compressed to a pressure of 40 mN/m by moving the barrier walls of the trough at a constant speed (Fig. 1, red line). The observed π-A isotherm of the NAFS-13 PdTCPP-Cu sheet is in good accord with those of the CoTCPP-py-Cu (NAFS-1) [36] and H₂TCPP-Cu (NAFS-2) [38] analogues, providing evidence that the linkage motif responsible for sheet formation is also the same - namely, linking of the tetratopic PdTCPP molecules occurs via copper ion joints and the resulting arrays lie flat on the air/water interface. When for comparison the same PdTCPP solution was spread onto a pure water subphase in the Langmuir trough and compressed at the same barrier speed, the mean molecular area, A, measured at the same surface pressure, π, was significantly smaller (Fig. 1, black line). This implies that either the PdTCPP molecules now stand vertically at some angle to the liquid surface or remain in the horizontal orientation but pack more closely. It also confirms the necessity of the Cu²⁺ ion joints for film formation. The PdTCPP sheet arrangements consistent with the π-A isotherm measurements are supported by UV-visible absorption spectroscopy measurements on films formed at a surface pressure of 10 mN/m and deposited onto quartz substrates by horizontal dipping (Fig. S1a). The absorbance of the characteristic Soret band of the porphyrin units measured for the PdTCPP sheet formed on pure water is higher than that for the array fabricated on the copper ion aqueous solution, implying a larger surface packing density of PdTCPP molecules.

In-situ grazing incidence X-ray diffraction. Detailed insight into the formation and in-plane order, parallel to the liquid surface, of the NAFS-13 sheets prepared in a dedicated Langmuir trough mounted on the six-circle diffractometer at the ESRF beamline ID10B was obtained from in-situ synchrotron XRD (λ = 1.549 Å) measurements. These were carried out directly at the air/liquid interface in grazing incidence (GI) in-plane mode with the incident beam nearly parallel to the liquid surface [55-57], as illustrated in Fig. 1. Fig. 2a shows the evolution of the in-plane GIXRD patterns measured for the NAFS-13 sheets with increasing surface pressure (π = 0, 1, 5, 10, 20, and 30 mN/m - the surface pressure, measured by the Wilhelmy plate method, was kept constant during each measurement). The observation of a number of sharp, clearly-resolved Bragg peaks in the in-plane XRD profiles provides the signature of the formation of a large-scale structure with highly crystalline organization. Very importantly, the same profile is obtained at π ≈ 0 mN/m after initial spreading of the metalloporphyrin solution onto the copper ion subphase. This implies that the formation of NAFS-13 sheets with excellent crystallinity and a high coherence length does not necessitate surface compression - it occurs in a self-assembling manner induced by the two-dimensional interfacial reaction between the copper ions and the carboxylic acid groups at the periphery of the PdTCPP molecular units. For comparison, we also performed in-situ GIXRD measurements at the air/liquid interface for PdTCPP arrays formed on a pure water subphase.
The absence of Bragg reflections reveals the lack of molecular organization and absence of crystalline order in the PdTCPP arrays formed without the presence of copper ions in the subphase (Fig. S2). However, in this case, as the surface pressure π approaches higher values (π = 30 mN/m), there is a gradual appearance of a broad diffuse scattering hump which peaks near Q_xy = 1.8-1.9 Å⁻¹. This corresponds to an intermolecular distance of about 3.3-3.5 Å, which is comparable to the typical π-π stacking distances between adjacent (metallo)porphyrin units and supports the physical picture developed from the UV spectra that the PdTCPP molecules are packed closely and stand vertically at some angle to the air/liquid interface.

All six peaks observed in the GIXRD profile (Fig. 2a) of the NAFS-13 nanosheet up to a scattering angle 2θ = 30° (Q_xy = 2.1 Å⁻¹) index as (hk0) and correspond to the (110), (200), (220), (320), (400), and (330) Bragg peaks of a metrically tetragonal unit cell with in-plane lattice parameters a = b ≈ 16.6 Å. These are extremely close to those of the multilayered NAFS-1 and NAFS-2 thin films deposited on silicon substrates (a = b = 16.5 Å) [36,38]. The comparable lattice sizes signify that the in-plane molecular arrangement in NAFS-13 is also that of metalloporphyrin linkers and paddle-wheel dimeric Cu₂(COO)₄ secondary building units, which form a two-dimensional checkerboard pattern (Fig. 2b,c). The number of diffraction peaks remains the same as the surface is progressively compressed, implying that the molecular organization of the NAFS-13 sheet is preserved intact upon increasing surface pressure. On the other hand, the position, width and intensity of the Bragg reflections gradually evolve with surface compression (Fig. 2d-g). In the low surface pressure region, π = 0-1 mN/m, before π begins to increase sharply with decreasing mean molecular area A, the unit cell size of the crystalline structure (a = 16.628(6) Å at 0 mN/m and 16.625(3) Å at 1 mN/m, Fig. 2e) and the average crystalline domain size (~140 nm, as estimated from the peak width of the most intense (110) peak using Scherrer's equation, Fig. 2f) of the NAFS-13 nanosheet do not change, indicating that the surface compression has little influence on the sheet assembly process. This contrasts with the behavior of the peak intensity, which increases sharply with compression in the same surface pressure range (Fig. 2g). This can be understood by considering that, upon spreading, the surface area is relatively large in comparison with the number of spread molecules. Therefore the effect of increasing surface compression is initially to increase the surface coverage by gathering the pre-assembled floating NAFS-13 domains into a smaller area without affecting the crystalline domain size. The growth in the intensity of the Soret band of the porphyrin units with increasing surface pressure (Fig. S1b), measured by UV-vis spectroscopy for NAFS-13 films formed at different compression and deposited onto quartz substrates, supports this phenomenological interpretation. However, further increase of the surface pressure has a more pronounced effect on both the crystalline structure of the nanosheets and their morphology. Firstly, the in-plane lattice parameter, a, contracts monotonically from 16.628(6) to 16.496(5) Å as π increases to 30 mN/m (Fig. 2e). This is accompanied by a decrease in the sheet domain size from 140(3) to 93(1) nm (Fig. 2f) and by the continuous growth of the intensity of the (110) Bragg reflection in the same surface pressure range (Fig. 2g).
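As a quick consistency check of the indexing above, the in-plane peak positions for a 2D tetragonal lattice follow from Q_xy = (2π/a)√(h² + k²). The short sketch below (our own illustration, using the lattice parameter quoted in the text) reproduces the ordering of the six observed reflections; for instance the (110) peak falls at Q_xy ≈ 0.54 Å⁻¹, matching the integration range quoted later for that reflection.

```python
import numpy as np

# In-plane Bragg positions for a tetragonal 2D lattice with a = b = 16.6 A:
# Q_xy = (2*pi/a) * sqrt(h^2 + k^2).  All six indexed peaks fall below the
# Q_xy = 2.1 A^-1 limit quoted in the text.

a = 16.6  # in-plane lattice parameter, Angstrom
for h, k in [(1, 1), (2, 0), (2, 2), (3, 2), (4, 0), (3, 3)]:
    q = 2 * np.pi / a * np.hypot(h, k)
    print(f"({h}{k}0): Q_xy = {q:.3f} A^-1")
```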
In order to understand these pressure-induced changes, we recall that as the surface compression increases, the coverage of the Langmuir trough surface increases until it is fully covered by the molecular sheets. Upon further compression, the sheets gather into a continuously decreasing area until their edge parts begin to touch neighboring sheets, thereby leading to increased surface roughness. Complementary X-ray reflectivity (XRR) and Brewster angle microscopy (BAM) measurements of the NAFS-13 nanosheets and of the PdTCPP arrays formed on a pure water subphase at various surface pressures also support this interpretation. Increased surface roughness generally causes a lowering of the XRR. Here the observed continuous decrease of the XRR with increasing surface compression (Fig. S3 and S4) can thus be attributed to sheet morphology changes which lead to increased roughness. Similar conclusions can be drawn from the recorded BAM images (Fig. S5). These show first a progressively increasing coverage of the surface of the Langmuir trough by well-formed nanosheets upon compression, followed by the development of cracks and surface morphology deformations at high surface pressures. Furthermore, white spots are observed in the BAM images even at the low compression stages, implying the presence of some molecular aggregation in the sheets.

Discussion

Thus far the study of the formation of the PdTCPP-Cu MOF nanosheets at the air/liquid interface has revealed that the most important step in the film assembly process is the interfacial coordinative reaction which occurs immediately after spreading the solution of the metalloporphyrin building units, 1, on the surface of the solution containing the metal ion joints, 2 - this is also the critical process in determining the sheet domain size. However, in the conventional assembly protocol of the NAFS-13 sheets that we applied here, it is unavoidable that, upon spreading, droplets of solution 1 produce surface ripples and the spread molecules have the freedom to move rapidly around the surface. This is clearly captured by the BAM images, which show the random movement of the floating NAFS-13 domains before compression (Fig. S5). Such a procedure does not take advantage of past experience in successfully growing large single crystals of coordination compounds in the bulk, where slow diffusion of the reactant solutions is invariably employed. Therefore, we have attempted to integrate slow diffusion protocols into the nanosheet growth strategy and developed a modification of the above methodology in order to target more optimal conditions for enlarging the sheet domain size and avoiding any aggregation. As illustrated in Fig. 3a, we first spread the solution of the molecular building units, 1, directly on the pure water subphase, and then gently inject the copper ion metal joint solution, 2, into the subphase. After spreading the PdTCPP solution but before the copper ion injection, the in-plane XRD profile collected on the water surface is featureless with no Bragg peaks present (Fig. 3b, black line). This confirms that PdTCPP molecules do not self-assemble to form a crystalline sheet without copper ion linkers. Formation of hydrogen bonded networks on solid substrate surfaces such as Au(111) and HOPG has been reported for the analogous porphyrin with four peripheral carboxylic groups, 5,10,15,20-tetrakis(4-carboxyphenyl)-21H,23H-porphyrin (H₂TCPP) [58,59].
We consider that the lack of formation of a comparable array of self-assembled PdTCPP molecules is related to the presence at the air/water interface of a considerable number of water molecules, which surround the PdTCPP units and prevent the formation of H-bonded COOH networks. After injecting the concentrated copper ion solution into the subphase, a number of very sharp, well-resolved peaks emerge in the GIXRD profile (Fig. 3b, red line), implying the formation of NAFS-13 nanosheets with excellent lateral crystalline structural order in the absence of barrier compression. The observed Bragg peaks and their positions coincide with those recorded for the NAFS-13 nanosheet formed by the conventional method on the precontained Cu²⁺ solution subphase at π = 0 mN/m (inset Fig. 3b). This confirms the adoption of the same crystalline sheet structure with identical lattice constants (16.620(2) vs 16.628(6) Å, respectively; Fig. S8). Very importantly, however, the peak widths of the Bragg reflections in the GIXRD pattern of the nanosheet assembled by the injection method are now considerably smaller (for the most intense (110) reflection, FWHM = 0.011 Å⁻¹) than those observed for the nanosheet prepared earlier by the conventional method (FWHM = 0.017 Å⁻¹) (Fig. S7), thereby reflecting a significant increase in the lateral size of the crystalline domains. The sheet domain size estimated using Scherrer's formula is now about 220 nm in length, corresponding to a more-than-doubling of the average area of individual NAFS-13 domains by adopting the post-injection film growth method - each domain now contains on the order of twenty thousand molecular building units. At the same time, the resolution of a number of additional Bragg peaks (inset Fig. 3b) confirms the formation of nanofilms with superior crystalline structural order using the new methodology (Fig. S8). Additional information about the molecular organization of the NAFS-13 nanofilms is obtained by 2D grazing incidence X-ray diffraction patterns, i.e. reciprocal space maps of the diffracted intensity along orthogonal Q_xy (horizontal) and Q_z (vertical) diffraction vector axes. As shown in Fig. 3c, the scattered intensity for each reflection is seen as a scattering rod, which is strictly parallel to the Q_z axis. This provides unambiguous evidence that the NAFS-13 nanosheets are perfectly oriented with the 2D sheet plane aligned parallel to the liquid surface. At the same time, integrated out-of-plane line scans for each Bragg reflection as a function of scattering vector Q_z provide information on the out-of-plane coherence length and therefore allow an estimate of the sheet thickness to be made. Figure S7 shows the broad Q_z profile of the (110) reflection integrated in the range Q_xy = 0.51-0.55 Å⁻¹. The coherence length, and therefore the sheet thickness, can then be estimated as 3.2(2) Å from the fitted FWHM of the out-of-plane profile. This is in good agreement with the thickness of a single metalloporphyrin building unit (Fig. 2c), thereby confirming that NAFS-13 is essentially a mono-molecularly thin nanosheet. The trends in the evolution of the lattice parameter, the average sheet domain size, and the intensity of the Bragg peaks with increasing surface pressure mimic those observed for the NAFS-13 nanosheets prepared by the conventional method (Fig. S8). Nonetheless, the sheet domain size for NAFS-13 assembled by the post-injection method remains significantly larger at all surface pressures (Fig. S8b).
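A Scherrer-type estimate of the lateral coherence length from a peak width measured in Q-space takes the form L = 2πK/ΔQ. The sketch below is our own illustration: it uses the FWHM values and the Scherrer constant K = 1.84 quoted in the text, but applies no instrumental-resolution correction, so the absolute numbers fall below the domain sizes reported above, while the conventional-to-post-injection ratio (~1.5) is reproduced.

```python
import numpy as np

# Scherrer estimate in Q-space: L = 2*pi*K / dQ, with K = 1.84 quoted in the
# Methods for (hk) reflections of layer-structured crystals.  The FWHM values
# are the ones reported for the (110) peak; an instrumental-resolution
# correction (not applied here) would be needed for absolute domain sizes.

K = 1.84
for label, fwhm in [("conventional", 0.017), ("post-injection", 0.011)]:  # A^-1
    L = 2 * np.pi * K / fwhm   # Angstrom
    print(f"{label}: L ~ {L / 10:.0f} nm")
```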
The observed changes in the XRR curves recorded before and after injection (Fig. S10) confirm that the reaction of the PdTCPP molecules with the injected copper ions results in the formation of NAFS-13 sheets with a film thickness and morphology different from those of the PdTCPP arrays on the pure water subphase. In addition, the recorded BAM images of NAFS-13 formed by the post-injection method (Fig. S11) reveal (i) a much smoother sheet morphology (Fig. 3d, e) than that of the conventionally assembled sheets (Fig. 3f, g) and (ii) a complete absence of the white spots associated with the existence of molecular aggregations, which were evident even at the beginning of the sheet formation stage (π ≈ 0 mN/m) for the conventionally formed NAFS-13. Therefore surface rippling and immediate reaction at the interface not only limit sheet domain growth but also lead to the creation of bulk aggregates, restricting the formation of smooth 2D sheets. On the other hand, the slow injection growth method we developed leads to enlarged sheet domain sizes at the microscopic level, while at the same time providing a smooth surface morphology of the mono-molecularly thin sheets at the macroscopic level.

In the work presented here, we have studied the formation of the porphyrin-based MOF nanosheets, NAFS-13, by in-situ GIXRD at the air/liquid interface. The results reveal that highly-crystalline, preferentially-oriented, ultrathin NAFS-13 sheets form immediately after spreading the molecular building units on the copper ion aqueous solution subphase. Surface compression results in gathering these crystalline domains together, and leads to an increase in surface coverage but also to a decrease in the average domain size. Following the insight obtained on the nanosheet formation process, a new approach of sheet assembly - post-injection of the metal linkers into the subphase - was developed that effectively led to the enlargement of the NAFS-13 sheet domain size. The new methodology also resulted in the generation of a smooth surface of the sheets at the micrometer scale. Sheet domain size and surface roughness strongly influence the properties of molecularly-thin 2D materials (molecular separation, electronic/ionic/photo conduction, magnetism) - these have to be carefully tuned when one considers them for use in nanodevices. The bottom-up sheet growth technique at the air/liquid interface we demonstrated here is sufficiently versatile to open the way for the facile transfer of the prepared large-size uniform ultrathin 2D structures onto various planar substrates while retaining their excellent crystallinity. We will investigate the transfer of the nanosheets onto suitable solid substrates and layer them with other types of 2D nanosheets, such as graphene, metal oxides, and organic polymers, with the aim of achieving the creation of multifunctional 2D systems in future work.

Methods

Surface pressure - mean molecular area (π-A) isotherm measurements were performed at a continuous barrier compression speed of 500 μm/s at room temperature. The surface pressure was measured with the Wilhelmy plate method. 300 μL of the same 0.2 mM PdTCPP solution was also spread onto a pure water subphase and π-A isotherms were measured for comparison. For the post-injection film growth method, the 0.2 mM PdTCPP, 1, solution was first directly spread onto a pure water subphase. Then 200 μL of a concentrated aqueous solution of Cu(NO₃)₂·3H₂O, 2 (2.3 M), was slowly injected with a microsyringe from the side surface into the water subphase while keeping the compression barrier fully open.
The final concentration of the Cu(NO₃)₂·3H₂O, 2, aqueous solution after injection was 1 mM. The barrier was again moved at the same compression speed of 500 μm/s at room temperature.

Brewster angle microscopy (BAM). Formation and morphology of the sheets in the Langmuir trough were followed by BAM experiments performed with a NIMA Technology BAM model 712 system, in which a PTFE Langmuir trough (750 × 100 mm²) with two compression barriers was installed. The PdTCPP-based films were prepared as described above. The laser wavelength was 532 nm. The incidence angle of the laser light was adjusted to 53.12° (the magnitude of the Brewster angle for the air/water interface) with respect to the surface normal. Images of the films at the air/liquid interface were captured by a CCD camera at room temperature.

Grazing-incidence synchrotron X-ray diffraction (GIXRD) and X-ray reflectivity (XRR) measurements. In-situ synchrotron GIXRD and XRR measurements were performed at room temperature with the six-circle diffractometer on beamline ID10B (E = 8.003 keV, λ = 1.549 Å) at the ESRF (Grenoble, France). The dedicated PTFE Langmuir trough (460 × 170 × 5 mm³) mounted on the diffractometer was equipped with a single movable barrier for film compression. The PdTCPP-based films were prepared as described above. The surface pressure was kept constant during individual GIXRD and XRR measurements. The Langmuir trough was mounted on an active antivibration system and was enclosed inside an air-tight acrylic case with polyimide windows. Water-saturated helium gas was introduced into the case. The incidence angle for the GIXRD measurements was set at 0.12°. The scattered X-rays were recorded by a one-dimensional (1D) gas-filled position-sensitive detector with vertically located counting wires (VANTEC). The XRD profiles were collected by scanning over the in-plane θ angle, and the vertical (out-of-plane) scattered intensity was recorded at each 2θ angle. In order to improve the 2θ resolution in the in-plane direction and reduce the background contribution, a Soller collimator (0.08°) was placed in front of the 1D detector. XRR data were collected by a two-dimensional (2D) gas-filled position-sensitive detector over the glancing angular range by a θ-2θ scan in the out-of-plane geometry. The average sheet domain size was estimated with Scherrer's equation using a value for Scherrer's constant of 1.84, appropriate for (hk) Bragg reflections of layer-structured crystalline materials.
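As a small numerical aside, the quoted laser alignment angle is simply the Brewster angle θ_B = arctan(n_water/n_air). The sketch below is our own check; the refractive indices are assumed values, not stated in the text.

```python
import numpy as np

# Brewster angle for the air/water interface: theta_B = arctan(n_water/n_air).
# n_air = 1.000 and n_water = 1.333 are assumed textbook values.

n_air, n_water = 1.000, 1.333
theta_B = np.degrees(np.arctan(n_water / n_air))
print(f"{theta_B:.2f} deg")   # ~53.12, matching the quoted alignment angle
```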
2018-04-03T04:23:41.640Z
2013-08-26T00:00:00.000
{ "year": 2013, "sha1": "98f29f76d9403c4708f7f1f1b75f7e304b524ee5", "oa_license": "CCBYNCND", "oa_url": "https://www.nature.com/articles/srep02506.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "98f29f76d9403c4708f7f1f1b75f7e304b524ee5", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
251038947
pes2o/s2orc
v3-fos-license
Prediction of HIV sensitivity to monoclonal antibodies using aminoacid sequences and deep learning

Abstract

Motivation: Knowing the sensitivity of a viral strain versus a monoclonal antibody is of interest for HIV vaccine development and therapy. The HIV strains vary in their resistance to antibodies, and the accurate prediction of virus-antibody sensitivity can be used to find potent antibody combinations that broadly neutralize multiple and diverse HIV strains. Sensitivity prediction can be combined with other methods, such as generative algorithms, to design novel antibodies in silico, or with feature selection to uncover the sites of interest in the sequence. However, these tools are limited in the absence of accurate in silico prediction methods.

Results: Our method leverages the CATNAP dataset, probably the most comprehensive collection of HIV-antibody assays, and predicts the antibody-virus sensitivity in the form of binary classification. The methods proposed by others focus primarily on analyzing the virus sequences. However, our article demonstrates the advantages gained by modeling the antibody-virus sensitivity as a function of both virus and antibody sequences. The input is formed by the virus envelope and the antibody variable region aminoacid sequences. No structural features are required, which makes our system very practical, given that sequence data is more common than structures. We compare with two other state-of-the-art methods that leverage the same dataset and use sequence data only. Our approach, based on neural networks and transfer learning, shows increased predictive performance, as measured on a set of 31 specific broadly neutralizing antibodies.

Availability and implementation: https://github.com/vlad-danaila/deep_hiv_ab_pred/tree/fc-att-fix

Introduction

HIV is characterized by a high mutation rate, enabling the virus to adapt rapidly and to circulate under diverse strains. Some of the strains are neutralized by antibodies, but some resistant ones remain and continue the infection. Due to HIV diversity, combinations of broadly neutralizing antibodies are more likely to be efficient than a single antibody in combating the virus. In addition to the potency of neutralization, the breadth of neutralization, or how many strains can be neutralized by a particular antibody, is essential, and some works focus on this aspect (Cheng et al., 2018; Conti and Karplus, 2019; Sevy et al., 2018; Williamson et al., 2021a; Yu et al., 2019). A model that can accurately determine the neutralization potency for a given antibody-virus pair can be useful for the analysis of neutralization coverage and for finding ideal antibody combinations. The neutralization potency was predicted by machine learning techniques in Hepler et al. (2014), Buiu et al. (2016), Hake and Pfeifer (2017), Rawi et al. (2019), Conti and Karplus (2019), Yu et al. (2019), Magaret et al. (2019) and Williamson et al. (2021a). SLAPNAP predicts the neutralization of specific antibodies with several predictors: elastic net (Zou and Hastie, 2005), random forests (RF) (Breiman, 2001), gradient boosted machines (GBM) (Friedman, 2001) and extreme gradient boosting (XGBoost) (Chen and Guestrin, 2016). The user can choose a predictor or combine several of them using an ensemble named Super Learner (van der Laan et al., 2007). In addition, SLAPNAP calculates the importance of features and predicts the neutralization of combinations of antibodies using either an additive or a Bliss-Hill model (Wagh et al., 2016).
GBM (Friedman, 2001) was used in Rawi et al. (2019) to predict the sensitivity of viruses to 33 antibodies from the CATNAP database (Yoon et al., 2015). The input consisted of one-hot encoded virus aminoacid sequences. The GBM outperformed other algorithms, such as logistic regression, RF and the support vector machine (SVM) from Hake and Pfeifer (2017). In Hake and Pfeifer (2017), an SVM with string kernels (Meinicke et al., 2004; Rätsch et al., 2005) was compared against RF (Liaw et al., 2002), a neural network, the least absolute shrinkage and selection operator (LASSO) (Friedman et al., 2010), and a linear SVM (Karatzoglou et al., 2004). The virus neutralization was determined for eleven selected antibodies, and the measurements uncovered an increase of virus resistance over time (Hake and Pfeifer, 2017). In Conti and Karplus (2019), neural networks (NNs) with one or two layers, k-nearest neighbors (Altman, 1992), RF (Ho, 1995; Svetnik et al., 2003) and SVM (Cortes and Vapnik, 1995) receive an input of atomistic descriptors and predict the potency of antibodies that target the highly conserved CD4 region. The glycans that cover the virus envelope play an essential role in the interaction with antibodies, and Yu et al. (2018) used a system composed of the Metropolis-Hastings algorithm (Andrieu et al., 2003; Hastings, 1970) and support vector regression (SVR) (Cortes and Vapnik, 1995; Drucker et al., 1997) to assess the importance of specific glycans and protein sites for antibody binding. This system is used in Yu et al. (2019) as well, for feature selection prior to regression of the neutralization sensitivity. HIV neutralization and feature importance were studied for a single broadly neutralizing antibody, VRC01, in Magaret et al. (2019) using LASSO (Tibshirani, 1996), RF (Liaw et al., 2002), Naïve Bayes (John and Langley, 1995), XGBoost (Chen and Guestrin, 2016), generalized linear models and an ensemble named Super Learner (van der Laan et al., 2007). Buiu et al. (2016) regressed the neutralization measures for a panel of selected antibody-virus pairs using NNs. B-cell receptor sequence repertoires were analyzed using phylogenetic trees for uncovering potentially effective antibodies and determining favorable mutations in Ralph and Matsen (2020). The detection of epitopes, which are sites on the antigen bound by the antibody, is an important study topic for vaccine design and is sometimes analyzed together with virus neutralization potency. Some of the works focused on epitope detection are Gnanakaran et al., Kaku et al. (2020), Ralph and Matsen (2020) and Williamson et al. (2021a). In the current material, we are not investigating epitope detection.

Approach

Most authors (Buiu et al., 2016; Hake and Pfeifer, 2017; Hepler et al., 2014; Magaret et al., 2019; Rawi et al., 2019; Williamson et al., 2021a; Yu et al., 2019) create multiple classifiers/regressors, and each of those models is trained with a subset of viruses as input and the outcomes specific to a certain antibody as ground truths. For example, if the dataset contained assays specific to ten antibodies, ten separate models are trained, one for each antibody. If the neutralization potency of a combination of antibodies against a virus needs to be estimated, that is achieved by combining the estimations from the models trained on each antibody. CATNAP also provides assays for certain combinations of antibodies, which can be used for validation.
SLAPNAP took this approach to predict the potency of antibody cocktails by leveraging an additive and a Bliss-Hill model (Wagh et al., 2016). The sequences of the virus envelopes are used as input without taking into account the antibody sequences. This has the advantages of a lower feature space dimensionality and simpler modeling. We take a different approach and use both antibody and antigen sequences at once as input to our model. Our rationale is that more generic interactions can be modeled this way. Moreover, we can leverage substantially more data, ~32 000 combinations of antigen-antibody sequence pairs. In contrast, when grouping viruses by antibodies, the data amount is reduced to hundreds of samples per antibody at best. Therefore, if an antibody has too little data available, it becomes impossible to analyze with the previous approaches; however, our setup does not have this drawback. Using both antigen and antibody sequences and NNs, we can take advantage of transfer learning to pretrain on the majority of the antibody-antigen pairs and fine-tune the model on specific antibodies of interest. This is an essential advantage provided by NNs that would not be possible with the decision-tree or SVM-based algorithms mentioned in Section 1. As shown in Figure 1, the architecture of our system consists of:

- a module that encodes antibody sequences
- a module that analyzes the virus sequence and the encoded antibody
- a decoder

[Fig. 1. The system architecture. The antibody decoder used for antibody type prediction is applied only for multitasking.]

Each module can take multiple forms; as described in Section 3, we experimented with GRU (Cho et al., 2014), fully connected layers, attention, transformers, several input encoding strategies and multitask learning. Since Rawi et al. (2019) and Williamson et al. (2021a) report state-of-the-art results in virus neutralization binary classification, we compare with those works and record significant improvements in terms of accuracy, Matthews correlation coefficient (MCC) and receiver operating characteristic area under the curve (AUC).

Materials and methods

Due to the costly nature of training NNs, we did not perform an exhaustive search across the combinations of models, input processing and hyperparameters. Despite this, in our search, we came across several configurations that measured promising results. To keep the article concise, we document only the most representative models.

Models

In this subsection, we elaborate on the models' structures and architectures. In all variations, the decoder consists of a fully connected layer with dropout and sigmoid activation; however, the encoders and input will vary. For avoiding overfitting, the GRU, transformers and all fully connected networks are only one layer deep.

1. ICERI - GRU encoders for both antibody and virus: in the current article, we are building on top of our previous work (Dănăilă and Buiu, 2021), where we processed the antibody light chain, heavy chain and the virus envelope sequences by three GRUs (Cho et al., 2014) to classify viruses as resistant or sensitive to a particular antibody. The hidden states resulting from running the light chain and heavy chain-specific GRUs are concatenated and form the initial hidden state of the virus GRU (Dănăilă and Buiu, 2021).

2. FC-ATT-GRU - fully connected and attention for antibody and GRU for virus: each of the antibody light and heavy chains is processed by a separate module consisting of a fully connected layer, dropout and attention, as in Algorithm 1 (a simplified sketch of this model in code is given after the data preprocessing details below). The light and heavy chain encodings are concatenated and form the initial hidden state of the GRU. The GRU receives as input the virus envelope sequence.

3. 6CDR-FC-GRU - fully connected for the complementarity determining regions (CDRs) and GRU for virus: in this case, we do not consider the complete sequence, only the six CDRs. Each CDR is encoded by a separate fully connected layer and dropout. These encodings are concatenated and form the initial hidden state of the virus-processing GRU network. We do not use attention since the CDRs are implicitly the most important regions.
The light and heavy chains encodings are concatenated and form the initial hidden state of the GRU. The GRU receives as input the virus envelope sequence. 3. 6CDR-FC-GRU-Fully connected for complementary determining regions (CDR) and GRU for virus: in this case, we do not consider the complete sequence, only the six CDRs. Each CDR is encoded by a separate fully connected layer and dropout. These encodings are concatenated and form the initial hidden Fig. 1. The system architecture. The antibody decoder used for antibody type prediction is applied only for multitasking state of the virus processing GRU network. We do not use attention since the CDRs are implicitly the most important regions. 4. TRANSF-Transformers : the antibody sequence is input to the encoder part of the transformer and the virus sequence to the decoder. The resulting feature vector is processed by a fully connected layer to predict the binary outcome. 5. MULTITASK: it is the same model as FC-ATT-GRU, but trained with multitasking. Data preprocessing The aminoacid sequences are strings containing 22 letters, 20 denote the DNA encoded aminoacids, '-' for gaps and 'X' for unknown elements. For encoding each aminoacid element, we used the following methods: learned embeddings, one-hot-encodings and a vector of size seven that summarizes the properties of the aminoacid (Meiler et al., 2001). The potential N-linked glycosylation sites (PNGS) are of significant importance for modeling the antibody-antigen interactions (Yu et al., 2019). In the current work, as well in D an ail a and Buiu (2021), we represent PNGS as a binary mask that we concatenate to the virus sequence features. Similarly to D an ail a and Buiu (2021), every time we used GRU networks, the input consisted of encoded k-mers, which were overlapping substrings of length k from the aminoacid sequence. Therefore, each step of the sequence fed to the GRU consisted of a k-mer. Other works that used k-mers are Ren et al. (2021). The length and stride of the k-mers were established as in Section 3.4. If the data were input to fully connected layers or transformers, k-mers were not used anymore. PNGS binary masks were transformed in k-mers as well and concatenated with the aminoacid sequence k-mers (D an ail a and Buiu, 2021). For models using CDRs, each of the six CDRs was modeled by a numeric array encompassing the aminoacids of the CDR and a continuous value denoting the position of the CDR inside the sequence as in Algorithm 2. We used Paratome (Kunik et al., 2012) and AbRSA (Li et al., 2019) to find the antibodies CDRs sites. For transformers, we constrained the antibody input se quence to the sites between 17 to 77 and 84 to 133 for the light chain and from 13 to 79 and 83 to 135 for the heavy chain to reduce the data dimensionality. The intervals were established based on the minimum and maximum positions of the CDRs aminoacids as found through Paratome (Kunik et al., 2012) and AbRSA (Li et al., 2019). The binary outcomes (ground truth) were determined by comparing the IC50 (half maximal inhibitory concentration) with a threshold, which in our experiments was 50, as in Rawi et al. (2019) and Williamson et al. (2021a). However, in some CATNAP assays, the IC50 was expressed as censored values, which means that the precise quantity is unknown, only that it is less or more than a certain threshold; the most frequent censored quantity is '>50'. Also, for some antibody-virus combinations, there are recorded multiple IC50 values, some can be exact, and others censored. 
Optimization

For optimization, the PyTorch RMSprop was used in all cases, except for training the transformers, where we used the Noam optimizer.

Hyperparameter optimization

The hyperparameters, such as the k-mer length and stride (see Section 3.2), batch size, learning rate, gradient clip, dropout rates, and parameters defining the network structure, were found automatically through hyperparameter optimization, using the Optuna implementation of the TPE (Tree-structured Parzen Estimator) (Bergstra et al., 2011). In all cases, the TPE was univariate, except for transformers, where it was multivariate (see the TPE documentation). For efficiency, we employed a pruner that interrupted the training for unpromising experiments based on intermediate results, or for those taking too much time.

Multitask learning

In the multitask setting, the entire network is trained to predict the virus-antibody sensitivity, and the antibody encoder has a fully connected layer (antibody decoder) attached to classify the type of antibody, as in Figure 1. Some antibodies can belong to multiple classes. The two tasks are trained simultaneously, having as loss a weighted sum of two binary cross-entropies.

Results

We compare with bNAb-ReP and SLAPNAP. The two were also compared in Williamson et al. (2021a), where bNAb-ReP recorded a median AUC of 0.84 and SLAPNAP one of 0.81; however, the two were not evaluated in the same way. In our work, we also look at the MCC, which is a more discriminative metric than the AUC.

[Algorithm caption: the sequence is padded to the left and to the right (centered) with a particular character denoting unknown aminoacids, so that we obtain a sequence of length max size.]

In bNAb-ReP, the hyperparameters of the GBM were found through grid search, and the model was evaluated by repeating ten times a cross-validation with 10 folds. In SLAPNAP, the Super Learner (van der Laan et al., 2007) model is trained and evaluated on one round of five-fold cross-validation. However, the Super Learner algorithm performs automatic hyperparameter optimization based on cross-validation as part of its training process. Therefore, in SLAPNAP, nested cross-validation takes place: the inner cross-validation is used for hyperparameter optimization, and the outer cross-validation is used for evaluation. The test data from the outer cross-validation folds is not found in any of the folds used for inner cross-validation; therefore, the evaluation data is completely uncoupled from the rest of the dataset. For comparing with the other works, we follow evaluation procedures similar to those of the compared papers: repeated cross-validation for bNAb-ReP and nested cross-validation for SLAPNAP. In addition, we pretrain on the CATNAP data. We also optimize the hyperparameters in two parts. Part one is related to pretraining on CATNAP and finding the network structure. The second part is specific to each antibody and aims to find the learning parameters such as batch size, learning rate, gradient clip and dropout rates. Part two was performed in 1000 iterations per antibody and used cross-validation. Due to the larger size of the dataset, part one optimization was performed in over 400 iterations using a training/validation split of CATNAP. In both cases, the hyperparameter optimization is performed by the TPE algorithm (Bergstra et al., 2011), as described in Section 3.4, by maximizing the MCC.
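The TPE search described above can be sketched with Optuna as follows. This is our own illustration: the search ranges are assumptions, and the toy objective stands in for the antibody-specific fine-tuning run, which in the real pipeline returns the cross-validated MCC to be maximized.

```python
import optuna

# Hedged sketch of the TPE-based search: a univariate TPE sampler proposes
# learning parameters and a median pruner can stop unpromising trials early.

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128])
    grad_clip = trial.suggest_float("grad_clip", 0.1, 10.0, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    # Stand-in score so the sketch runs; in practice this would be the
    # cross-validated MCC of a fine-tuning run with these parameters.
    return -((lr - 1e-3) ** 2) - (dropout - 0.2) ** 2

study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(multivariate=False),
    pruner=optuna.pruners.MedianPruner(),
)
study.optimize(objective, n_trials=100)
print(study.best_params)
```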
The antibody-specific training occurs only for the virus encoder and decoder, while all parts handling antibody data remain frozen. In all training procedures, we employed early stopping by selecting the model with the highest MCC from all epochs. Algorithm 3 displays the complete procedures for comparing with both works, which include pretraining and antibody-specific fine-tuning.

[Algorithm 3: comparison with bNAb-ReP in function evaluate_by_repeated_cross_validation and with SLAPNAP in function evaluate_by_nested_cross_validation.]

Comparing with the other works is a costly operation because it implies fine-tuning for each antibody. Therefore, we resorted to a simplified method to select the best model architecture. We first found the ideal hyperparameters on CATNAP. Then, for each antibody in bNAb-ReP, we trained on the rest of CATNAP (excluding the records having that antibody) and evaluated using the data containing that antibody. This is similar to the procedure used for comparing with the other works, but without fine-tuning per antibody. Table 1 displays the results for the model selection. All models had similar results, and the best MCC is recorded for FC-ATT-GRU and 6CDR-FC-GRU. Both networks had aminoacid properties as input. Between the two, we selected FC-ATT-GRU for comparison with bNAb-ReP and SLAPNAP because it is a more practical model; determining the CDRs complicates the input processing while providing only a minor performance advantage. Also, an input formed out of the aminoacid properties shows better performance while lowering the dimensions of the tensors and speeding up the computation. Table 2 shows the results for the fine-tuned FC-ATT-GRU model versus bNAb-ReP, and Table 3 compares FC-ATT-GRU with SLAPNAP.

[Note to Table 2: the numbers between parentheses are the standard deviations (calculated with N−1 degrees of freedom) of the metrics recorded on the 100 rounds of cross-validation. The bNAb-ReP metrics are taken from Supplementary Table S1 of Rawi et al. (2019), except for the standard deviations, which we recomputed by running the bNAb-ReP software to ensure the same calculation method as in our work. The boldface values highlight the best metrics between bNAb-ReP and our model.]

Discussion

Our approach yields substantially better results in terms of averaged cross-validated metrics compared to the other methods: 0.75 versus 0.66 (bNAb-ReP) and 0.71 versus 0.43 (SLAPNAP) for the MCC, 0.89 versus 0.84 (bNAb-ReP) and 0.88 versus 0.83 (SLAPNAP) for the AUC, and 0.89 versus 0.85 (bNAb-ReP) and 0.87 versus 0.83 (SLAPNAP) for accuracy. We recommend comparing the models by the MCC, since it is a more discriminative metric that typically yields lower values. The nested cross-validation is a very stringent evaluation methodology and a computationally taxing one. The results agree with this, and we obtain slightly lower performance under this evaluation procedure: 0.71 versus 0.75 for the MCC, 0.88 versus 0.89 for the AUC, and 0.87 versus 0.89 for accuracy. If we only pretrain (without fine-tuning for a specific antibody), we still achieve decent results - 0.55 to 0.57 MCC, 0.82 to 0.84 AUC, and 0.79 to 0.80 accuracy - as shown in Table 1. The various models provide very similar results.
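Since we recommend the MCC for model comparison, the following minimal sketch (our own code, toy labels only) shows how it is computed from the confusion-matrix counts.

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
print(mcc(y_true, y_pred))   # 0.5
```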
Knowing the epitope/paratope is valuable and can provide insights into the mechanics of the antibody-antigen interaction. Furthermore, explainable models tend to be more trusted, and the analysis of the features might serve as an added validation. NNs usually require more effort to be made explainable, but this is achievable. A possible solution to this challenge is Grad-CAM++ (Chattopadhay et al., 2018). This method makes NNs explainable by considering the gradients that flow through the network's layers. An alternative is to use methods for finding feature importance that treat the model as a black box, such as Williamson et al. (2021b) and Williamson et al. (2022). The Metropolis-Hastings algorithm from Yu et al. (2018) and Yu et al. (2019) is another method for feature selection, which the authors combined with an SVR (Cortes and Vapnik, 1995; Drucker et al., 1997). The SVR is used to evaluate the states sampled by the Metropolis-Hastings algorithm. However, we believe this method can be combined with other models as well, such as an NN. Another drawback of our model is that, at the moment, it does not tackle regression. However, this extension is also feasible. One challenge related to regression is the handling of the censored values, which are the values expressed as open intervals, such as '>50'. In the current work, we focused on recurrent networks and transformers; however, convolutional networks are another architecture that might be useful. Experimenting with convolutional NNs for modeling the antibody-virus interaction is a theme that could be explored in future works. (Note to Table 3: the numbers between the parentheses are the standard deviations, calculated with N-1 degrees of freedom, of the metrics recorded during the nested cross-validation. The metrics for SLAPNAP were obtained by running the script available from SLAPNAP. The boldface values highlight the best metrics between SLAPNAP and our model.)

Code and data availability
The code and data underlying this article are available on Git and Zenodo (Dănăilă, 2022). The Git repository contains several branches.

Conclusion
The main ideas of our article are to leverage both antibody and virus sequences to capture more generic relationships, instead of focusing on specific antibodies that may have less data available, and to take advantage of the full CATNAP dataset through NNs and transfer learning. It is known that NNs are versatile and often outperform other types of algorithms on different tasks, especially when the available data is large. Nevertheless, their training and hyperparameter tuning are computationally expensive and complex. In the current work, we used a modern hyperparameter tuning method, the TPE (Bergstra et al., 2011), to automate the process and find a suitable setup. We combined recurrent, fully connected, and attention layers to model the relationships between the antibody and virus sequences. We also looked into transformers and multitask learning, but those did not bring any meaningful advantage. While the transformer architecture is considered state-of-the-art in natural language processing, for our given task the data might be insufficient to derive benefits from this type of network. The amino acids were expressed in multiple ways: static properties, one-hot encodings, or learned features.
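As a rough illustration of the combination just described (fully connected, attention, and recurrent layers over per-residue amino-acid property vectors, with separate antibody and virus encoders feeding a classifier), here is a minimal PyTorch sketch in the spirit of FC-ATT-GRU; all layer sizes, module names, and the exact attention form are assumptions, not the published configuration.

import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    # Hypothetical encoder: property vectors -> fully connected layer -> GRU
    # -> attention pooling into a fixed-size summary vector.
    def __init__(self, n_props=16, hidden=64):
        super().__init__()
        self.fc = nn.Linear(n_props, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)   # additive attention scores

    def forward(self, x):                 # x: (batch, seq_len, n_props)
        h, _ = self.gru(torch.relu(self.fc(x)))
        w = torch.softmax(self.att(h), dim=1)   # weights over positions
        return (w * h).sum(dim=1)         # attention-weighted summary

class SensitivityModel(nn.Module):
    # Antibody and virus encoders feeding a shared classifier head.
    def __init__(self, n_props=16, hidden=64):
        super().__init__()
        self.ab_enc = SeqEncoder(n_props, hidden)
        self.virus_enc = SeqEncoder(n_props, hidden)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, ab, virus):
        z = torch.cat([self.ab_enc(ab), self.virus_enc(virus)], dim=-1)
        return self.head(z).squeeze(-1)   # logit of neutralization sensitivity

# Example forward pass on random property-encoded sequences.
model = SensitivityModel()
logits = model(torch.randn(8, 300, 16), torch.randn(8, 900, 16))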
Overall, the static properties gave the best results and were also the most computationally efficient since they had a smaller dimension compared to the other approaches. Considering only the CDRs instead of the whole variable region for the antibody complicates the data preprocessing and provides a non-significant increase in predictive performance. Further research ideas are related to making the model explainable, investigating convolutional architectures, handling regression and censored data, finding additional data sources for network pretraining, and building a hybrid method that takes advantage of both sequence and structure data. Financial Support: none declared. Conflict of Interest: none declared.
2022-07-26T06:17:06.121Z
2022-07-25T00:00:00.000
{ "year": 2022, "sha1": "530bd0afa8a915c63a54550935af5ab953e2b7ec", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btac530/45226767/btac530.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5c788183293a469d5f93d840c967af0968a0b74c", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
249312363
pes2o/s2orc
v3-fos-license
A prospective feasibility study evaluating the 5x-multiplier to standardize discharge prescriptions in cancer surgery patients

Background
We designed a prospective feasibility study to assess the 5x-multiplier (5x) calculation (eg, 3 pills in last 24 hours × 5 = 15) to standardize discharge opioid prescriptions compared to usual care.

Methods
Faculty-based surgical teams volunteered for either 5x or usual care arms. Patients undergoing inpatient (≥ 48 hours) surgery and discharged by surgical teams were included. The primary end point was discharge oral morphine equivalents. Secondary end points were opioid-free discharges and 30-day refill rates.

Results
Median last 24-hour oral morphine equivalents were similar between arms (7.5 mg 5x vs 10 mg usual care, P = .830). Median discharge oral morphine equivalents were lower in the 5x arm (50 mg 5x vs 75 mg usual care, P < .001). Opioid-free discharges included 33.5% of 5x arm vs 18.0% of usual care arm patients (P < .001). Thirty-day refill rates were similar (15.3% 5x vs 16.5% usual care, P = .742).

Conclusion
The 5x-multiplier was associated with reduced opioid prescriptions without increased refills and can be feasibly implemented across a diverse surgical practice.

INTRODUCTION
It is estimated that there are 72-81 opioid prescriptions per 100 people in the United States, and the resulting surplus of diverted opioids in the community has triggered both state and national efforts to combat this public health crisis [1][2][3][4]. Although government-level interventions are well intentioned, these efforts could instead be refocused on providers' opioid prescribing protocols, which are often discordant with patients' actual needs [5]. This would have a tremendous impact, as opioid prescriptions written by surgeons are often the initial exposure for patients with opioid dependence. Although surgeons are not solely responsible, new persistent opioid use has been identified in up to 10-15% of cancer surgery patients [6][7][8][9][10]. Practices relying heavily on provider bias, rather than a patient-centered approach, are associated with excess opioid dissemination and potential community diversion [11][12][13][14][15]. We previously identified variations in opioid prescribing patterns among surgical providers and found that excess opioids were being prescribed across all our department's abdominal cancer sites [6,12,16,17]. This led to the creation of our "4 Pillars" of perioperative opioid reduction, which addressed (1) provider and patient education, (2) limiting the initial peak bolus of inpatient opioid use in the first 24 hours, (3) nonopioid bundles and purposeful weaning to zero or near-zero by the last 24 hours, and (4) standardizing discharge prescription volumes [10,18,19]. Our early efforts focused on the first pillar (education), which was associated with a dramatic shift in department-wide practice from the time we started these efforts in August 2018 to our first reassessment in summer 2019, with a reduction in median discharge oral morphine equivalent (OME) from 200 mg across our department in 2017 to 50 mg by summer 2019 [18,19]. Although progress was made in reducing initial inpatient opioid exposure (second pillar) and improving inpatient weaning (third pillar), one provider-based problem that remained was calculating discharge opioid prescriptions (fourth pillar).
We then introduced a novel "5x-multiplier" calculation to tailor "right size" discharge opioid prescriptions to each patient's last 24-hour opioid use. The calculation was motivated by a March 2017 Centers for Disease Control and Prevention Morbidity and Mortality Weekly Report, which demonstrated an inflection point at a 5-day prescription length in predicting long-term opioid dependence in opioid-naive patients [20,21]. This, along with general recommendations in 2017 to limit prescription sizes to less than 1 week, led to the idea of multiplying actual use on the last inpatient day by 5. Our retrospective cohort study of hepatopancreatobiliary (HPB) patients treated in 2018-2019 demonstrated a 67% reduction in discharge opioid volumes compared to usual care, with no reflexive increase in 30-day refill rates [20], suggesting that adopting a patient-centered model for opioid volume calculations can overcome provider bias and variation in discharge opioid volumes. To expand on our prior study and assess the generalizability of the 5x-multiplier, we designed a prospective feasibility study within a quality improvement (QI) initiative to evaluate the 5x-multiplier in a broad cancer surgery cohort of non-HPB patients. We hypothesized that the impact seen in our study of HPB patients would be realized across the full range of surgical oncology procedures and that the 5x-multiplier could be easily implemented across a diverse surgical practice. Furthermore, to address common concerns about underprescribing, we sought to measure refill rates and volumes to determine whether the 5x-multiplier can accurately estimate a patient's outpatient opioid needs.

Study Design and Team Assignments.
This prospective feasibility study was designed in summer 2019 and approved by all section chiefs in the Department of Surgical Oncology at The University of Texas MD Anderson Cancer Center. The protocol was approved in August 2019 by the institutional QI Assessment Board for a September 9, 2019, to December 31, 2019, study. Because this was a nonrandomized study of 2 discharge methods within the standard of care, informed consent was waived. The analyses and publication of these data were approved by the Institutional Review Board (PA17-0726). No changes were made to the study protocol during the study period. There were 5 non-HPB specialties (sarcoma, colorectal, gastric/peritoneal, endocrine, general surgery) included. Inpatient provider teams (faculty, advanced practice providers [APPs], and fellows) were voluntarily assigned by individual faculty to the 5x or usual care (UC) arms and educated on the 5x-multiplier using a distributed slide deck with screen shots showing where daily opioid use is located in our electronic health record (to calculate the last 24 hours of inpatient use). A laminated pocket card showed all the faculty names and which arm each volunteered for, so that APPs and fellows would know the faculty choice. Faculty who did not have an opinion were assigned based on predicted case volume to balance total patients in both arms. There was no attempt to balance all case metrics (eg, operative extent, open versus minimally invasive, expected hospitalization), as this was not a randomized clinical trial. Considering that this study was conducted to assess the feasibility of the 5x-multiplier in a convenience sample, no additional measures were taken to balance cohorts.
Additionally, no mandate was given to inpatient APPs or fellows regarding compliance with the faculty-volunteered study arm, and crossover was allowed (ie, those in UC could be discharged with 5x-multiplier volumes and vice versa). UC was defined as the discharging provider's (APP or fellow) discretion, and 5x was defined as taking the last 24-hour OME and multiplying by 5 (eg, if the last 24-hour use was 3 tramadol pills, the discharge prescription would be 3 × 5 = 15 tramadol pills). The primary end point was median discharge OME. The secondary end points were opioid-free discharges, 30-day refill rates, and refill volumes. These were compared by intent-to-treat (by assigned arm) and by per-protocol analyses (by actual 5x or non-5x prescription).

Patients and Discharge Process.
Patients undergoing inpatient surgery were eligible for inclusion; exclusion criteria were surgical hospitalizations with discharge < 48 hours, discharge by a primary team not involved in the study (eg, inpatient rehabilitation team), and duplicate encounters in the medical record for the same admission. Use of a standardized discharge summary (Supplementary Fig 1) was encouraged to assist with 5x OME calculations (in addition to a laminated pocket card with sample OME conversions) and to track opioid and multimodal pain medications. Discharges were processed by APPs and fellows only.

Statistical Analyses and Reporting.
Demographic, clinical, inpatient, and discharge medication prescription data and 30-day opioid refill data were obtained through the electronic medical record. Last-24-hour, discharge, and 30-day refill opioid volumes were converted to OME with institutionally approved tables (eg, 7.5 mg OME = 5 mg oxycodone = 7.5 mg hydrocodone = 75 mg tramadol). Nonparametric statistical comparisons were performed with the Mann-Whitney U test for continuous variables and the χ² test or Fisher exact test (when percentages were < 5%) for categorical values using SPSS version 22 (IBM, Armonk, NY). Unadjusted analyses were performed given the nonrandomized, pragmatic nature of this study. All tests were 2-sided. Figures were assembled with GraphPad Prism version 8 (GraphPad Software, La Jolla, CA). The "Standards for QUality Improvement Reporting Excellence" 2.0 guidelines were used during the design of this study and as a framework for the reporting of our findings [22].

RESULTS
Rounding Teams and Operations.
Twenty-two attending surgeons in 5 surgical specialties voluntarily enrolled to participate in either study arm (Supplementary Table 1). A total of 753 consecutive cases were performed by these participating surgeons during the predetermined 4-month study period. Three hundred and eight cases were excluded because of discharge in < 48 hours or discharge by another primary team; 22 cases were excluded owing to being discharged by a surgical team in a different study arm; and 14 cases were excluded because of duplicate records for patients who underwent multiple procedures during the same admission. A total of 409 index hospitalizations were considered evaluable for this study, with 200 in the UC arm and 209 in the 5x arm (Fig 1).

Patients and Primary Analyses.
Table 1 depicts the demographics and clinical characteristics of the patients included in this feasibility study. Both groups had similar baseline demographics regarding age, sex, smoking status, and body mass index.
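To make the discharge arithmetic defined above concrete, here is a small Python sketch, assuming the OME conversion factors from the institutional example (7.5 mg OME = 5 mg oxycodone = 7.5 mg hydrocodone = 75 mg tramadol); the function and variable names are illustrative only.

# OME per milligram, derived from the institutional example in the text.
OME_PER_MG = {"oxycodone": 1.5, "hydrocodone": 1.0, "tramadol": 0.1}

def discharge_rx_5x(last_24h_doses):
    """last_24h_doses: list of (drug, total mg taken in the last 24 hours).
    Returns (last-24-hour OME, discharge OME under the 5x-multiplier)."""
    ome_24h = sum(OME_PER_MG[drug] * mg for drug, mg in last_24h_doses)
    return ome_24h, 5 * ome_24h

# The worked example from the text: three 50-mg tramadol pills in the last
# 24 hours -> 15 mg OME -> a 75 mg OME discharge prescription (15 pills).
ome_24h, rx_ome = discharge_rx_5x([("tramadol", 150)])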
There was, however, a higher proportion of patients with preoperative opioid prescriptions listed in the medical record in the UC arm (45.0% UC vs 25.4% 5x, P < .001). A greater proportion of patients in the 5x arm underwent minimally invasive surgery (26.0% UC vs 39.7% 5x, P = .003) and received regional anesthetic blocks (41.0% UC vs 56.9% 5x, P = .001). There was no difference in 30-day readmission rates (10.0% vs 10.0%, P = .999). Median last-24-hour OME was similar between groups (10 mg [interquartile range {IQR}: 0-20 mg] UC vs 7.5 mg [IQR: 0-20 mg] 5x, P = .830). For the primary end point, median discharge OME was greater in the UC arm (75 mg OME [IQR: 25-150 mg] UC vs 50 mg OME [IQR: 0-100 mg] 5x; P < .001). Figure 2, A shows the difference in median discharge OME between the arms, with the 5x arm's IQR overlapping zero. More 5x arm patients were discharged opioid-free (18.0% UC vs 33.5% 5x; P < .001).

Compliance and Refills by Intent to Treat.
We retrospectively assessed utilization of the 5x-multiplier and 30-day refill prescriptions in both study arms (Table 2). In the 5x arm, 58.4% of cases used the 5x-multiplier to calculate discharge opioid prescriptions, 16.7% were sub-5x, and 24.9% were over-5x. In the UC arm, 28.0% of patients received actual 5x prescriptions, 17.5% were sub-5x, and 54.5% were over-5x. Opioid refill rates for the UC and 5x arms were similar at 16.5% and 15.3%, respectively (P = .742). By intent to treat, median initial opioid refill size was 355 mg OME in the UC arm vs 200 mg OME in the 5x arm, not accounting for crossovers (P = .069). Figure 2, B shows the difference in the point estimates, with a narrower IQR in the 5x arm.

Actual Prescriptions and Associated Refills.
To assess the impact of 5x-standardized prescriptions on discharge OME and prescription volumes agnostic of study arm, patients receiving 5x prescriptions in both study arms were compared to those who received non-5x prescriptions. Patients receiving actual 5x prescriptions had a median discharge OME of 0 mg (IQR: 0-100 mg) compared to 100 mg (IQR: 50-200 mg) for patients receiving non-5x prescriptions (P < .001, Fig 3, A; Table 2). Supplementary Figure 2 depicts the slope (the multiplier) of the line representing the real-world multiplier applied to the x axis (last-24-hour OME) to calculate the discharge prescriptions (y axis). Using the 5x-multiplier blunts the rate of rise by objectively calculating discharge OME from the last-24-hour OME regardless of its actual value. The stack of blue dots (UC arm patients) in the left corner of Supplementary Figure 2 also shows the number of patients who, despite being opioid-free in the last 24 hours, still received discharge opioids.

DISCUSSION
As an expansion of our work assessing the 5x-multiplier calculation to standardize discharge prescriptions in HPB surgical patients, we designed a prospective feasibility study to test the generalizability of this method in a diverse cancer surgery practice. Using the 5x-multiplier calculation of last-24-hour inpatient opioid use was associated with a 33% lower median discharge prescription opioid volume in comparison to usual care. Additionally, one third of patients in the 5x arm were discharged opioid-free vs 18% of UC arm patients. Similar to our previous retrospective HPB study [20], in this prospective feasibility study there was no difference in 30-day opioid refill rates.
For the first time, we also found evidence that the 5x-multiplier was a "right size" calculation for most patients: refills for patients in both arms receiving 5x prescriptions were dramatically smaller, highlighting how the patient-centered 5x-multiplier's positive effects go beyond the initial opioid dissemination and may provide guardrails for limiting the volume of refills as well. UC patients treated by crossover using a 5x-multiplier prescription derived benefits similar to patients assigned to the 5x arm, showing how the proverbial "rising tide" of a positive QI effort can lift all boats. These promising findings provide supporting rationale for studying the 5x-multiplier in a prospective, randomized fashion. Standardization of discharge opioid prescriptions remains a pragmatic challenge, as it requires a departure from both the "procedure-specific" one-size-fits-all and the "provider-specific" usual care approaches [13,18,23]. Provider bias is one of the primary barriers to reducing the volume of opioids prescribed at discharge [12,24]. Hill et al were the first to publish that the amount of postdischarge opioids consumed at home was highly correlated with last-24-hour inpatient opioid usage, suggesting that a patient-centered approach to discharge opioid volumes could be satisfied by categorizing patients into a 3-tier discharge volume guideline [25]. Based on this concept of reviewing actual opioid use in the last 24 hours, the 2016-2017 federal recommendations to limit initial prescriptions to ≤ 7 days, and Centers for Disease Control and Prevention data on increased long-term dependence in patients prescribed more than 5 days of initial volume [21], we introduced the 5x-multiplier in 2017-2018 [20]. The 5x-multiplier provides a patient-centered paradigm to simplify decision making and minimize both positive and negative provider biases. It also removes the guesswork around the "optimal" time of weaning off opioids, which can range from only a few days to 2 weeks [26]. Bleicher et al have published studies evaluating both "2x" and "4x" multipliers for patient-centered opioid prescribing practices [13,27]. In the present study, we found that nearly twice as many patients received zero opioids at discharge in the 5x arm (33.5%) as in the UC arm (18.0%). Although this study did not allow for a direct comparison to other tiered protocols for standardizing postoperative opioid prescriptions (eg, prescribing 5-15-30 pills depending on the tier of use in the last 24 hours), one theoretical benefit of the 5x-multiplier over a tiered system is that patients weaned to zero before discharge are not prescribed any outpatient opioids [28,29]. An "opt-in" strategy was highlighted in the randomized clinical trial by Zhu et al, which found that less than half of the patients undergoing cervical endocrine surgery opted in for opioid prescriptions, and of those who opted out, none required rescue opioid prescriptions, suggesting that patients who do not need opioids at discharge are unlikely to desire them later [30]. These findings support the goal of aggressively weaning patients to zero opioids by the day of discharge so that the multiplier results in an opioid-free discharge. A major barrier to adopting a patient-specific model for reducing discharge opioid prescriptions is the discharging provider's concern about increased refill requests or reduced patient satisfaction [12,31,32].
However, a systematic review by Bicket et al found that 67%-92% of surgical patients had unused opioids, suggesting that overprescription is the far greater problem [5]. The fear that standardizing opioid prescriptions can lead to increased refill requests contributes to overprescribing patterns, which may facilitate nonmedical use and/or community diversion. In this prospective feasibility study, 30-day refill rates were remarkably similar between arms (16.5% UC vs 15.3% 5x). As the 30-day refill rate serves as a surrogate for patient opioid needs, these findings support that the 5x-multiplier can be used without fear of excessive refill requests. Interestingly, we also found that patients who had actual 5x discharges in either study arm had both lower refill rates and substantially lower refill volumes (131 mg vs 488 mg, compared to sub-5x and over-5x), suggesting that 5x is "just right." These results should be considered in the context of the opioid reduction education programs in our department and at other institutions [33][34][35]. The current prospective feasibility study was initiated in September 2019, about 1 year following our initial opioid reduction education program in August 2018, in an expanded spectrum of inpatient cancer surgery, to test the generalizability of the approach. Even in the UC arm, the discharge OME was 75 mg (eg, 15 50-mg tramadol pills), which is remarkably lower than the median 200 mg (40 pills) that we reported in the pre-education era before summer 2018 [19]. This suggests that the providers in both study arms in fall 2019 were already limiting opioid prescriptions. On top of these previous gains, further standardizing discharge opioids with the 5x-multiplier led to an even greater reduction in both initial (discharge) and secondary (refill) opioid volumes. Additionally, although this feasibility study was conducted in different surgical sections than our previous study of the 5x-multiplier in HPB surgery, it is possible that overlapping APPs/fellows between services had a positive "spillover" effect on opioid prescribing practices [20]. There are inherent limitations to this prospective cohort study, which was designed with feasibility and pragmatic intent. The most salient is the nonrandomized design, chosen owing to a lack of equipoise among department providers for a prospective randomized trial. Although we hypothesized that the 5x-multiplier would reduce discharge opioids, the most important result to show our own colleagues was its feasibility. We did not attempt to balance the treatment arms or enforce compliance, and crossovers were allowed. Although this decision was intentional, it is reflected in the imbalance between the study arms (differences in length of stay, minimally invasive versus open approach, and the proportion of regional blocks to epidurals), which limits the ability to perform exact comparisons between arms. This heterogeneity may be attributable to differences in referral composition and/or surgeon pain management philosophy, and although each surgical section was relatively evenly distributed between both arms, certain biases remain. Also, discharge pain scores were not analyzed, but presumably clinical decisions were made based on patient needs and satisfaction, as seen in the crossover rates.
A greater proportion of patients in the UC arm had preoperative opioid prescriptions listed in their charts [36,37], although we have found in reality that many of these "as needed" opioid prescriptions are either not used or rarely used and simply not updated in the medical record. There was crossover in both arms because there was no enforcement of final prescriptions, with 58% compliance in the 5x arm and 72% compliance in the UC arm. Although at first glance those numbers may seem disappointing in that they are not 90%, even in a recent prospective trial of a 3-tier discharge prescription model, compliance with correct prescription volumes was 91% in the lowest opioid users but down to 61% in the highest users (> 30 mg OME in the last 24 hours). Thus, our real-world compliance of 58%-72% in a QI study seems reasonable and externally valid [28]. Lastly, there is a possibility that refill rates were underreported if occasional patients received opioid refills elsewhere (outside of our hospital), although this would be expected to be similar in both arms and is furthermore unlikely because most patients call their original surgeon for postoperative issues, including refills. Despite these limitations, this study represents a first-of-its-kind, 22-team prospective evaluation of the 5x-multiplier concept across a diverse practice of cancer surgery within a pragmatic QI initiative that did not require mandatory compliance but still yielded dramatic results, validating the feasibility and positive outcomes of using the 5x-multiplier. Additionally, this pragmatic study sets up a potential randomized controlled trial comparing the 5x-multiplier to a 3-tier discharge protocol and/or provider choice (usual care) [28]. In conclusion, this prospective feasibility study within a QI initiative demonstrated that use of the 5x-multiplier to standardize discharge prescriptions was associated with reduced discharge OME, more opioid-free discharges, similar refill rates, and smaller refill volumes for patients undergoing inpatient cancer surgery. These findings provide evidence that the 5x-multiplier can be feasibly implemented across a spectrum of cancer surgery sites and provide clinical equipoise to justify its evaluation in a randomized clinical trial.

Author Contribution
TPD, TEN, and CDT contributed to the acquisition of data and drafting of the manuscript. TPD, TEN, YJC, and CDT contributed to the data analysis. TPD, TEN, RWD, YJC, WLD, EMA, MLB, CPS, CLR, MHGK, JNV, GJC, BDB, NDP, EGG, JEL, and CDT contributed to the conception and study design, revised the article critically for important intellectual content, and have given final approval of the version to be published.

Ethics Statement
This study was approved by the Institutional Review Board of The University of Texas MD Anderson Cancer Center with a waiver of informed consent.
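As a closing illustration of the two analytic views used in the Results (intent-to-treat by assigned arm versus per-protocol by the prescription actually written), the following pandas sketch groups made-up records both ways; the column names and numbers are illustrative and do not reproduce the study data.

import pandas as pd

# Fabricated example records (mg OME), illustrative only. 'actual_5x' flags
# whether the written prescription equaled 5x the last-24-hour OME,
# regardless of assigned arm, mirroring the per-protocol grouping.
df = pd.DataFrame({
    "arm":           ["UC", "UC", "UC", "5x", "5x", "5x"],
    "actual_5x":     [False, True, False, True, True, False],
    "discharge_ome": [150.0, 0.0, 200.0, 50.0, 75.0, 120.0],
    "refill_ome":    [355.0, 0.0, 500.0, 0.0, 200.0, 310.0],
})

# Intent-to-treat view: compare by assigned arm.
itt = df.groupby("arm")[["discharge_ome", "refill_ome"]].median()

# Per-protocol view: compare by the prescription actually written.
per_protocol = df.groupby("actual_5x")[["discharge_ome", "refill_ome"]].median()
print(itt, per_protocol, sep="\n")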
2022-04-29T15:40:08.024Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "66f5bf96faf46efd06ebb27268640ad6a1f8e079", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.sopen.2022.04.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5158c5f7569014e64c0021a4d402f76327926d1f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244779281
pes2o/s2orc
v3-fos-license
Lyssaviruses and the Fatal Encephalitic Disease Rabies

Lyssaviruses cause the disease rabies, a fatal encephalitic disease resulting in approximately 59,000 human deaths annually. The prototype species, rabies lyssavirus, is the most prevalent of all lyssaviruses and poses the greatest public health threat. In Africa, six confirmed and one putative species of lyssavirus have been identified. Rabies lyssavirus remains endemic throughout mainland Africa, where the domestic dog is the primary reservoir, resulting in the highest per capita death rate from rabies globally. Rabies is typically transmitted through the injection of virus-laden saliva via a bite or scratch from an infected animal. Due to the inhibition of specific immune responses by multifunctional viral proteins, the virus usually replicates at low levels in the muscle tissue and subsequently enters the peripheral nervous system at the neuromuscular junction. Pathogenic rabies lyssavirus strains inhibit innate immune signaling and induce cellular apoptosis as the virus progresses to the central nervous system and brain using viral protein-facilitated retrograde axonal transport. Rabies manifests in two different forms - the encephalitic and the paralytic form - with differing clinical manifestations and survival times. Disease symptoms are thought to be due to mitochondrial dysfunction, rather than neuronal apoptosis. While much is known about rabies, many gaps remain in knowledge about the neuropathology of the disease. It should be emphasized, however, that rabies is vaccine-preventable, and dog-mediated human rabies has been eliminated in various countries. The global elimination of dog-mediated human rabies in the foreseeable future is therefore an entirely feasible goal.

INTRODUCTION
Lyssaviruses are responsible for rabies, which is arguably the deadliest encephalitic disease known. The prototype, rabies lyssavirus (RABV), is thought to be able to infect all terrestrial mammals. Transmission is through virus-laden saliva, typically via the bite of an infected animal, but sometimes through other means such as scratches and, on rare occasions, organ transplants (1,2). The genus Lyssavirus (family Rhabdoviridae) is presently composed of 17 recognized viral species and one putative species (3). All lyssaviruses are bullet-shaped particles containing negative-sense RNA genomes of approximately 11,000 nucleotides in length. The genome encodes 5 structural proteins, namely the nucleoprotein, phosphoprotein, matrix protein, glycoprotein, and the polymerase (5'-N-P-M-G-L-3'), with a 5'-3' transcriptional bias (4,5). The N protein encapsidates the viral RNA and, together with the P and L proteins, forms the ribonucleoprotein (RNP) complex, which can initiate viral transcription and replication (6). The M protein condenses the RNP into the characteristic bullet shape and recruits the RNP to the cellular membrane during replication. The M protein is also essential for the budding of the enveloped virus from the cell and specifically interacts with the G protein - also known as the transmembrane spike protein - which is the primary antigenic determinant (7,8). RABV is not only the type species of the genus, but by far poses the most significant public health threat among all the lyssaviruses. The domestic dog is the primary reservoir for RABV in dog-rabies endemic countries, but several other terrestrial mammalian species can maintain transmission - most notably carnivores such as raccoons, skunks, foxes, and jackals.
THE GLOBAL BURDEN OF DOG RABIES
Globally, an estimated 59,000 people die from dog-mediated rabies every year, approximately 40% of whom are children under the age of 15 years (9). Rabies affects the poorest and most underserved communities, with the burden being greatest in the developing countries of Africa and Asia (10). However, the disease is seriously underreported for a variety of reasons and remains among the most significant diseases of neglect in the world (11). By continent, Africa has the second highest burden of rabies, with an estimated 23,500 deaths annually, and has the highest per capita death rate (9). RABV is endemic throughout mainland Africa, with only a handful of island nations having never detected rabies in domestic or wildlife species (e.g., La Réunion, Mayotte, Mauritius) (12).

PATHOPHYSIOLOGY
Viral Entry, Spread and Proliferation
The most common method of viral entry is through the injection of virus-containing saliva into muscle tissue or other peripheral tissue through the bite of an infected animal (Figure 1). After inoculation, RABV typically infects muscle cells - entry is thought to be facilitated through the nicotinic acetylcholine receptor - and replicates therein at a low rate (16). The virus remains localized to the inoculation site for variable periods, which may contribute to the variable incubation period characteristic of rabies (17). In contrast, in the case of higher titers of inoculum, RABV can infect motor endplates without the need for initial replication in the muscle (18). RABV gains entry into the peripheral nervous system (PNS) via motor endplates at the neuromuscular junction, but the exact means of virus internalization remains poorly understood. RABV travels through the PNS towards the central nervous system (CNS) via microtubule-dependent retrograde fast axonal transport (19,20). The virus travels from neuron to neuron, replicates, and continues its progression towards the CNS and the brain (21). This neuronal spread is facilitated by the p75NTR receptor, which is non-essential for infection but facilitates directed and more rapid transport of RABV to the CNS (22). The L protein manipulates microtubules for improved transport efficiency (23), while the M protein facilitates the depolymerization of microtubules, resulting in improved viral transcription and replication efficiency (24) (Figure 1). While retrograde transport occurs at an approximate rate of 50-100 mm per day in humans [with species-dependent variation (20,25)], evidence also suggests that RABV undergoes active, G protein-dependent anterograde transport in peripheral neurons - such as dorsal root ganglion (DRG) neurons - at a rate three times faster than that of retrograde transport (25). The significance of this anterograde transport mechanism is unclear, but recent evidence highlights its importance in the spread of RABV through the PNS (including to non-neuronal organs) after centrifugal spread from the CNS (26), contrasting with previous evidence suggesting that RABV spreads by both axonal and trans-synaptic transport exclusively in the retrograde direction (21,27). Once in the CNS, RABV continues to spread via retrograde axonal transport, thought to be facilitated by metabotropic glutamate receptor subtype 2, a cellular entry receptor that is abundant throughout the CNS (28). The virus reaches the brainstem and subsequently the brain, where it proliferates and clinical symptoms manifest.
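As a back-of-the-envelope illustration of how the transport rates quoted above translate into time, the short sketch below converts bite-to-CNS distances into transit-day estimates at 50-100 mm per day; the distances are rough anatomical assumptions, and axonal transit is only one contributor to the highly variable incubation period discussed below.

# Illustrative arithmetic only: days of retrograde axonal transit at the
# human rates quoted in the text. Distances are rough assumptions.
def transit_days(distance_mm, rate_mm_per_day):
    return distance_mm / rate_mm_per_day

for site, distance_mm in [("shoulder", 300), ("hand", 700), ("foot", 1000)]:
    fastest = transit_days(distance_mm, 100)
    slowest = transit_days(distance_mm, 50)
    print(f"{site}: roughly {fastest:.0f}-{slowest:.0f} days of transit alone")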
From the brain, the virus spreads to the salivary glands along terminal axons via anterograde transport (29), where it continues to proliferate and is subsequently shed in the saliva for transmission to another host. RABV can also spread to peripheral, non-neuronal organs via anterograde transport and can be detected in these sites after the onset of clinical symptoms (21,26).

Symptoms, Disease Progression, Prevention, and Treatment
Rabies presents with a wide variety of clinical manifestations that vary depending on multiple factors, many of which remain unknown. However, the species of lyssavirus or the strain of RABV influences the presentation of clinical symptoms. For example, bat RABV infections more commonly present with tremors and involuntary twitching/jerking (myoclonus), while dog strains more frequently present with classical hydrophobia and aerophobia (30). Moreover, the presentation of symptoms localized to the wound was more common in bat rabies exposures than in dog-rabies exposures (30). Two forms of rabies can manifest, namely encephalitic (furious or classical) and paralytic (dumb) rabies. The encephalitic form of rabies is more common and presents in approximately 80% of patients, of whom between 50-80% present with classic symptoms such as hydrophobia and aerophobia - symptoms that are unique to rabies (31,32). However, the remaining symptoms are common to many encephalitic diseases, especially in African countries where diseases such as cerebral malaria are endemic, which can result in the misdiagnosis of rabies (33). Encephalitic rabies typically progresses to severe flaccid paralysis, coma, and death caused by multiple organ failure, in contrast to paralytic rabies, which manifests with prominent muscle weakness early in the course of illness (31). While there remains a gap in the understanding of why these two different forms of rabies manifest, it is known that the anatomical site of the exposure is unrelated (34). Initially, rabies symptoms were thought to be caused by large-scale neuronal cell death, but neuronal apoptosis is only stimulated during infection with strains of low pathogenicity (35,36). Rather, symptoms are thought to be due to neuronal cell dysfunction (35,37-41), partly induced by the increased production of nitric oxide (NO) via inducible nitric oxide synthase (iNOS) in neurons and macrophages (42-44). Elevated levels of NO produced by iNOS lead to mitochondrial dysfunction and, as a result, axonal swelling (44,45) - a pathology that is associated with the onset of symptoms (41,46) and hypothetically explains the development of encephalitic symptoms (47). Another mechanism behind neurological dysfunction and the onset of neurological symptoms has been shown to rely on a host-derived process that results in the loss of axons and dendrites as a means of preventing the spread of the virus (48). The survival time of patients manifesting paralytic rabies is approximately 41% longer than that of patients with encephalitic rabies (30,49), yet the incubation periods for both forms remain similar - ranging from 2 weeks to several months. In most cases, the incubation period in humans is 2-3 months, but exceptional cases have been documented with incubation periods of more than a year and even up to 8 years (50,51). There is no accepted treatment for rabies after the onset of clinical symptoms.
Palliative care is recommended for rabies patients, aimed at reducing suffering, and may temporarily prolong survival time, but in all but the most exceptional circumstances the victim succumbs to the disease (32,50).

IMMUNE RESPONSE AND IMMUNE EVASION
Upon initial infection, the innate immune response is triggered in the periphery, and evidence suggests that this response is partially effective against even the most pathogenic strains, with some viral particles being eliminated (57). However, further clearance is not achieved, as pathogenic strains poorly stimulate and inhibit the activation and maturation of dendritic cells (DCs), resulting in a poorer antibody response (58-60). This prevention of DC maturation is achieved through the inhibition of the interferon (IFN) autocrine feedback loop that depends on JAK-STAT signaling, which is specifically inhibited by the P protein (61). The ability of lyssaviruses to evade the immune response is directly correlated with their pathogenicity, with pathogenic strains inducing a minimal response and successfully evading immune clearance (18). All the RABV proteins are multifunctional, with roles in viral entry, replication, and spread, as well as in the sequestration of the immune system - either directly or indirectly (62). This ability relies solely on the immune-suppressive capabilities of the viral proteins - primarily the P, G, and N proteins. The P protein is typically involved in sequestering the innate immune response by inhibiting the production of multiple antiviral products such as MxA, OAS1, and IFN-stimulated gene products (62). Furthermore, the P protein inhibits type I IFN responses and subsequent innate and adaptive immune responses through the inhibition of various IFN-related signaling pathways (63-67). The evasion of IFN responses in infected neurons is likely essential for the spread of RABV through the PNS, enabling the virus to reach the brainstem and eventually the salivary glands for spread to a new host (57). Similarly, the N protein is also predominantly involved in the sequestration of the innate response, primarily through the inhibition of RIG-I activation (68-70). Apoptosis in macrophages, T cells (including infiltrating T cells in the CNS), and microglia plays an important role in immune evasion and is stimulated by the G protein of pathogenic strains (71,72), which appears to assist in the effective infiltration, replication, and spread of the virus in the CNS (36,73,74).

DISCUSSION
While rabies has arguably been recognized for thousands of years, many gaps remain in scientific knowledge of the disease and its causal agents. The rapid detection of 10 novel lyssaviruses in the past two decades raises multiple public health concerns, with their broader distribution and possible public health impact being as yet unknown (13,75). While information relating to many of the lyssavirus species remains poor, studies suggest that sustained spillover events from non-RABV lyssaviruses are likely to be rare, as almost all lyssaviruses - except for RABV and Australian bat lyssavirus (ABLV) - are restricted to a single host species (76). However, many lyssavirus species have only a single, or few, isolates, including the novel GBLV, which shares a recent common ancestor with ABLV (56). In addition, host shifts in areas where RABV is endemic are likely to remain undetected due to poor surveillance (76). While host shift events remain rare, their impact can be devastating.
North America alone is endemic for multiple terrestrial RABV variants, each resulting from a host shift event (77). While host shift events may be geographically restricted, the potential for the translocation of the virus through human means remains a distinct possibility and risk (78-81). For example, the largest epizootic in recorded history resulted from the human-mediated translocation of a raccoon from the south-east of the United States to the north-eastern states (82). Further evidence suggests that raccoon rabies was enzootic at low levels for many years before its detection, natural spread, and subsequent human translocation (83). The raccoon RABV variant now accounts for nearly 75% of all terrestrial rabies cases in the USA and has resulted in a significant increase in the number of human exposures in the areas where it is endemic (84). Thus, despite the rabies-related viruses not posing a significant health threat at present, continued efforts need to be made to ensure public health safety based on the limited knowledge and surveillance data available. Despite the availability of effective prophylactic treatment before the onset of symptoms, there remains no cure once rabies symptoms manifest. In addition, the majority of the available immunopathological knowledge pertains to RABV, with limited studies available for the rabies-related lyssaviruses. Therefore, there is a need for continued investigation into the mechanisms of infection, disease progression, and host biology, and for a better understanding of bat immunology. Above all, there is a dire need for improved global surveillance of all lyssaviruses. Given the significant public health threat posed by dog-mediated RABV, such surveillance data should play a critical role in the elimination of the disease from those dog populations where it is still rampant due to a failure to effectively break transmission through mass vaccination.

AUTHOR CONTRIBUTIONS
TS: Conception, preparation of first draft, editing and final review. LN: Conception, editing and final review. All authors contributed to the article and approved the submitted version.
2021-12-02T14:26:48.938Z
2021-12-02T00:00:00.000
{ "year": 2021, "sha1": "bc7e001582ff6d65b6bc33b7924783c76b2c3c2f", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.786953/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bc7e001582ff6d65b6bc33b7924783c76b2c3c2f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232061156
pes2o/s2orc
v3-fos-license
The clinical application of Filmarray respiratory panel in children especially with severe respiratory tract infections

Background
Respiratory tract infections (RTIs) are common diseases in children, and routine detection methods frequently fail to identify the infectious pathogens, especially viruses. The Filmarray respiratory panel (FARP) can reliably and rapidly identify viral and bacterial pathogens. This study aimed to evaluate the performance and clinical significance of FARP in children.

Methods
Children diagnosed with RTIs in the pediatric intensive care unit (PICU) were enrolled in this study. Nasopharyngeal secretion (NPS) samples were collected from these children, and the FARP assay for 17 pathogens and routine microbiological methods were performed. Clinical data of all patients were also collected and evaluated.

Results
A total of 90 children were enrolled into this study, and 58 patients (64.4%) were positive for 13 pathogens by FARP, with 18 detected as positive for multiple organisms (31.3%, 18/58). Human rhinovirus/enterovirus (21.0%, 17/58) was the predominant pathogen, followed by adenovirus (18.5%). Higher proportions of various pathogens were identified in the infant and toddler (0-2 years) groups, with human rhinovirus/enterovirus being the most common virus. Adenovirus was common in the group aged 3-5 years, but only three pathogens (M. pneumoniae, respiratory syncytial virus, and adenovirus) were found in the 6-14 years age group. Among the 58 FARP-positive patients, significant differences were found in antibiotic prescription and glucocorticoid use between the single-organism-positive group and the multi-organism-positive group (P < 0.05). Furthermore, there was a significant difference in antiviral use and glucocorticoid use between the severe and non-severe respiratory infection groups (P < 0.05).

Conclusions
This study demonstrated that FARP can provide rapid detection of respiratory viruses and atypical bacteria in children, especially those with severe respiratory tract infections.

Background
Respiratory tract infections (RTIs), including pneumonia, represent major infectious diseases in children, with high morbidity and mortality, and are mainly caused by a range of bacteria and viruses [1,2]. Previous estimates found that RTIs caused more than 2.6 million deaths worldwide in 2013, making them the fifth leading cause of death overall and the leading infectious cause of death in children younger than 5 years [3]. Culture and antigen/antibody assays are the conventional methods for detecting infectious pathogens, but their low sensitivity and long turnaround times limit their clinical application. Therefore, the introduction of a rapid, sensitive, and specific diagnostic tool is urgently required for epidemiological surveillance and for understanding the clinical characteristics of RTIs. More recently, advances in polymerase chain reaction (PCR) techniques have aided the rapid and accurate detection of infectious pathogens, which is beneficial for the precise selection of therapeutic schemes in children [4][5][6]. The FilmArray respiratory panel (FARP) is a multiplexed, fully automated nested PCR assay that can detect seventeen common respiratory viruses and three atypical bacterial pathogens with a turnaround time of approximately 1 h [7]. Previous studies have shown that the FARP assay has excellent clinical utility compared with the more traditional laboratory methods of virus culture and direct antigen tests [8][9][10].
Owing to its sensitive detection of respiratory viruses, more and more clinical laboratories have introduced this technique to help clinicians solve intractable cases. Data on the application of FARP in children are still limited. The goals of the present study were to retrospectively describe the clinical performance of FARP in children with RTIs and to further characterize the clinical effect of FARP in children with severe conditions.

Study design and specimens
This study was conducted in a children's specialized hospital between July 1st, 2017 and June 30th, 2018. Children in the pediatric intensive care unit (PICU) diagnosed with RTIs were screened. The diagnostic criteria for respiratory infections were as follows: (1) patient with or without fever (defined as body temperature ≥ 37.5°C); (2) patient with at least one of the following clinical symptoms: cough, nasal obstruction, tachypnoea, nasal flaring, or hypoxia [11]. In these children, the etiology of the RTIs could not be determined by conventional PCR, so the FARP assay for 17 pathogens was applied for further detection; all such children were enrolled into the study. The diagnosis of severe RTIs required at least one of the following: poor general condition, unabating high fever, anorexia or dehydration, disturbance of consciousness, hypoxemia, cyanosis, dyspnea, radiological confirmation of multilobar involvement, pleural effusion, or extrapulmonary complications. This study was approved by the Ethics Committee of Shanghai Children's Hospital. Written informed consent was obtained from the patients' guardians on behalf of the children enrolled in this study. Nasopharyngeal secretion (NPS) samples were collected from the enrolled children by clinicians using standard technique and immediately placed in viral transport medium (VTM). Specimens in VTM were processed and tested as soon as possible; when storage was required, specimens were held at refrigerator temperature (2-8°C) for up to 3 days. The FARP assay for 17 pathogens and routine microbiological methods, including direct fluorescence assay (DFA) and culture, were performed simultaneously on the collected NPS samples.

FARP assay
The FARP assay was performed by multiplex PCR according to the manufacturer's instructions (BioMérieux, France) [12]. In short, 1 mL of hydration solution and 300 μL of NPS sample buffer were injected into the FilmArray pouch. The loaded pouch was then placed into the FilmArray instrument, and a preprogrammed run was started. The FilmArray pouch procedure included specimen extraction, nested multiplex PCR (nmPCR), and result interpretation. The following organism types and subtypes can be identified: adenovirus, coronavirus 229E, coronavirus HKU1, coronavirus NL63, coronavirus OC43, human metapneumovirus, human rhinovirus/enterovirus, influenza A H1, influenza A H1 2009, influenza A H3, influenza B, parainfluenza virus 1, parainfluenza virus 2, parainfluenza virus 3, parainfluenza virus 4, respiratory syncytial virus, Bordetella pertussis, Chlamydophila pneumoniae, and Mycoplasma pneumoniae. However, human rhinovirus and human enterovirus must be reported as indistinguishable, since they are closely related viruses and cross-positivity between them is possible with the FARP assay [13].
Other common methods
Eight viruses - adenovirus, influenza A, influenza B, parainfluenza virus 1, parainfluenza virus 2, parainfluenza virus 3, respiratory syncytial virus, and human metapneumovirus - were detected by direct fluorescence assay (DFA) according to the manufacturer's instructions (Diagnostic Hybrids, Inc., USA). M. pneumoniae antibody was analyzed by passive particle agglutination, and B. pertussis was detected by culture. Other pathogens could not be identified in our clinical laboratory.

Statistical analysis
All statistical analyses were performed using SPSS for Windows (version 22.0; SPSS Inc., Chicago, IL, USA). Clinical testing and the FARP assay were compared using the exact two-sided McNemar's test. A value of P ≤ 0.05 was considered statistically significant.

Clinical characteristics of enrolled patients
A total of ninety patients from the PICU diagnosed with RTIs were enrolled into this study. The average age of all children was 2.55 ± 2.93 years, with 52 male and 38 female children. The children, aged between 0 and 14 years, were divided into four groups: infants (0-1 year, 34.4%), toddlers (1-2 years, 32.2%), preschoolers (3-5 years, 24.4%), and children (6-14 years, 8.9%). Moreover, 48 children (53.3%) were diagnosed with severe RTIs, and 46.7% of them had underlying diseases, including heart disease and intestinal diseases. Furthermore, the majority of children improved after treatment during hospitalization, and 3 children died. The general characteristics of these patients are presented in Table 1. The distribution of pathogens detected by the FARP assay is summarized in Table 2. Several differences were detected among age groups. Higher proportions of various pathogens were identified in the infant and toddler (0-2 years) groups, with human rhinovirus/enterovirus being the most common virus. Adenovirus was common in the group aged 3-5 years, but only three pathogens (M. pneumoniae, respiratory syncytial virus, and adenovirus) were found in the 6-14 years age group. The distribution of multi-organism combinations is depicted in Table 3. A total of 18 children with multiple organisms were detected, with 13 different combination types. The combinations of human rhinovirus/enterovirus plus parainfluenza virus 3 and of adenovirus plus respiratory syncytial virus were the most common. Additionally, the majority of multi-organism-positive patients were observed with adenovirus and human rhinovirus/enterovirus.

Comparison of FARP and other methods
The FARP assay was significantly more likely to detect a respiratory virus than the DFA assay (P < 0.05). Among the ninety children, 58 (64.4%) were identified with 13 pathogens by the FARP assay, while only 11 children (12.2%) were found to have 5 viral pathogens (adenovirus, influenza A, respiratory syncytial virus, parainfluenza virus 1, and parainfluenza virus 3) by the DFA method. Of these 11 DFA-positive samples, 7 were found to contain multiple viral pathogens by the FARP assay. Furthermore, FARP results for the NPS samples were available within 1.7 h, a shorter turnaround time (TAT) than the 5.2 h of the DFA method. Seven samples were positive for M. pneumoniae by FARP analysis, while only one sample was positive by the antibody assay.

Clinical significance of pathogens by FARP assay
The detailed clinical significance for the 58 FARP-positive children is shown in Table 4 and Table 5. Among the 58 FARP-positive children, 38 (65.5%) were diagnosed with severe RTIs.
According to the number of organisms detected, these children were divided into two groups: the single-organism-positive group and the multi-organism-positive group (Table 4). There was no significant difference in the length of hospitalization stay, hospitalization cost, antiviral use, rate of secondary infection, or clinical outcome (P > 0.05), while significant differences were observed in the days of antibiotic use and glucocorticoid use between these two groups (P < 0.05). Furthermore, there was a significant difference in antiviral use and glucocorticoid use between the severe and non-severe respiratory infection groups (P < 0.05) (Table 5).

Discussion
Over the past decades, RTIs have been among the most common diseases in children less than 5 years of age, with the majority occurring in low- and middle-income countries. The etiology of RTIs is usually attributable to viruses and bacteria, including influenza virus, respiratory syncytial virus, B. pertussis, S. pneumoniae, and H. influenzae [14,15]. In general, infrequently isolated pathogens have been found in several severe RTI cases, and viruses, including adenovirus, respiratory syncytial virus, and metapneumovirus, were considered the leading cause [16]. This study showed that human rhinovirus/enterovirus was the most common virus, and in some patients rhinovirus was the only virus identified. However, the role of rhinoviruses in serious respiratory infections remains controversial. Several researchers have found that rhinovirus was the most prevalent virus in asymptomatic carriers, with rates ranging from 12 to 32%. A review conducted by Jacobs SE et al. demonstrated that, with the increasing implementation of PCR assays for respiratory virus detection in clinical practice, the recognition of rhinovirus as a lower respiratory tract pathogen has been facilitated, particularly in patients with asthma, infants, elderly patients, and immunocompromised hosts; more data have emerged on the high incidence of rhinovirus infection, raising awareness of its widespread and sometimes serious disease manifestations [17]. Furthermore, respiratory viruses are responsible for bronchiolitis and pneumonia and can also lead to a considerable economic burden in terms of medical visits. In addition, atypical respiratory pathogens, including B. pertussis, M. pneumoniae, and C. pneumoniae, are emerging respiratory pathogens and have become a public health problem in many countries. Several studies indicate that the clinical symptoms of atypical respiratory infections are indistinguishable from those of viral respiratory infections and that co-infection with viruses also occurs [18]. Recently, simultaneous infection with multiple pathogens has been increasingly recognized as both common and important for disease manifestation. This study found that 18 children had two or more organisms, with human rhinovirus/enterovirus plus parainfluenza virus 3 and adenovirus plus respiratory syncytial virus being the most common combinations; this may make treatment more difficult. Co-infections between viral and bacterial isolates have also been detected in pediatric patients with RTIs [19], and this phenomenon was observed in 4 samples with B. pertussis or M. pneumoniae plus a virus.
Viruses and atypical organisms, however, are truly difficult to detect by traditional culture methods, owing to their long culture period and low sensitivity. The FARP assay, a newer technology, has been introduced for detecting otherwise unidentified pathogens in respiratory samples. Previous studies demonstrated that the FARP assay, which can simultaneously identify 17 viral and 3 atypical bacterial targets, performs well for this purpose. Moreover, it has been proposed that early diagnosis of pathogens in children with RTIs could decrease the length of hospital stay and reduce mortality, especially for multiple-organism infections. Generally, antibiotics have been commonly prescribed for many children with RTIs. When samples tested positive for pathogens by the FARP assay (within 1.7 h), clinicians could immediately adjust the therapeutic schedule for the children. According to the clinical data of these patients, we found that children identified with viral infections received, or had prolonged, antiviral therapy, and inappropriate antibiotic use was reduced during this process. A previous study reported that the mean duration of antibiotic use was significantly shorter after implementation of the FARP assay than before [21]. Furthermore, the length of hospital stay and hospitalization cost in the single-organism-positive group were still higher than those in the multi-organism-positive group, although there was no statistically significant difference between the two groups.

Conclusions

In conclusion, our study revealed that the FARP assay can detect respiratory viruses and atypical bacteria in children that cannot be detected by conventional methods. Compared with the DFA assay, the FARP assay provides rapid detection of a wide range of respiratory organisms within 1.7 h and is an especially valuable option for urgent pathogen identification in high-risk patients with severe respiratory infections. However, a limitation remains in that the FARP pathogen spectrum does not include all pathogens. Therefore, combining FARP with other molecular methods could significantly improve diagnostic testing for respiratory pathogens.
UK Pharmacy Students' Opinions on Mental Health Conditions

Objective: Given the recent increased emphasis on mental health awareness, this study aimed to ascertain future pharmacists' opinions on mental health conditions (and investigate the influence of gender), since they would soon be advising patients about this in their capacity as healthcare professionals.

Methods: Following ethical approval and piloting, all final year Master of Pharmacy students at Queen's University Belfast were invited to complete a paper-based questionnaire during a compulsory class. Section A was an adapted version of a United Kingdom (UK) public opinion questionnaire on mental health ('Attitudes to Mental Illness'), largely consisting of rating questions. Section B gathered non-identifiable demographic data. Descriptive statistics were undertaken; the Mann-Whitney U test and Chi-square test were used for gender comparisons with significance set at p<0.05.

Results: An 89% (97/109) response rate was obtained. Most considered that pharmacological and non-pharmacological measures were beneficial in the management of mental health conditions (89% and 96%, respectively) and that people with mental illness had the same rights to jobs as anyone else (82%). However, only 57% of student respondents felt confident discussing mental health issues with patients and 36% deemed university training to be satisfactory. Males were more likely than females to 'agree strongly' or 'agree slightly' that they would not want to live next door to someone who has been mentally ill (p=0.01).

Conclusion: While some positive opinions were evident, more work is needed to prepare these future pharmacists for roles within mental health care teams.

INTRODUCTION

The National Institute for Health and Care Excellence (NICE) states that common mental health disorders, such as depression, generalized anxiety disorder, panic disorder, obsessive-compulsive disorder, post-traumatic stress disorder and social anxiety disorder, may affect up to 15% of the United Kingdom (UK) population. 1 According to the National Institute of Mental Health (NIMH), the prevalence of adults in the United States of America (USA) with any mental illness is 17.9% (2015 data). 2 However, as many individuals do not necessarily seek medical help, mental health conditions can be undiagnosed and underreported. 5,6 While the severity of individual mental health disorders varies, all can be associated with significant long-term concerns. For example, depression is associated with significant morbidity and mortality and is the most common disorder contributing to suicide. 1

Research investigating attitudes towards mental illness has been conducted at population level among the general public (such as in Australia, 7 the UK 8 and the USA 9 ), and media campaigns strive to raise awareness and dispel myths. 10 Work has also been conducted to investigate various university students' mental health 11-13 and their attitudes towards mental illnesses (involving medical, 14,15 nursing 16 and pharmacy students 17-22 ). Unfortunately, views held by future healthcare professionals towards mental illness have not always been appropriate. For example, Bell and colleagues' questionnaire study conducted in six different countries (Australia, Belgium, Estonia, Finland, India and Latvia) revealed that pharmacy students' attitudes towards people with mental health illnesses (schizophrenia and severe depression) were sub-optimal. 18
The aim was to investigate Queen's University Belfast (QUB) Level 4 (i.e., the final year of the degree program) pharmacy students' opinions on mental health. Specifically, the objectives were to investigate their attitudes towards mental health conditions and to determine whether gender affected responses.

To the best of the authors' knowledge, there has been limited work in this area involving pharmacy students, particularly in the UK, and no work specifically conducted in Northern Ireland. This research adds to the existing body of literature by providing useful baseline data from a UK context. Moreover, it is important to ascertain pharmacy students' opinions on mental health conditions, given that they will be advising and counselling patients about this important clinical subject area in their capacity as healthcare professionals (and since the number of people affected by mental health conditions continues to grow 1 ). Importantly, it is anticipated that the findings of this research will inform future teaching of the subject matter. From a pharmacy education standpoint, "developing a clinical knowledge base that culminates in the demonstrated ability of learners to apply knowledge to practice" and preparing students "to provide patient-centered collaborative care" are stipulations in the Accreditation Council for Pharmacy Education (ACPE) Accreditation Standards and Key Elements for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree. 23 These are also reiterated in the UK Master of Pharmacy (MPharm) accreditation standards. 24

METHODS

Ethical approval for this work was obtained from the School of Pharmacy Ethics Committee at QUB (Ref 025PMY2016; Nov 22, 2016). All Level 4 (final year) MPharm students at QUB were invited to participate in the study. The inclusion criterion was that study participants had to be currently enrolled Level 4 students. Level 4 students were selected because they had been taught about mental health conditions prior to the research being conducted (unlike the other year groups) and also because they were soon to graduate from university and begin their careers in pharmacy practice.

Data were collected by means of a paper-based, self-completed questionnaire. The questionnaire was developed with reference to the wider literature 7-18 and consisted of two sections. Section A was an adapted version of the 'Attitudes to Mental Illness' UK public opinion questionnaire, 8 consisting of many attitudinal statements measured using a five-point Likert scale (Agree strongly/Agree slightly/Neither agree nor disagree/Disagree slightly/Disagree strongly) and, on occasion, a seven-point scale (Very uncomfortable to Very comfortable). The Attitudes to Mental Illness questionnaire was developed and funded by the Department of Health and includes items from the 'Community Attitudes toward the Mentally Ill (CAMI)' scale and the 'Opinions about Mental Illness' scale. 8 Other statements were included about confidence counselling patients on mental illness and training provision within the degree program. Section B related to demographic information and gathered non-identifiable data only.

To maximize response rates, the questions were largely in a close-ended format. 25 The questionnaire was piloted with ten pharmacist postgraduate students at the School in November 2016. As a result, one minor modification was made (wording was amended for one question to clarify that respondents could select as many options as they wished).
Questionnaire distribution took place during Semester 1 (in December 2016) in a compulsory class. In January 2017, the responses from the completed questionnaires were coded and entered into a customized database developed in IBM SPSS v22 (SPSS Inc., Chicago, IL) for statistical analysis. Data analysis largely took the form of descriptive statistics. Interpolated median scores were calculated for the rating questions. Comparisons were made between gender responses, as previous work revealed differences in opinions. 12,14 The Mann-Whitney U test and Chi-square test were used for gender comparisons with significance set at P<0.05 a priori. The Mann-Whitney U test was performed on ordinal data, whereas the Chi-square test was performed on categorical (nominal) data.

RESULTS

The mean age of the year group was 22.8 years. Before commencing the MPharm degree program at QUB, the majority of students had received their education in the UK and Ireland (83%, 80/97), with some from Asia (10%, 10/97). Others (7%, 7/97) did not disclose this information.

Students were asked about the person closest to them who has/had some kind of mental illness, and were instructed to select one option from a list. The top three most popular selections were: a friend, no one known, and immediate family. In relation to the hypothetical statement about how likely they would be to go to a doctor for help if they felt they had a mental health problem, responses were: 13/97 (13%) 'very likely', 43/97 (44%) 'quite likely', 13/97 (13%) 'neither likely nor unlikely', 22/97 (23%) 'quite unlikely' and 6/97 (6%) 'very unlikely'. The interpolated median was 3.7 (where 5 equated to 'very likely' and 1 to 'very unlikely').

In another hypothetical statement, students were asked to rate how comfortable they would feel talking to a friend or family member about their mental health, for example, telling a friend or family member that they (the student) had a mental health diagnosis and how it affected them. Respondents had to rate their answer from 1 (very uncomfortable) through to 7 (very comfortable). The interpolated median for this statement (n=97 respondents) was 4.4. Similarly (and using the same scale), students had to rate how comfortable they would feel talking to a current or prospective employer about their mental health. The interpolated median for this statement (n=97 respondents) was 2.1.

Regarding statements about future relationships, students had to indicate their level of agreement [1 ('disagree strongly') to 5 ('agree strongly')]. The interpolated medians were: willingness to continue a relationship with a friend who developed a mental health problem (4.9), to live with someone with a mental health problem (4.5) and to work with someone with a mental health problem (4.7).
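The interpolated medians quoted above treat each rating category as a unit-width interval and interpolate linearly within the category containing the middle observation. A minimal sketch of that calculation follows; fed with the doctor-help frequencies reported above (coded 1 to 5), it reproduces the reported value of 3.7.

```python
def interpolated_median(counts):
    """Interpolated median for ordinal rating data.

    counts: frequencies for categories 1..k. Each category i is treated
    as the interval (i - 0.5, i + 0.5); the median is interpolated
    linearly within the category holding the N/2-th observation.
    """
    half = sum(counts) / 2.0
    cumulative = 0
    for category, freq in enumerate(counts, start=1):
        if freq > 0 and cumulative + freq >= half:
            return (category - 0.5) + (half - cumulative) / freq
        cumulative += freq

# 'very unlikely' .. 'very likely' frequencies from the results above
print(round(interpolated_median([6, 22, 13, 43, 13]), 1))  # -> 3.7
```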
When asked about the number of people in the UK who have a mental health problem at some point in their lives, only 35/97 (36%) respondents selected the correct answer (1 in 4 people). Furthermore, students had to indicate their level of agreement [1 ('disagree strongly') to 5 ('agree strongly')] about various conditions, such as stress, grief, depression and drug addiction, being types of mental health conditions. The results for this question are presented in Figure 1. Moreover, 23/97 (24%) thought a person was mentally ill if they were incapable of making simple decisions about his/her life, and 34/97 (35%) thought a mentally ill person could not be held responsible for his/her own actions.

Other attitudinal statements on mental health conditions and the corresponding results are provided in Table 1. While the majority (93%) considered that virtually anyone could become mentally ill, about one fifth of respondents (22%) thought there was something about people with mental illness that made it easy to tell them apart from 'normal' people. Positive opinions were held by most with regard to people with mental illness having the same rights to jobs as anyone else (82%) and not being excluded from holding public office positions (74%). Most thought that pharmacological and non-pharmacological measures were beneficial in the management of mental health conditions (89% and 96%, respectively) and that services should typically be provided via community-based facilities where possible (86%). However, just over half (57%) felt confident talking about mental illness with patients and only 36% felt that their university training was adequate. About 6 out of every 10 respondents knew what advice to give a friend with a mental health problem so that they could get professional help. Moreover, in relation to the statement 'I would not want to live next door to someone who has been mentally ill', males were more likely than females to 'agree strongly' or 'agree slightly' [8/39 (21%) versus 6/58 (10%), p=0.01]. However, given the small numbers involved, this result should be interpreted with caution. Females were more likely to 'disagree strongly' or 'disagree slightly' that people with mental health problems should be excluded from taking public office [48/58 (83%) versus 23/38 (61%), p=0.01].

Lastly, when asked whether people with mental illness experienced stigma and discrimination nowadays because of mental health problems, 51/97 (53%) selected 'yes, a lot', 46/97 (47%) selected 'yes, a little' and no one selected 'no'.
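For a collapsed two-category view, the gender comparisons above reduce to a 2x2 Chi-square test. The sketch below is illustrative rather than a reproduction of the authors' exact analysis: it collapses the public-office item into "disagreed" versus "did not disagree" using the counts reported above, and the uncorrected Chi-square p-value comes out close to the reported p=0.01.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: females, males. Columns: disagreed, did not disagree.
# Counts for the public-office statement as reported above.
table = np.array([[48, 10],
                  [23, 15]])

# correction=False gives the plain (uncorrected) Pearson Chi-square.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p is approximately 0.015
```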
DISCUSSION

Unlike other research, 12,14 gender played a limited role in this current study, as there were few significant differences between male and female responses. Only a small percentage (5%) of students reported having a mental illness, which is a much lower prevalence than that previously reported in the literature, for example, by Goodwin and colleagues 11 for first year undergraduate university students, Alfaris and colleagues 12 for health professions' university students (and in particular female students), and Payakachat 13 and Panthee 19 and colleagues for pharmacy students. Thankfully, the majority of respondents suggested they would be comfortable talking to friends and family about a mental health condition (if applicable), although over 40% appeared reluctant to seek medical help from a doctor. Moreover, the majority of respondents disagreed that most people with mental health problems go to a healthcare professional to get help. In the UK, the doctor is a key medication provider in primary care and a gateway to other mental health services. Similarly, previous work by Reavley and colleagues 26 in Australia revealed that 16-24 year olds were less likely to seek help for mental health issues than middle-aged or older adults. Furthermore, Downs and Eisenberg 27 concluded that the people who need professional support the most are actually the least likely to source it. Ultimately, pharmacy educators cannot assume that students pursuing healthcare degrees are looking after their own health adequately. We found this previously in relation to alcohol intake: the mean intake of alcohol was 18.3 units per week (exceeding the recommended UK amount), with around 70% of pharmacy students reporting binge drinking on at least one day of the week. 28 In the QUB School of Pharmacy, a 'mental health first aid' scheme has just been launched whereby a cohort of students across the year groups are trained to spot warning signs of mental health issues among their peers and signpost them to appropriate sources of help. It would be useful to evaluate the impact of this and to conduct further research to ascertain the types of professional support that students do, or would consider, accessing in relation to mental health issues, and the barriers towards seeking help.

In this current study, respondents did not appear particularly comfortable talking about personal mental health issues with employers; the score was much greater in relation to friends and family. It is difficult to draw meaningful conclusions from this, as people are probably less likely to talk to employers (than to family and friends) about personal issues anyway. However, it could be related to concerns about stigma and discrimination, since all students also thought that stigma and discrimination associated with mental illness exist today, and most considered that we (society) should adopt a far more tolerant attitude toward people with mental illness. Northern Ireland may be slower to adopt appropriate attitudes towards mental health than other countries. For example, a public awareness campaign rolled out across various parts of Europe by Kohls and colleagues 29 gleaned less positive results in Ireland compared with Germany and Portugal. That being said, many of our students' attitudes towards mental health in this current study were positive and appropriate, unlike those previously reported by Bell and colleagues for pharmacy students. 18
The majority of the students in this current study thought that people with mental health problems should have the same rights to a job as anyone else (including being given responsibility and public office positions), and seemed comfortable with the concept of living next door to, and having future relationships with, people who had mental health problems.

With reference to the training provided and knowledge of the subject area, many student respondents correctly thought that virtually anyone could become mentally ill, and all identified common mental illnesses (schizophrenia, depression and bipolar disorder) as being such. However, only about 1 in every 2 respondents felt confident talking about mental illness with patients, and 6 in every 10 knew what advice to give a friend with a mental health problem. Only 1 in every 3 respondents considered the university training to be adequate. These findings, among others, suggest that the current training provision (a lecture series) within the School of Pharmacy is potentially not enough to adequately prepare future pharmacists for practice. Similarly, Aaltonen and colleagues 30 investigated perceived barriers among pharmacy students in relation to providing medication counselling for people with mental health disorders (in Australia, Belgium, Estonia, Finland, India and Latvia; n=649 respondents) and concluded that more work is needed within pharmacy education programs. Furthermore, in this current study, some misconceptions exist (in relation to being able to tell mentally ill patients apart from others).

The study has several weaknesses. The research was only conducted within one year group at one school of pharmacy, and therefore the findings are not generalizable. It is also possible that attitudes could change depending on when in the semester the study was conducted (for example, knowledge of mental health conditions could be better after revising for a clinical examination than beforehand, and attitudes might change after working in practice, as has been reported previously for pharmacy students 20 ).

CONCLUSION

Many of these future pharmacists appeared to have appropriate attitudes towards mental health and people with mental health conditions. Gender seemed to have very limited influence on attitudes. However, there was a lack of confidence around advice provision to friends and patients, and a level of dissatisfaction with the current training provision. The reluctance to seek medical help if they ever developed a mental illness probably mirrors the view held by many members of the general public, but is perhaps surprising given that these students are future healthcare professionals.

The work adds to the field and provides us with a timely opportunity to reflect on current teaching and make changes to our educational practice. From the findings, it seems that our mental health education is not at an appropriate level to adequately prepare these students for practice. Additionally, this study should provide useful baseline data for other schools of pharmacy in the UK and potentially beyond. Future research should focus on exploring whether having personal mental health issues subsequently affects advice provision to patients, and should also evaluate the impact of introducing specific mental health awareness training into the program.

Table 1. Respondents' Views on Various Attitudinal Statements Relating to Mental Illness (N=97, unless otherwise stated)
Numerical analysis of slotted aerospike for drag reduction

Two of the important criteria for designing high-speed vehicles are drag reduction and aerodynamic heating. Plenty of methods are available; however, finding an economical and simple method for drag and heat flux reduction is very challenging. In this paper, a forward-facing aerospike for blunt-nosed bodies at supersonic and hypersonic Mach numbers is introduced and tested for drag reduction. Initially, the flow fields are studied around the blunt cone with and without an aerospike. In addition, different spike shapes and L/D ratios flying at supersonic Mach 2 are computed numerically. The computational simulations solve the three-dimensional steady Reynolds-averaged Navier-Stokes equations along with the k-ω turbulence model. After comparing the flow properties around the various aerospike models, we found that the flat aerodisk produces a drastic pressure reduction on the blunt nose cone but a relatively smaller heat flux reduction. Consequently, a slotted aerodisk design modification has been analysed, which uses convective flow to reduce heat flux.

Introduction

Generally, blunt geometries are used in high-speed missiles, rockets, capsules, etc. in order to accommodate a larger payload and to withstand the severe thermal loads exerted on the forebody, which lead to the formation of a shock wave. Aerodynamic drag and aerodynamic heating lead to reduced performance and erosion of the surface. Methods such as a breathing nose, energy deposition techniques and a spike-tipped nose have been adopted to suppress aerodynamic heating problems. Among all these methods, the application of a forward-facing aerospike to blunt-nosed bodies at supersonic and hypersonic Mach numbers is the most economical [1]. An aerospike transforms the strong bow (detached) shock wave into weaker shock waves followed by a recirculation, or dead-air, region. The extent of this dead-air region depends on the shape of the tip of the spike. The forebody then faces lower pressure due to the recirculating air, and hence drag is reduced. The flow field between the tip of the aerospike and the main body depends on the freestream flow conditions, the shape of the body and the geometry of the spike. A study of sharp spikes of different lengths at hypersonic Mach 6.8 showed reductions in drag and heat flux [2]. Further, several notable works on hemispherical bodies with different spikes at hypersonic speeds also observed low drag [2,3]. Computational studies have been carried out using the Baldwin-Lomax and k-ω turbulence models [6-8]. The flow field around a body with an aerospike differs at hypersonic Mach numbers owing to the shock strength and the separation zone along the spike. The literature also reports the flow field around blunt bodies with different kinds of aerospikes at supersonic speeds, including elliptical domain shapes [9-16]. Manigandan et al. recently showed in notable work that an elliptical profile has a greater advantage than other profiles. Major studies have been carried out on sharp or blunt aerospikes; studies with a spherical-tip aerodisk at hypersonic speed, which show a reduction in drag in comparison with a sharp spike, are very limited. Implementing the k-ω turbulence model at Mach 2 with sharp and hemispherical blunt-head spikes reduced drag by up to 68%.
The combination of a jet with the aerospike concept at supersonic speed has been proposed as more efficient for drag and heat reduction. Rahul et al. simulated a ballistic missile at Mach 6, examining the performance of a conical aerospike for different length-to-diameter ratios and its effect on the aerodynamic forces. They found that the angle of attack is proportional to the drag on the body and to the pitching moment. Schnabel et al. studied the pressure distribution on an aerospike nozzle in rotating engines; the nozzle had been designed to examine the effect of the pressure distribution. Dumitrescu et al. studied a plug nozzle to estimate nozzle performance from sea level upwards. They chose three plugs of truncated length 40%, 50% and 60%, and from the results they concluded that altitude and temperature are very important parameters for optimal performance. From the above studies, it is clear that drag depends on the shape and size of the spike. Material properties are neglected in the simulations; for reference purposes, a composite material was taken as the spike material [17-24].

Simulation Methodology

The simulation is carried out on a blunt hemispherical body of base diameter (D) 14 mm and length 1.5D, as shown in figure 1. Three different types of spikes were used for the analysis, each with a basic stem diameter of 2 mm. A semi-cone angle of 10-12 degrees was given to the sharp aerospike, and the flat-head aerodisk (aerospike) had a diameter of 0.3D with a flare angle of 120 degrees. Three different L/D ratios (0.75, 1, 1.5) of each model were studied to uncover the optimum solution. The flat-head aerodisk configuration is believed to be the optimum solution; hence a slot of diameter 0.2 mm was made around the stem throughout the disc, as shown in figure 1. In order to save computation time, 2D computations have been used. The platform Star-CCM+, which uses a finite volume approach to solve the compressible Reynolds-averaged Navier-Stokes (RANS) equations, was used to simulate the problem. The assumptions made for the simulation are a steady-state solution and axisymmetric, explicit coupled computations using the k-ω turbulence model. The k-ω turbulence model was arrived at after carrying out sensitivity tests, convergence checks and comparison of the experimental and numerical results.

Results and Discussion

The different types of aerospike studied at Mach 2 are shown in figures 3 to 9.

Effect of spike on flow field

The flow field characteristics around the model with and without an aerospike or slotted aerodisk are studied. Due to the presence of an adverse pressure gradient, flow separation takes place. In addition, a large region of recirculating fluid is formed along the spike length. Consequently, the flow can be considered a superimposition of two flows: the external freestream flow and a recirculation flow. The recirculation zone depends on the L/D ratio and geometry of the aerospike models. In our study, we have taken three geometries for the aerospike: a sharp spike, a flat disk aerospike and a hemispherical-head aerospike. Different kinds of shocks form on the various models: for example, an oblique shock wave forms on the sharp spike, whereas a bow shock forms on the flat-head disk [25-27]. The extent of recirculation is also dependent on the nature of the shock waves.
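The bow-versus-oblique distinction above is what drives the forebody pressure. As a minimal sketch (not taken from the paper, and assuming a calorically perfect gas with γ = 1.4), standard normal-shock relations show how much stagnation pressure stands behind the detached bow shock on a blunt nose at the study's Mach 2:

```python
def pitot_ratio(M, g=1.4):
    """Rayleigh pitot formula: stagnation pressure behind a normal shock
    divided by freestream static pressure."""
    a = ((g + 1)**2 * M**2 / (4*g*M**2 - 2*(g - 1)))**(g / (g - 1))
    b = (1 - g + 2*g*M**2) / (g + 1)
    return a * b

def isentropic_ratio(M, g=1.4):
    """Stagnation-to-static pressure ratio for shock-free (isentropic) flow."""
    return (1 + 0.5*(g - 1)*M**2)**(g / (g - 1))

M = 2.0
print(f"behind bow shock: p0/p_inf = {pitot_ratio(M):.2f}")       # ~5.64
print(f"isentropic:       p0/p_inf = {isentropic_ratio(M):.2f}")  # ~7.82
```

The blunt nose therefore sits in a region of high post-shock pressure; replacing the detached bow shock with the spike's weaker shock system and a low-pressure recirculation zone is what yields the pressure drops reported in the next subsection.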
The slot modification in the aerodisk increases the number of recirculating fluid regions. The freestream flow passes through the slotted hole and then forms a series of wakes and vortices right behind the rear part of the slotted aerodisk, followed by a primary recirculation region. As the length of the slotted aerospike increases, the extent of the wake increases, which may result in structural fatigue. Figures 3 to 7 show the variation of the flow field. Among the several slotted configurations, figure 7 shows better mixing compared with the other spikes.

Effect of spike on surface pressure

The tip of the aerospike forms a shock wave and leads to the formation of a recirculation region around the root of the aerospike. Figures 8 and 9 show the pressure distribution. This recirculation region acts as a streamlined body, which reduces drag. The pressure drop between the aerospike and the blunt body is directly proportional to the L/D ratio of the spike. In our study, we have taken three geometries for the aerospike: a sharp spike, a flat disk aerospike and a hemispherical-head aerospike. In comparison with the blunt nose cone, the peak pressure drops for the sharp spike, hemispherical spike and flat disk aerospike are 12.74%, 28.79% and 82.72%, respectively. The largest pressure drop, 82.72%, is found with the flat disk aerospike. Hence, we have chosen the flat disk aerospike as the best design, which is further modified into a slotted flat disk aerospike because of the aerodynamic heating problem. The peak pressure drop for the slotted aerodisk is 63.81% compared with the blunt nose design. Thus, due to the introduction of a slot on the flat disk aerospike, there is an 18.91% increase in pressure in comparison with the flat disk spike. This increase in pressure for the slotted disk aerospike is due to the formation of a series of wakes or vortices right behind the rear part of the slotted aerodisk, followed by a primary recirculation region [24,25].

Effect of spike on surface heat flux

There is a minor change in heat flux around the roots of the aerospikes. In comparison with the blunt nose cone, the heat flux drops for the sharp spike and the hemispherical spike are 0.12% and 0.18%, respectively. The bow shock wave formed in front of the aerodisk results in stagnation-point heat fluxes higher than those for the hemisphere-cylinder without an aerodisk: there is an increase of 2.91% in heat flux for the flat disk aerospike compared with the blunt nose. We have chosen the flat disk aerospike in spite of this 2.91% increase in heat flux; the design is taken as optimum considering the fact that it shows the maximum pressure drop of all the models compared. Hence, the slot is introduced on the flat disk aerospike. The freestream flow passes through the slotted hole and then forms a series of wakes or vortices right behind the rear part of the slotted aerodisk, followed by a primary recirculation region. This freestream flow acts as a convection medium passing through the slot. Consequently, there is a decrease in heat flux for the slotted aerospike in comparison with the blunt nose cone due to the convection taking place: heat flux is decreased by 5.12% with the application of the slotted aerospike. The overall decrease in heat flux of the slotted aerospike in comparison with the flat disk aerospike is 8.03%.
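The percentage comparisons in this section all reference the blunt-nose baseline, so the cross-model figures follow by simple differencing. A short check of that arithmetic, using only the values reported above (all in percentage points):

```python
# Peak-pressure drops relative to the blunt nose
drop_flat, drop_slotted = 82.72, 63.81
print(f"pressure rise, slotted vs flat disk: {drop_flat - drop_slotted:.2f}")  # 18.91

# Heat-flux changes relative to the blunt nose (+ = rise, - = drop)
hf_flat, hf_slotted = +2.91, -5.12
print(f"heat-flux change, slotted vs flat disk: {hf_slotted - hf_flat:.2f}")  # -8.03
```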
Conclusion

The pressure and reattachment heat flux of a blunt body with different aerospike models were tested under k-ω turbulent conditions at an operating Mach number of 2. The numerical simulations conducted on aerospikes with three L/D ratios (0.75, 1.00 and 1.5) suggested that the pressure drop increases with the L/D ratio of the aerospike. In this study, we found that for the flat disk aerospike, which has a maximum pressure drop of 80.72%, the local heating at reattachment is always higher than the peak heating of the base model, by about 2.91%. Hence, we introduced a slot concentrically at the tip of the flat disk aerospike. This resulted in a 62.81% pressure drop and a 6.12% heat flux reduction due to the convection taking place through the slot from the freestream flow to the recirculation region. Conclusively, our slotted flat disk aerospike is the best model design, in which the reduction of pressure and of peak heat flux are balanced.
Expression Analysis of BDNF Gene and BDNF-AS Long Noncoding RNA in Whole Blood Samples of Multiple Sclerosis Patients: Not Always a Negative Correlation between Them

Multiple sclerosis (MS) is a chronic inflammatory disorder of the central nervous system (CNS), in which axonal damage is a deteriorative factor. Brain-Derived Neurotrophic Factor (BDNF) is described as a neuronal-survival gene, also capable of exerting pleiotropic effects on immune cells. Here, we aimed to investigate the expression levels of BDNF and its antisense RNA, BDNF-AS, in Iranian MS patients. Our case-control study was based on collecting 50 whole blood samples from relapsing-remitting MS patients and 50 from healthy controls; expression analysis of BDNF and BDNF-AS was then performed by real-time quantitative PCR. No significant difference in BDNF and BDNF-AS expression levels was seen between MS patients and controls (p>0.05). However, a significant and strong positive correlation was found between the expression levels of BDNF-AS and BDNF (r=0.785, p<0.0001). Further, significant positive moderate correlations of BDNF and BDNF-AS with other lncRNAs (GSTT1-AS1 and IFNG-AS1) and genes (TNF and IFNG) were revealed (p<0.0001). Additionally, there was no correlation between the BDNF and BDNF-AS expressions and disease duration, age at onset, or Expanded Disability Status Scale of Kurtzke (EDSS) (p>0.05). The strong and positive correlation between BDNF and BDNF-AS in MS patients is, based on previous studies, a quite novel finding and can be further discussed by future works to unravel its possible application in MS. We suggest evaluation of different leukocyte subsets separately, along with large cohort studies comprising a higher number of individuals of different ages, to unravel the effects of other possible aspects.

INTRODUCTION

Multiple sclerosis (MS) is a chronic inflammatory disorder of the central nervous system (CNS), including the brain, spinal cord and optic nerves, which is depicted by inflammatory demyelination and neurodegeneration. Commonly, MS manifests in young adults and develops towards increased disability over time. It is speculated that the main cause of MS is the infiltration of autoreactive T cells that detect and react against autoantigens. 1,2 Several genetic causes are known to contribute to the risk of MS. Meanwhile, environmental aspects might also provoke the onset in genetically predisposed individuals. 3 The neurobiogenesis pathway appears to be vital in the neurodegeneration process of MS. 4
Recently, new insights into CNS tissue damage and repair in the course of chronic inflammatory disorders have resulted from experimental evidence that some immune responses might play a neuroprotective role. 5,6 Brain-derived neurotrophic factor (BDNF) is described as a neuronal-survival and growth-promoting gene, also capable of exerting pleiotropic effects on the immune cells within the inflammatory lesions of MS patients. 7,8 Immunohistochemical studies, moreover, showed that BDNF, in the context of MS-related inflammatory reactions, is released not only by neurons but also by T cells, microglia, macrophages and reactive astrocytes. 9,10

Next, we focused on BDNF antisense RNA (BDNF-AS), which is identified as a naturally conserved long noncoding RNA (lncRNA). LncRNAs are known as principal regulators of genome expression, and their effects in various physiological and pathological processes have been widely studied. 11,12 In vivo and in vitro studies 13-15 have demonstrated that downregulation of BDNF-AS can inversely up-regulate BDNF, suggesting pro-neuronal effects in neuronal differentiation and outgrowth. 14 Altogether, here we aimed to investigate the expression levels of the BDNF gene and BDNF-AS lncRNA in Iranian MS patients.

Patient and Control Groups

Here, we recruited 50 Iranian sporadic MS patients (all relapsing-remitting type) and 50 sex- and age-matched control subjects (37 women and 13 men). Patients (38 females and 12 males) were diagnosed by the McDonald criteria and MRI (Magnetic Resonance Imaging) by expert neurologists. All patients had been treated with interferon (IFN)-β therapy for at least two years (intramuscular injection of 20 μg of CinnoVex [CinnaGen Co, Tehran, Iran] three times a week) and were categorized as IFN-β responders. 16,17 Samples were collected from the MS Society of Iran and some hospitals in Tehran. Importantly, HLA-DRB1*15, as a critical risk factor for MS, was ruled out in all patients. Anyone without a family history of autoimmune disease or cancer was eligible for inclusion in the control group.

Blood Sampling

The entire protocol and measurements of our study were in line with the guidelines of the Ethics Committee of Shahid Beheshti University of Medical Sciences (IR.SBMU.MSP.REC.1396.876). Blood samples were collected from all participants in this study. Both MS patients and healthy controls gave their informed consent for inclusion in this work. Afterwards, the clinical information of patients was obtained.

Quantitative Real-time PCR

Total RNA was isolated using the GeneAll Hybrid-R TM blood RNA extraction kit (cat No. 305-101, South Korea). Then, cDNA synthesis was carried out with the Applied Biosystems High-Capacity cDNA Reverse Transcription kit (PN: 4375575, USA) according to the manufacturer's instructions. Allele ID6 (Premier Biosoft, Palo Alto, USA) was utilized to design the primers (PCR product lengths and primer sequences are summarized in Table 1). Beta-2-microglobulin (B2M) was used as a reference gene in order to normalize the expression level for each sample. The SYBR Green-based real-time quantitative PCR assay was conducted in triplicate in a Corbett Rotor-Gene 6000 machine (Corbett Life Science, Australia). Routinely, an NTC (No Template Control) sample was included for each primer in each run.
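Relative expression in a design like this (target gene normalized to B2M, patient versus control) is conventionally computed with the Livak 2^-ΔΔCt method. The paper does not spell out its formula, so the following is a minimal sketch under that assumption, with made-up Ct values purely for illustration:

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative expression.

    Each argument is a mean threshold cycle (Ct): the target gene (e.g.
    BDNF) and the reference gene (e.g. B2M), in cases and in controls.
    """
    d_ct_case = ct_target_case - ct_ref_case   # normalize cases to B2M
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize controls to B2M
    return 2.0 ** -(d_ct_case - d_ct_ctrl)

# Hypothetical Ct values only; a ddCt of -1 doubles relative expression.
print(fold_change(ct_target_case=26.0, ct_ref_case=20.0,
                  ct_target_ctrl=27.0, ct_ref_ctrl=20.0))  # -> 2.0
```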
Statistical Analysis

We used SPSS version 18 (Chicago, IL, USA) for the statistical analyses. Differences between two groups were analyzed by the independent t-test, and the Pearson coefficient was utilized to study correlations between variables. Results were regarded as statistically significant if p values were <0.05. The Spearman correlation test was carried out to assess the possible correlation between BDNF-AS and BDNF relative expression levels.

RESULTS

The clinical information of both MS patients and healthy individuals is provided in Table 2. All subjects' results were evaluated in three categories: results for all subjects (regardless of sex and age), sex-related results (male or female) and age-related results (<30, 30-40 and >40 years). In this way, all patients were compared with healthy controls and independently analyzed with regard to sex and age.

Relative Expression Level of the BDNF Gene

The data of all MS patients displayed an elevated, though statistically not significant, level of BDNF expression compared with the control group (Table 3). Of note, the most considerable difference was for male patients >40 years (3.6-fold change) compared with controls.

Relative Expression Level of BDNF-AS LncRNA

The difference in BDNF-AS relative expression between the MS and control groups was not significant (p>0.05). BDNF-AS expression in male patients showed a two-fold increase; however, this increment did not reach statistical significance. The expression ratios of BDNF-AS are presented in Table 4.

Correlation Analysis of BDNF and BDNF-AS with the Expanded Disability Status Scale (EDSS)

The correlations of BDNF and BDNF-AS with EDSS were not statistically significant (r=-0.049, p=0.739 and r=-0.087, p=0.548, respectively).

Correlation Analysis of BDNF and BDNF-AS with Disease Duration

The correlations of the BDNF gene and BDNF-AS lncRNA with disease duration did not reach statistical significance in our analysis (r=0.025, p=0.861 and r=-0.056, p=0.698, respectively).

Correlation Analysis of BDNF and BDNF-AS with Age at Onset

Correlation analysis showed insignificant correlations between the expression levels of BDNF (r=0.121, p=0.405) and BDNF-AS (r=0.234, p=0.102) and age at onset of MS.

Correlation between the Expression Levels of BDNF and BDNF-AS

After Spearman correlation analysis, a significant and strong positive correlation was found between the expression levels of BDNF-AS and BDNF, as illustrated in Figure 1 (r=0.785, p<0.0001).

DISCUSSION

In this study we compared the expression levels of BDNF and BDNF-AS in 50 RR-MS patients versus 50 healthy subjects. We tried to match all participants completely in our sampling; to do so, samples were categorized with regard to sex and age. Also, patients who were HLA-DRB1*15 positive were excluded because of its major contribution to genetic susceptibility for MS. On the other hand, although the expression levels of BDNF and BDNF-AS have been investigated in different subtypes of blood leukocytes, 6,18-22 to the best of our knowledge no molecular study has so far been performed on whole blood samples comparing their expression levels. Of note, some studies have evaluated BDNF concentrations in serum or plasma, but all of them were restricted to protein assays such as ELISA. 19,20,23,24 Converging lines of evidence support BDNF as a candidate molecular effector in neuroprotective autoimmunity and inflammation, which is produced in peripheral blood and MS lesions by immune cells. 25 It has recently been reported to exert beneficial effects on experimental autoimmune encephalomyelitis (EAE), an animal model that histopathologically and clinically mimics MS;
however, BDNF has not been demonstrated to have any measurable effect on clinical progression or on quantitative magnetic resonance imaging (MRI) parameters. 26

In the current study, after our analysis, no statistically significant difference was seen in either the BDNF expression level or that of BDNF-AS between the patient and healthy-subject groups. Although there was an increased level of BDNF in all subgroups of patients, with considerable changes in the females 30-40 years old and males >40 years old subgroups (3.6- and 2.8-fold changes, respectively) compared with controls, none of them reached significance. Again, for BDNF-AS none of the patients' subgroups showed significance, even the subgroup of females <30 years old (3-fold change). This is well in line with the work of Lindquist and of Kalinowska-Łyszczarz, who found no quantitative change in BDNF protein in freshly isolated peripheral blood mononuclear cells (PBMCs) of MS patients, 27,28 and in contrast to studies exhibiting significant changes. 19,22 One of the most straightforward explanations for this might be the different sample sizes and the heterogeneity of MS, which in part contribute to these results. 29-31 It can further be proposed that the different mean age of our patients (as collected by random sampling) versus the mean ages in other studies may lead to different outcomes. For instance, Lommatzsch et al observed that BDNF levels in plasma were significantly down-regulated with increasing age. 23 Such conflicting results regarding altered BDNF expression in MS patients are most likely due to the inclusion of patients with subclinical disorder activity or different immunotherapies, and also in part due to the various biological materials analyzed. 27

The role of BDNF-AS1 as a potential biomarker of cancer development in human retinoblastoma has previously been described. 32 In particular, BDNF-AS levels were reported to negatively correlate with the level of BDNF mRNA. 14,33 It was found that BDNF is normally repressed by BDNF-AS. 25 LncRNAs are thought to regulate the expression of target genes through binding to complementary sequences, 34 and BDNF mRNA contains the BDNF-AS complementary sequences. However, caution is required in directly linking BDNF-AS to BDNF in MS. In one study it was noted that the stability of BDNF sense RNA was not affected by BDNF-AS. 14 All these lines of evidence suggest that a more complicated pathway is associated with BDNF and BDNF-AS regarding their inter-regulation. 13

Next, our data revealed no correlation with clinical parameters such as disease duration, age at onset and EDSS, which was confirmed by another study at the protein level. 18 A strong and significant positive correlation (r=0.785, p<0.0001) was furthermore observed between BDNF and BDNF-AS, implying their strong interaction in MS. Interestingly, previous studies suggested a negative correlation between them; nonetheless, our study provided contrary results in whole blood samples from the MS remission phase. 14,33 We can speculate that, because our sampling was conducted on patients in the remission phase of MS, axonal degeneration might be more emphasized in the relapsing phase. 35,36

It was recently proposed 37-39 that BDNF and BDNF-AS can interact with some components of epigenetic pathways and the methylation process (Figure 2). Consistent with that, BDNF was reported to be regulated at the transcriptional level by methyl CpG binding protein 2 (MeCP2). 8
This can be taken as a new mechanism in re-myelination and/or myelin repair in MS. In part, activation of T cells can result in TNF expression, leading to BDNF induction. BDNF then acts as a chemo-attractant signal which draws nerve growth factor (NGF) into regions with elevated BDNF (Figure 2). 8

Briefly, BDNF and BDNF-AS expression levels revealed insignificant discrepancies between patients and controls. We found a strong and positive correlation between BDNF and BDNF-AS in MS patients, which is, based on previous studies, a quite novel finding and can be further discussed by future works to unravel its possible application in MS. As a limitation of our study, it would have been better to measure the mRNA levels of these genes in the different phases of multiple sclerosis. We suggest evaluation of different leukocyte subsets separately, along with large cohort studies comprising a higher number of individuals of different ages, to unravel the effects of other possible aspects.

Figure 2. Schematic view of the brain-derived neurotrophic factor (BDNF), tumor necrosis factor (TNF) and nerve growth factor (NGF) signaling triad. Activation of T cells results in TNF expression, leading to the induction of BDNF and NGF expression. BDNF acts as a chemo-attractant signal which draws NGF into the regions of increased BDNF. BDNF moreover induces the expression of NGF and TNF. Increased NGF expression suppresses TNF signaling through tumor necrosis factor receptor 1 (TNFR1), thereby suppressing the TNF/TNFR1 inflammatory effects that bring cell damage and apoptosis. On the other hand, increased NGF expression promotes preferential TNF signaling via tumor necrosis factor receptor 2 (TNFR2). The TNF/TNFR2 anti-inflammatory effects promote remyelination and/or myelin repair. MeCP2 acts as a transcriptional repressor of BDNF. Further, therapeutic interventional strategies are recognized to suppress the activity of MeCP2, consequently removing its repressive effects from BDNF. The resultant elevation in BDNF would restore the homeostatic balance between different cytokines and neurotrophins, which is of essential importance in the process of remyelination and/or myelin repair.

Table 2. Clinical and demographic information of relapsing-remitting multiple sclerosis (RR-MS) patients and healthy controls. *EDSS: Expanded Disability Status Scale of Kurtzke.
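The headline BDNF/BDNF-AS result above rests on Spearman's rank correlation, which is robust to the skew typical of relative-expression data. A minimal sketch of that test follows, using simulated expression values (not the study's data) in which the two transcripts are positively coupled:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
# Simulated relative-expression values for 50 patients; BDNF-AS tracks BDNF.
bdnf = rng.lognormal(mean=0.0, sigma=0.5, size=50)
bdnf_as = bdnf * rng.lognormal(mean=0.0, sigma=0.3, size=50)

rho, p = spearmanr(bdnf, bdnf_as)
print(f"Spearman rho = {rho:.3f}, p = {p:.2g}")
```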
A single coiled-coil domain mutation in hIKCa channel subunits disrupts preferential formation of heteromeric hSK1:hIKCa channels

Abstract: The expression of IKCa (SK4) channel subunits overlaps with that of SK channel subunits, and it has been proposed that the two related subunits prefer to co-assemble to form heteromeric hSK1:hIKCa channels. This implicates hSK1:hIKCa heteromers in physiological roles that might have been attributed to activation of SK channels. We have used a mutation approach to confirm the formation of heteromeric hSK1:hIKCa channels. Introduction of residues within hSK1 that were predicted to impart sensitivity to the hIKCa current blocker TRAM-34 changed the pharmacology of functional heteromers. Heteromeric channels formed between wildtype hIKCa and mutant hSK1 subunits displayed a significantly higher sensitivity and maximum block upon addition of TRAM-34 than heteromers formed between wildtype subunits. Heteromer formation was disrupted by a single point mutation within one COOH-terminal coiled-coil domain of the hIKCa channel subunit. This mutation only disrupted the formation of hSK1:hIKCa heteromeric channels, without affecting the formation of homomeric hIKCa channels. Finally, the Ca2+ gating sensitivity of heteromeric hSK1:hIKCa channels was found to be significantly lower than that of homomeric hIKCa channels. These data confirmed the preferred formation of heteromeric channels resulting from COOH-terminal interactions between subunits. The distinct sensitivity of the heteromer to activation by Ca2+ suggests that heteromeric channels fulfil a distinct function within those neurons that express both subunits.

Introduction

IKCa channels were first described in red blood cells (Balut et al., 2012; Gardos, 1958). Cloning showed they possess approximately 40% identity with SK1-3 subunits, and they were originally termed SK4 (Joiner et al., 1997). However, they are now considered to be a separate subfamily (termed KCa3.1; Balut et al., 2012; Brown et al., 2018; Turner et al., 2015) and are now more commonly termed IKCa (Higham et al., 2019; Ishii et al., 1997; Wulff et al., 2001). The belief that the physiological roles of SK1-3 and IKCa channels are distinct has been challenged in recent years by immunocytochemistry, electrophysiology and pharmacology showing IKCa subunit expression coincident with expression of SK1-3 channel subunits in some neurons (King et al., 2015; Sailer et al., 2002; Stocker & Pedarzani, 2000; Turner et al., 2015, 2016) and cardiac tissue (Tuteja et al., 2005; Weisbrod et al., 2013). This suggests a greater role for IKCa channel subunits in the body than has previously been considered. Any role might be more widespread when it is considered that human (h) SK1 subunits and IKCa subunits have been shown to preferentially co-assemble when co-expressed in a heterologous expression system (Higham et al., 2019). The resulting heteromeric hSK1:hIKCa channel has a distinct single-channel conductance and altered pharmacology. IKCa-mediated current is blocked by the clotrimazole analog TRAM-34 (Wulff et al., 2001). Block is mediated by association of TRAM-34 with residues threonine 250 in the pore loop and valine 275 in transmembrane domain S6 (Wulff et al., 2001). Interaction with T250 might suggest that TRAM-34 is an open-channel blocker, although that has yet to be determined. It has been suggested that the pyrazole ring of TRAM-34 uses the side chains of V275 on each subunit to be orientated for interaction with T250 of one subunit (Brown et al., 2018). This proposal suggests that high sensitivity to block of IKCa channel current by TRAM-34 requires all four IKCa channel subunits.
Co-expression of SK1 and IKCa subunits produced a current that displayed a lower sensitivity to TRAM-34, with a significantly reduced maximum block (Higham et al., 2019), supporting this mechanism of block of homomeric IKCa channel current.

SK channels assemble via interactions between coiled-coil domains (CCDs) within the COOH terminus of each subtype (Church et al., 2015; Tuteja et al., 2010). CCDs are sequences implicated in protein-protein interactions, with the COOH termini of both SK and IKCa subunits displaying such domains close to the calmodulin-binding domain (CaMBD) (Ji et al., 2018; Kim et al., 2007). Sequence analysis suggests that hIKCa subunits contain two separate COOH-terminal CCDs, both displaying a histidine residue (H358 and H389). It has been suggested that histidine residues might be particularly important in protein function and assembly, with the phosphorylation state of H358 in hIKCa regulating channel activity (Srivastava et al., 2006).

We have used a mutation approach to investigate the preferred formation of heteromeric hSK1:hIKCa channels (Higham et al., 2019). Heteromeric channels containing hSK1 subunits mutated to be sensitive to the hIKCa inhibitor TRAM-34 (Wulff et al., 2001) displayed a changed pharmacology when compared with co-expression of wildtype subunits. These data, together with no evidence of homomeric channels, confirmed the preferential formation of heteromeric hSK1:hIKCa channels. Heteromeric channel formation was disrupted by a point mutation within the CCD in the COOH terminus of IKCa, the mutation not affecting homomeric channel formation. Finally, we demonstrated that the heteromer displays a calcium sensitivity distinct from homomeric hIKCa channels that could be exploited by the cell to optimize function.

Cell culture and transient transfection

TsA201 cells were maintained in culture at 37°C and 5% CO2, being grown in culture flasks with Dulbecco's Modified Eagle Medium (DMEM) supplemented with foetal bovine serum (10%) and penicillin and streptomycin (1%). Cells were passaged at ~80% confluence. The cDNAs of KCa channel subunits, together with one encoding enhanced green fluorescent protein (eGFP) (in pcDNA3.1 or equivalent vectors), were transfected into tsA201 cells using polyethylenimine (PEI) (Merck/Sigma, UK) at a concentration of 0.2 mg/mL (Higham et al., 2019). Cells were used 48 h after transfection. All mutations were carried out using QuikChange II XL (Stratagene).

Sensitivity of expressed channels to Ca2+ was assessed using inside-out macropatches excised from eGFP-positive cells, using electrodes of resistance 1-3 MΩ. A range of free Ca2+ concentrations was used, from 30 nM to 3 μM in half log10 increments. External solution was used to bathe the inside-out patches, with the free Ca2+ concentration calculated to maintain free Mg2+ at 1 mM. Solutions were designed using REACT, with pH adjusted to 7.4 and osmolarity between 280 and 310 mOsm.
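Half-log10 spacing, as used for the free Ca2+ range above, gives test concentrations evenly spaced on a log axis, which suits the Hill-type fits described next. A quick illustrative calculation of the resulting series (values approximate):

```python
import numpy as np

# Free Ca2+ test series: 30 nM to 3 uM in half-log10 steps
conc_nM = 30.0 * 10.0 ** (0.5 * np.arange(5))
print(np.round(conc_nM))  # [  30.   95.  300.  949. 3000.]
```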
| Data analysis

For concentration-inhibition relationships, data points representing inhibition of current amplitude were fit with a variable-slope Hill equation of the form

$$\frac{I}{I_{\mathrm{control}}} = A_{\min} + \frac{A_{\max} - A_{\min}}{1 + 10^{\,(X - \log \mathrm{IC}_{50})\,n_H}}$$

where I_control is current amplitude in the absence of drug, X is the drug concentration expressed in logarithmic units, I is current amplitude in the presence of drug, A_min is I_min divided by I_control, A_max is I_max divided by I_control, IC50 is the concentration of drug that blocks 50% of current that is sensitive to that drug, and n_H is the Hill coefficient (GraphPad Prism v9).

Biphasic concentration-inhibition relationships were fitted using a two-component form of the same equation,

$$\frac{I}{I_{\mathrm{control}}} = A_{\min} + (A_{\max} - A_{\min})\left[\frac{F_a}{1 + 10^{\,(X - \log \mathrm{IC}_{50(a)})\,n_{H(a)}}} + \frac{1 - F_a}{1 + 10^{\,(X - \log \mathrm{IC}_{50(b)})\,n_{H(b)}}}\right]$$

where F_a is the fraction of the drug-sensitive current inhibited with the high-sensitivity IC50(a) and Hill coefficient n_H(a), and the remainder (1 − F_a) is inhibited with IC50(b) and n_H(b).

| Confirmation of preferential formation of heteromeric hSK1:hIKCa channels

We elected to utilise a change in pharmacology resulting from mutations in hSK1 to confirm the preferential formation of heteromeric channels when co-expressed with hIKCa subunits. We introduced two point mutations within hSK1 (hSK1[S348T + A371V]) to attempt to make evoked current sensitive to block by TRAM-34 (Wulff et al., 2001). Expression of hSK1[S348T + A371V] subunits produced functional current that displayed characteristic inward rectification and sensitivity to inhibition by extracellularly applied apamin (Figure 1ai). Our attempts to characterize the pharmacology of hSK1[S348T + A371V]-mediated current were hampered by run-down of evoked current (Figure 1b). Comparison of current amplitudes over time showed that, in contrast to wildtype hSK1-, wildtype hIKCa- or heteromeric wildtype hSK1:hIKCa-mediated currents, both homomeric and heteromeric channel currents containing the hSK1[S348T + A371V] subunit displayed significant run-down (Figure 1b). Construction of a diary plot illustrated that hSK1[S348T + A371V]-mediated current decreased in amplitude, displaying a run-down τ of 2 min (Figure 1b). Block of hSK1[S348T + A371V]-mediated current was resolved by early addition of TRAM-34, with current amplitude decreasing with a τ of approximately 20 s (Figure 1b). As predicted, introduction of the threonine (T250) and valine (V275) residues found in the hIKCa subunit sequence into hSK1 rendered the channel sensitive to block by TRAM-34 (Wulff et al., 2001). In contrast to wildtype hSK1-mediated current being insensitive to TRAM-34 (Higham et al., 2019), hSK1[S348T + A371V]-mediated current was fully blocked by 10 μM TRAM-34 (Figure 1aii,b).
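For readers wanting to reproduce this style of analysis outside GraphPad Prism, a minimal sketch of the single-site and two-component fits is given below. The equations follow the forms reconstructed in the Data analysis section; the data points are invented placeholders, not values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, a_min, a_max, log_ic50, n_h):
    # Variable-slope Hill equation; x = log10([drug] in M), returns I/I_control.
    return a_min + (a_max - a_min) / (1.0 + 10.0 ** ((x - log_ic50) * n_h))

def hill_biphasic(x, a_min, a_max, f_a, log_a, n_a, log_b, n_b):
    # Two-component form: fraction f_a of the sensitive current follows
    # IC50(a)/nH(a); the remainder follows IC50(b)/nH(b).
    remaining = (f_a / (1.0 + 10.0 ** ((x - log_a) * n_a))
                 + (1.0 - f_a) / (1.0 + 10.0 ** ((x - log_b) * n_b)))
    return a_min + (a_max - a_min) * remaining

# Placeholder concentration-inhibition data (log10 M vs. I/I_control).
x = np.log10([1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
y = np.array([1.00, 0.97, 0.85, 0.45, 0.12, 0.05])

popt, _ = curve_fit(hill, x, y, p0=[0.0, 1.0, -7.0, 1.0])
a_min, a_max, log_ic50, n_h = popt
print(f"IC50 ~ {10 ** log_ic50 * 1e9:.0f} nM, nH ~ {n_h:.2f}")
```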
Co-expression of hSK1[S348T + A371V] and hIKCa subunits produced current that displayed inward rectification and was insensitive to application of apamin (100 nM) (Figure 2a). The first observation that indicated inclusion of the hSK1[S348T + A371V] subunit within the expressed heteromeric channel was that current was fully blocked by TRAM-34 (10 μM) (Figure 2b).

FIGURE 1 Generation of a pore mutant hSK1 subunit produced a current sensitive to both apamin and TRAM-34. (ai) Current-voltage (IV) relationship for hSK1[S348T + A371V] generated by a voltage ramp from −100 to +100 mV (1 s duration) in the absence and presence of apamin (100 nM). The Ca2+-dependent inward rectifying current was completely inhibited by apamin, leaving a linear IV relationship. (aii) IV relationship for hSK1[S348T + A371V] in the absence and presence of TRAM-34 (10 μM), showing the mutant hSK1 channel current was sensitive to the hIKCa inhibitor. (b) Diary plot showing amplitude of channel currents over time. Amplitudes of hSK1- and hSK1:hIKCa-mediated currents were stable, while current run-down was observed for both homomeric and heteromeric channels containing the hSK1[S348T + A371V] subunit. Homomeric hSK1[S348T + A371V]-mediated current decreased in amplitude with a τ of 2 min (fit shown as solid line). Inhibition of hSK1[S348T + A371V]-mediated current by TRAM-34 (10 μM) was resolved by early application, causing inhibition of current with a τ of 20 s (fit shown as solid line).

| A point mutation within the C-terminal coiled-coil domain of the IKCa subunit [H389E] permits homomeric channel assembly but disrupts preferential formation of heteromers with hSK1 subunits

It is proposed that assembly of both homomeric hSK and hIKCa channels is mediated by interactions between CCDs within the COOH terminal of each subtype (Church et al., 2015; Ji et al., 2018; Tuteja et al., 2010). hIKCa subunits contain two separate COOH-terminal CCDs, with H358 in the first domain being able to regulate channel activity (Srivastava et al., 2006). Targeting H389 would determine whether the second CCD is involved in assembly or function of homomeric or heteromeric channels.

We used pharmacology to determine whether heteromeric channel formation was affected by the H389E mutation. Heteromeric hSK1:hIKCa channel current had previously been shown to be sub-maximally inhibited by TRAM-34 with a greatly reduced potency when compared with inhibition of wildtype hIKCa channel current (Higham et al., 2019) (Figure 5a,b). Application of increasing concentrations of TRAM-34 incrementally inhibited current from cells co-expressing hSK1 and hIKCa[H389E] subunits (Figure 5a). The concentration-inhibition relationship was best fit by a single Hill equation, with an increased sensitivity when compared with inhibition of heteromeric currents elicited by wildtype subunits (Figure 5b). These data indicated that the H389E mutation within the COOH terminus of the hIKCa subunit disrupted preferential formation of heteromers.
| Sensitivity to the SK channel inhibitor UCL1684

Bis-quinolinium cyclophane SK channel blockers with nanomolar potency have been designed, with UCL1684 being used commonly (Campos Rosa et al., 1998; Hancock et al., 2015). The original description of the preferential formation of hSK1:hIKCa channels reported that heteromeric current was relatively insensitive to UCL1684, with an approximately 10%-15% block of current observed with a high concentration (100 nM) of the SK channel blocker (Higham et al., 2019). We re-evaluated this conclusion by determining whether expressed hSK1:hIKCa-mediated current was sensitive to higher concentrations of UCL1684. Expressed heteromeric hSK1:hIKCa-mediated current was clearly less sensitive to inhibition by UCL1684 than homomeric hSK1 current, with heteromeric current inhibited only by high concentrations of UCL1684 (Figure 7a). The concentration-inhibition relationship illustrated that the current was sub-maximally inhibited with an IC50 of 236 ± 82 nM (n = 3) (Figure 7b,c). In contrast, homomeric hSK1-mediated current was maximally inhibited with an IC50 of 783 ± 380 pM (n = 3) (Figure 7b,c). We elected to use this ~300-fold difference in sensitivity between homomeric hSK1-mediated and heteromeric hSK1:hIKCa-mediated current to confirm that the mutation H389E disrupted heteromeric channel formation. Generation of the concentration-inhibition relationship for UCL1684 affecting expressed hSK1:hIKCa[H389E]-mediated current showed a two-component curve with maximal inhibition of 50.9 ± 12.5%, which was significantly reduced when compared with inhibition of hSK1-mediated current (p = 0.0171*) (Figure 7b). The IC50(a) of 511 ± 0.2 pM and n_H(a) of 0.910 ± 0.02 (n = 3) of the high-sensitivity component were not significantly different from the IC50 (p = 0.545) and n_H (p = 0.354) of UCL1684 inhibition of homomeric hSK1-mediated current (Figure 7b,c). The IC50(b) of 301 ± 84.6 nM and n_H(b) of 0.930 ± 0.1 (n = 3) of the low-sensitivity component were not significantly different from the IC50 (p = 0.609) and n_H (p = 0.350) of UCL1684 on heteromeric hSK1:hIKCa-mediated current (Figure 7b,c). These data confirmed that substitution of H389 by glutamate within the distal CCD of the hIKCa subunit disrupted either the formation or the stability of the heteromeric channel, resulting in a population of homomeric hSK1 channels being resolved.
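To make the discrimination window concrete, the short sketch below evaluates simple single-site block at the reported IC50s for homomeric hSK1 (783 pM) and heteromeric hSK1:hIKCa (236 nM) current. The Hill coefficient of 1 is an assumption for illustration (close to the reported values of ~0.9); the ~300-fold separation is what allows the two channel populations to be resolved in the same recording.

```python
# Sketch: expected fractional block per channel population at a given
# UCL1684 concentration, assuming single-site behaviour (nH = 1 assumed).
IC50_HSK1 = 783e-12   # homomeric hSK1 (reported)
IC50_HET = 236e-9     # heteromeric hSK1:hIKCa (reported)

def frac_blocked(conc, ic50, n_h=1.0):
    return 1.0 / (1.0 + (ic50 / conc) ** n_h)

for conc in (1e-9, 10e-9, 100e-9, 10e-6):
    print(f"{conc * 1e9:>8.0f} nM UCL1684: "
          f"hSK1 {frac_blocked(conc, IC50_HSK1):5.1%}, "
          f"heteromer {frac_blocked(conc, IC50_HET):5.1%}")
print(f"sensitivity ratio ~ {IC50_HET / IC50_HSK1:.0f}-fold")
```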
| DISCUSSION

Mutating hSK1 to introduce the equivalent T250 and V275 residues found in hIKCa produced a homomeric hSK1-mediated current that was now sensitive to the hIKCa inhibitor TRAM-34. Co-expression of the mutant hSK1 and wildtype IKCa subunits produced a heteromeric current of changed pharmacology when compared with co-expression of wildtype subunits. Addition of apamin had no effect, clearly showing the lack of apamin-sensitive homomeric hSK1 channels. These data confirmed that the co-expression of hSK1 and hIKCa channel subunits produced the preferential formation of heteromeric channels. Apamin is a negative allosteric inhibitor of SK1-3 channels that binds to the S3-S4 extracellular loop and outer pore to inhibit channel activity (Lamy et al., 2010), with the S3-S4 loop being donated to the adjacent subunit (Weatherall et al., 2011). The structure of hSK1 predicted by AlphaFold shows the S3-S4 extracellular loop to be in a favourable position to be donated to the adjacent subunit (Figure 9ai) (Lee & MacKinnon, 2018). Current inhibition by apamin shows positive cooperativity, which has been shown to result from interactions between adjacent subunits (Lamy et al., 2010). Co-expression of hIKCa and hSK1 subunits produces a heteromeric channel that is insensitive to apamin, which indicates that subunits must be alternately arranged within the tetramer (Higham et al., 2019). This arrangement places the S3-S4 extracellular loop and the outer pore of different hSK1 subunits too far apart to permit apamin binding. Construction of a model heteromer between hIKCa and hSK1 subunits illustrates how far apart the two binding sites for apamin will be (Figure 9aii). The arrangement of the four residues that are necessary for TRAM-34 binding (Brown et al., 2018) is present in wildtype hIKCa channels (Figure 9bi) but absent in heteromeric hSK1:hIKCa channels (Figure 9bii). Recreation of this binding pocket through inclusion of hSK1[S348T + A371V] subunits in heteromeric channels (Figure 9biii) therefore produced a current inhibited by TRAM-34 with the same sensitivity as wildtype hIKCa channels (Figure 3b).

FIGURE 8 Differing sensitivities to activation by Ca2+ of homo- and heteromeric channels. (a) IV relationships for hSK1-, hIKCa- and hSK1:hIKCa-evoked currents in the presence of increasing concentrations of Ca2+. (b) Ca2+ activation curves showing that homomeric hIKCa channels appeared most sensitive to activation by Ca2+ (EC50 = 409 ± 15.3 nM) (n = 5), with a relationship that demonstrated that these channels were active at resting Ca2+ levels. In contrast, both homomeric hSK1 (EC50 = 531 ± 78 nM) (n = 4) and heteromeric hSK1:hIKCa (EC50 = 613 ± 27.6 nM) (n = 7) channels appeared less sensitive, with the Ca2+ dependence of heteromeric channels being consistent with those that underlie the hippocampal neuron slow afterhyperpolarization. (c) EC50 values for activation of hSK1, hIKCa and hSK1:hIKCa channels by Ca2+. There was no significant difference between EC50 values of hSK1 and hIKCa (p = 0.129) or hSK1 and hSK1:hIKCa channels (p = 0.359). However, there was a significant rightward shift of the EC50 value of hSK1:hIKCa channels compared with hIKCa channels (p = 0.0004***). Significance determined by one-way ANOVA.

We investigated whether the assembly of heteromers utilizes a CCD within the COOH terminus of the hIKCa channel subunit. We elected to investigate the role of H389 in subunit assembly, as protonation of the
imidazole side chain of histidine residues at lower intracellular pH can be important in protein-protein interactions and assembly (Schonichen et al., 2013). Substitution of H389 within the hIKCa COOH-terminal CCD with a glutamate (E) residue disrupted the preferred formation of heteromeric hSK1:hIKCa channels, without affecting the assembly of homomeric hIKCa channels. Finally, the functional consequences of heteromeric channel formation were investigated by measurement of the calcium sensitivity of channel activation. We demonstrated that the heteromer displays a calcium sensitivity distinct from IKCa channels but similar to hSK1 channels. These data show that heteromeric channel formation utilizes a CCD within the COOH terminal of the hIKCa subunit that is different from that used to generate homomeric hIKCa channels. It is clear that the formation of heteromers produces a channel that displays a sensitivity to activation by calcium ions that could be exploited by the cell to optimize function.

It has been proposed that SK channels utilize the coiled-coil domain (CCD) within the COOH terminus to assemble functional channels (Church et al., 2015; Ji et al., 2018; Tuteja et al., 2010). All SK channel subunits contain one CCD within their COOH termini, whereas the hIKCa subunit possesses two within its COOH terminus. The H389E mutation was introduced into the second (distal) CCD within the hIKCa C-terminus. This mutation revealed the different structural requirements for assembly of homomeric and heteromeric IKCa channels, because it did not affect formation of homomeric hIKCa channels but did disrupt assembly of heteromeric channels. Resolution of an apamin- and UCL1684-sensitive current in cells co-expressing the mutant hIKCa and wildtype hSK1 subunits indicated the presence of homomeric hSK1 channels. It is interesting to note that the pharmacology of the largest current component in cells co-expressing these subunits reflected the presence of heteromeric hSK1:hIKCa(H389E) channels. These data indicated that the H389E mutation did not prevent heteromer formation but only disrupted it. The H389E mutation would only disrupt the second C-terminal CCD of hIKCa subunits and not the first. Functional homomeric hIKCa(H389E) channels were expressed, suggesting that assembly of homomeric channels is mediated by the first CCD, while heteromeric hSK1:hIKCa channel co-assembly utilises the second COOH-terminal CCD.
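The functional consequence of the rightward-shifted Ca2+ dependence can be illustrated with a Hill activation curve evaluated at a resting-like Ca2+ level. In the sketch below the EC50 values are those reported in Figure 8; the Hill coefficients are not given in this excerpt and are illustrative assumptions chosen to match the stated activities (e.g., ~10% activity of hIKCa at 100 nM Ca2+).

```python
# Sketch: fractional activation Po/Po_max = 1 / (1 + (EC50/[Ca])^nH).
# EC50 values from Figure 8; nH values are ASSUMED for illustration only.
def frac_active(ca, ec50, n_h):
    return 1.0 / (1.0 + (ec50 / ca) ** n_h)

ca = 100e-9  # resting-like free Ca2+ test point
for name, ec50, n_h in [("hIKCa", 409e-9, 1.5),        # nH assumed
                        ("hSK1", 531e-9, 4.0),          # nH assumed
                        ("hSK1:hIKCa", 613e-9, 4.0)]:   # nH assumed
    print(f"{name:>11}: {frac_active(ca, ec50, n_h):.1%} active at 100 nM Ca2+")
# -> hIKCa ~11% active; hSK1 and the heteromer essentially inactive.
```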
FIGURE 9 The apamin binding pocket is absent in heteromeric hSK1:hIKCa channels, while inclusion of the hSK1[S348T + A371V] subunit in heteromeric channels recreates the TRAM-34 binding domain. (ai) Prediction of hSK1 channel structure by AlphaFold shows the donation of the S3-S4 extracellular loop (yellow) to the adjacent subunit, creating the two-site binding motif for apamin (between loop [yellow] and outer pore [red]). (aii) Preferential formation of heteromeric channels produces a tetramer where subunits alternate in identity (hSK1, blue; hIKCa, orange). This arrangement of subunits places too great a distance between the S3-S4 extracellular loop and outer pore of the hSK1 subunits within the heteromeric tetramer, rendering the heteromeric channel insensitive to apamin. (bi) In wildtype hIKCa channels, T250 and V275 (magenta) form a TRAM-34 binding pocket below the selectivity filter. (bii) Heteromeric channels preferentially formed upon co-expression of wildtype hSK1 (blue) and hIKCa (orange) subunits disrupt this binding pocket and render the heteromeric channel less sensitive to inhibition by TRAM-34. (biii) Inclusion of the mutated hSK1[S348T + A371V] subunits within heteromeric channels recreates the TRAM-34 binding pocket, increasing the sensitivity to inhibition by TRAM-34.

Co-expression of hSK1 and hIKCa subunits produced preferential formation of heteromeric channels (Higham et al., 2019). Both subunits are expressed in the soma of hippocampal neurons (Bowden et al., 2001; Sailer et al., 2002; Turner et al., 2015) and the slow afterhyperpolarization (slow AHP) is somatic in origin (Lima & Marrion, 2007). The heteromeric channel shares pharmacological properties with the channel that underlies generation of the slow AHP (Higham et al., 2019; King et al., 2015). For example, hSK1:hIKCa channel current is sensitive to analogs of clotrimazole (e.g., TRAM-34) and charybdotoxin (Higham et al., 2019), as is the slow AHP (King et al., 2015; Shah & Haylett, 2000). In contrast, both hSK1:hIKCa-mediated current and the slow AHP are insensitive to apamin (Church et al., 2019; Higham et al., 2019). These data indicate that homomeric SK1 channels do not underlie the slow AHP, whereas heteromeric hSK1:hIKCa channels are clearly a candidate channel to underlie generation of the slow afterpotential. The conductance of the channel underlying the slow AHP has been estimated to be in excess of 20 pS (Lima & Marrion, 2007), which is more consistent with the conductance of the heteromer than with homomeric SK channels (Higham et al., 2019; Hirschberg et al., 1999). This conductance estimate for the channel underlying the slow AHP was derived from channels being activated by delayed facilitation of L-type Ca2+ channels (Cloues et al., 1997; Sahu et al., 2017; Sahu & Turner, 2021). Non-stationary noise analysis estimated an open probability (Po) of the channel at the peak of the slow AHP to be 0.6 (Valiante et al., 1997), and a very similar Po value was observed for the outward channels evoked by delayed facilitation of priming L-type channels (Lima & Marrion, 2007). These properties led to the suggestion that a submembrane concentration of approximately 0.5-1 μM Ca2+ would be required for this estimated Po (Marrion & Tavalin, 1998). The Ca2+ sensitivity of heteromeric hSK1:hIKCa channels (EC50 = 613 nM, inactive at 100 nM Ca2+) showed a tendency to be lower than the Ca2+ sensitivity of expressed homomeric hSK1 channels (EC50 = 531 nM, inactive at 100 nM Ca2+) and was significantly lower than
homomeric hIKCa channels (EC50 = 409 nM, 10% active at 100 nM Ca2+). This lower sensitivity, together with the positive cooperativity of activation of hSK1:hIKCa channels by Ca2+, means that an increase of submembrane Ca2+ concentration would have a graded effect on heteromeric channel open probability. This allows for small changes in hSK1:hIKCa channel Po based on how much Ca2+ accumulates in the submembrane domain. In contrast, the higher cooperativity of homomeric hSK1 channel activation by Ca2+ would likely result in those channels being maximally activated. This is pertinent when it is considered that the amplitude of the slow AHP is dependent on the number of action potentials within the preceding burst (Church et al., 2019). This might reflect an increase in delayed facilitation of L-type Ca2+ channels (Cloues et al., 1997), which will increase the likelihood of functional coupling between colocalised Ca2+ and outward channels (Marrion & Tavalin, 1998). This would also raise submembrane Ca2+ levels to increase outward channel open probability and increase the slow AHP amplitude. Therefore, it can be suggested that the pharmacology and Ca2+ dependence of activation of heteromeric hSK1:hIKCa channels place them as a strong candidate to underlie generation of the slow AHP.

The channel underlying the slow AHP is a therapeutic target for the treatment of cognitive impairment (Disterhoft et al., 1996; Moyer et al., 2000; Power et al., 2001), as the slow AHP amplitude increases with age (Disterhoft et al., 1996; Moyer et al., 2000; Power et al., 2001) and upon acute overexpression of those tau isoforms that are implicated in dementia (Stan et al., 2022). Resolution of the subunit identity comprising this elusive channel will be required to enable a new approach to target cognitive deficits in disease.

(Figure legend fragment) ...illustrates comparison of IC50 values showing a significant difference in values obtained for homomeric WT hIKCa and heteromeric WT hSK1:hIKCa currents.

FIGURE 2 Expression of hSK1[S348T + A371V] with WT IKCa subunits produced a current that was insensitive to apamin but inhibited by TRAM-34. (a) IV relationship generated by a voltage ramp from −100 to +100 mV (1 s duration) in the absence and presence of apamin (100 nM), showing that the heteromeric channel current is insensitive to the SK channel inhibitor. (b) IV relationship in the absence and presence of TRAM-34 (10 μM), showing the current recorded from cells expressing both hSK1[S348T + A371V] and WT IKCa subunits was fully sensitive to the hIKCa inhibitor.

FIGURE 3 Comparison of sensitivities to inhibition by TRAM-34. (a) Concentration-inhibition relationships generated by inhibition of expressed channel currents by increasing concentrations of TRAM-34. Homomeric hIKCa-mediated current was most sensitive and fully blocked by TRAM-34. Current from cells expressing hSK1[S348T + A371V] and WT IKCa subunits was also fully sensitive to TRAM-34, with an intermediate sensitivity between homomeric hIKCa- and heteromeric hSK1:hIKCa-mediated current. (b) Mean IC50 values obtained from inhibition of each current subtype, with significance determined by one-way ANOVA.

FIGURE 6 Disruption of heteromeric channel assembly reveals apamin-sensitive hSK1-mediated current. (a) Addition of apamin (100 nM) had no effect on current resolved from cells co-expressing wildtype hSK1 and hIKCa subunits (n = 4). (b) Addition of apamin (100 nM) partially inhibited current resolved from cells co-expressing wildtype hSK1 and the mutant hIKCa[H389E] subunits (n = 6).
FIGURE 7 Homomeric hSK1-mediated current identified by inhibition by UCL1684 resulting from disruption of heteromeric hSK1:hIKCa(H389E) channel formation. (a) IV relationships for hSK1 (left), hSK1:hIKCa (WT) (centre) and hSK1:hIKCa[H389E] (right) evoked currents in the absence and presence of increasing concentrations of UCL1684. (b) Concentration-inhibition relationships for the effect of UCL1684 on homomeric hSK1, heteromeric hSK1:hIKCa (WT) and heteromeric hSK1:hIKCa[H389E] channel currents. Homomeric hSK1-mediated current was fully inhibited by UCL1684 with an IC50 of 783 pM (n = 3), while both populations of heteromeric currents were only partially inhibited by UCL1684. Heteromeric channels comprised of wildtype hSK1 and hIKCa (WT) subunits displayed a single-component inhibition relationship with an IC50 of 236 nM (n = 3). In contrast, heteromeric channel current from cells co-expressing wildtype hSK1 and mutant hIKCa[H389E] subunits displayed a two-component inhibition relationship. The first component reflected inhibition of homomeric hSK1 channels (IC50 511 pM), while the second component reflected inhibition of heteromeric hSK1:hIKCa[H389E] channel current (IC50 301 nM) (n = 3). (c) Mean ± s.e.m. IC50 values for homomeric hSK1- and heteromeric hSK1:hIKCa-mediated current and the comparison with the IC50 values from each phase of the inhibition relationship produced from cells co-expressing hSK1 and hIKCa[H389E] subunits. Significance was determined using the unpaired Student's t test.
Effect of Hemadsorption Therapy in Critically Ill Patients with COVID-19 (CYTOCOV-19): A Prospective Randomized Controlled Pilot Trial

Introduction Immunomodulatory therapies have shown beneficial effects in patients with severe COVID-19. Patients with hypercytokinemia might benefit from the removal of inflammatory mediators via hemadsorption.

Methods Single-center prospective randomized trial at the University Medical Center Hamburg-Eppendorf (Germany). Patients with confirmed COVID-19, refractory shock (norepinephrine ≥0.2 µg/kg/min to maintain a mean arterial pressure ≥65 mm Hg), interleukin-6 (IL-6) ≥500 ng/L, and an indication for renal replacement therapy or extracorporeal membrane oxygenation were included. Patients received either hemadsorption therapy (HT) or standard medical therapy (SMT). For HT, a CytoSorb® adsorber was used for up to 5 days and was replaced every 18-24 h. The primary endpoint was sustained hemodynamic improvement (norepinephrine ≤0.05 µg/kg/min for ≥24 h).

Results Of 242 screened patients, 24 were randomized and assigned to either HT (N = 12) or SMT (N = 12). Both groups had similar severity as assessed by SAPS II (median 75 points HT group vs. 79 SMT group, p = 0.590) and SOFA (17 vs. 16, p = 0.551). Median IL-6 levels were 2,269 (IQR 948-3,679) and 3,747 (1,301-5,415) ng/L in the HT and SMT groups at baseline, respectively (p = 0.378). Shock resolution (primary endpoint) was reached in 33% (4/12) versus 17% (2/12) in the HT and SMT groups, respectively (p = 0.640). Twenty-eight-day mortality was 58% (7/12) in the HT compared to 67% (8/12) in the SMT group (p = 1.0). During the treatment period of 5 days, 6/12 (50%) of the SMT patients died, in contrast to 1/12 (8%) in the HT group.

Conclusion HT was associated with a non-significant trend toward clinical improvement within the intervention period. In selected patients, HT might be an option for stabilization before transfer and further therapeutic decisions. This finding warrants further investigation in larger trials.

In severe COVID-19, a dysregulated systemic immune overactivation causes the elevation of inflammatory cytokines [10, 11]. High interleukin-6 (IL-6) levels have been associated with multiorgan failure and mortality [12-14]. Similar to septic shock caused by bacteria, SARS-CoV-2-associated hyperinflammation can also initiate a proinflammatory feedback loop, triggering hypercytokinemia and leading to hemodynamic instability or even shock [15]. Immunomodulatory therapies, including corticosteroids and IL-6 antagonists, have recently shown beneficial effects [16-18]. Removal of circulating inflammatory mediators by cytokine adsorption might represent a biologically plausible method to achieve a less proinflammatory cytokine milieu, thus conferring significant clinical improvement in severe COVID-19. Hemadsorption using CytoSorb® (CytoSorbents Corporation, Monmouth Junction, NJ, USA) is approved in Europe and has previously been shown to attenuate an excessive systemic inflammatory response [19]. By eliminating various mediators (e.g., IL-1/6/8/10), bacterial toxins, and danger-associated molecular patterns (DAMPs), the treatment may contribute to the hemodynamic stabilization of patients with septic shock [20]. The adsorber consists of porous polystyrene with an effective surface area of >40,000 m², thus allowing permanent binding of molecules in the range of 5-60 kDa in a concentration-dependent manner [21].
The device can be inserted into a renal replacement therapy (RRT) circuit or an extracorporeal membrane oxygenation (ECMO) system [20, 22, 23]. Because of its potentially beneficial effect on critically ill patients with COVID-19, CytoSorb® received emergency use authorization in the US by the FDA [24]. The purpose of this randomized controlled trial was to evaluate the effect of cytokine elimination by hemadsorption on hemodynamics and disease severity in critically ill patients with COVID-19 with proven hypercytokinemia.

The CYTOCOV-19 trial was an investigator-initiated, open-label, prospective, randomized, controlled study in critically ill patients with COVID-19 admitted to the ICUs of the Department of Intensive Care Medicine at the University Medical Center Hamburg-Eppendorf (Germany). The study protocol was approved by the Ethics Committee of the Hamburg Chamber of Physicians (No.: PV7314) and complies with the Declaration of Helsinki.

Patients, Inclusion and Exclusion Criteria

All critically ill patients with confirmed COVID-19 were screened for eligibility. Patients were included when they presented with confirmed COVID-19 and refractory shock with the need for norepinephrine ≥0.2 μg/kg/min to maintain a mean arterial pressure (MAP) ≥65 mm Hg, IL-6 ≥500 ng/L, and a need for RRT and/or ECMO. Exclusion criteria were diagnosis of advanced liver cirrhosis (Child-Pugh C), do-not-resuscitate order, moribund condition, expected survival of less than 14 days due to comorbidities, pregnancy or breastfeeding, or participation in another interventional trial.

Randomization

Eligible patients were randomly assigned in a 1:1 ratio to standard medical therapy (SMT) plus hemadsorption therapy (HT) or SMT alone. The randomization sequence was generated using permuted blocks with a size of 4 and was not stratified. Medical staff involved in patient care were aware of group assignment, since use of a hemadsorption device in addition to standard therapy could not be blinded with reasonable effort.

Trial Intervention

In the intervention group, a hemadsorption device was incorporated into either the RRT or the ECMO system. For HT, a CytoSorb® adsorber (total volume 300 mL, priming volume 120 mL, filled with sterile normal saline) was used and placed in a pre-filter position within the RRT circuit. The device was replaced every 18-24 h. Treatment duration was five consecutive days, and treatment was stopped early when shock reversal was observed for at least 24 h (primary endpoint). Flow rates through the hemadsorption device were above 150 mL/min. Early replacement was indicated when blood flow decreased below 100 mL/min or complications like line clotting were observed. For RRT, the multiFiltratePRO Ci-Ca system was used throughout for pre-dilution CVVHD with the Ultraflux AV 600 polysulfone capillary hemofilter (both Fresenius Medical Care, Bad Homburg, Germany). Blood samples were taken routinely before the initiation of HT and on each subsequent day until day 10. Clinical laboratory parameters included differential blood count, serum electrolytes, kidney and liver function parameters, coagulation, IL-6, mid-regional pro-adrenomedullin (MR-pro-ADM), and procalcitonin (PCT). The reference timepoint was the time of randomization. Patient follow-up was performed for at least 28 days after randomization.
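For illustration, the inclusion criteria above can be encoded as a simple screen. This is a hypothetical helper written for this summary, not software used in the trial, and it simplifies the "norepinephrine dose required to maintain MAP" criterion to a snapshot check.

```python
# Hypothetical eligibility screen for the stated inclusion criteria.
def eligible(covid_confirmed: bool, norepi_ug_kg_min: float, map_mmhg: float,
             il6_ng_l: float, needs_rrt: bool, needs_ecmo: bool) -> bool:
    refractory_shock = norepi_ug_kg_min >= 0.2 and map_mmhg >= 65  # simplified
    return (covid_confirmed and refractory_shock
            and il6_ng_l >= 500 and (needs_rrt or needs_ecmo))

print(eligible(True, 0.35, 68, 2269, True, False))  # True
print(eligible(True, 0.10, 70, 2269, True, False))  # False: shock not refractory
```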
Primary and Secondary Endpoints

The primary endpoint was shock reversal, defined as hemodynamic stabilization with a significant reduction of norepinephrine to a dose of 0.05 µg/kg/min or lower while maintaining MAP ≥65 mm Hg for at least 24 h [20]. Secondary endpoints included improvement of organ dysfunction measured by sequential organ failure assessment (SOFA) score, lactate clearance, time on RRT, time on ECMO, duration of mechanical ventilation, time to shock reversal, length of ICU stay, total vasopressor dose, and ICU and hospital mortality within 28 days. Further secondary endpoints included reduction (≥20%) of IL-6, PCT and MR-pro-ADM within 10 days after randomization.

Study Definitions and Patient Management

Confirmed COVID-19 was defined as at least one positive result of reverse transcriptase-polymerase chain reaction (rt-PCR) for SARS-CoV-2 obtained from naso-pharyngeal swabs and/or bronchial secretions or blood. Acute respiratory distress syndrome (ARDS) was defined according to the Berlin definition, using the PaO2/FiO2 ratio (Horowitz index) [25]. Severity of illness was evaluated by SOFA and simplified acute physiology (SAPS II) scores [26, 27]. A Charlson Comorbidity Index (CCI) was calculated for all patients [28]. Medical treatment was performed following national and international recommendations. Norepinephrine was infused to obtain a MAP above 65 mm Hg [29-31]. ECMO was evaluated in patients with severe refractory hypoxemia (PaO2/FiO2 ratio <80) not responding to conservative ARDS management. RRT was started in patients with severe metabolic acidosis, anuria unresponsive to fluids, hyperkalaemia, and/or uremic complications, according to the most recent Austrian/German recommendations [32, 33]. IL-6 was measured by an electrochemiluminescence assay (Atellica IM Analyzer; Siemens Healthcare GmbH, Erlangen, Germany).

Statistical Analysis

Data are presented as absolute and relative frequency for categorical variables and as median and interquartile range for continuous variables. Categorical variables were compared with χ²-tests or Fisher's exact tests. Continuous variables were compared using the Mann-Whitney U test. Within-group and between-group comparisons of IL-6 levels were Bonferroni corrected for multiple comparisons. Survival function estimates were calculated using the Kaplan-Meier method and were compared using the log-rank test. Statistical tests were two-sided with a 5% significance level and with nominal p values reported for description outside the primary analysis. Statistical analyses were performed using IBM SPSS Statistics Version 24.0 (IBM Corp., Armonk, NY, USA) and GraphPad Prism 9 (GraphPad Software, San Diego, CA, USA). The study was prepared in accordance with the Consolidated Standards of Reporting Trials recommendations.

Results

A total of 242 patients were assessed for eligibility, and 24 patients underwent randomization. Of these, 12 patients were assigned to either the HT group or SMT. The last day of follow-up was May 1, 2021. The flow diagram displaying screening, randomization, and outcomes is depicted in Figure 1.

Characteristics of the Study Population

The characteristics of the study population are shown in Table 1. Thirteen (54%) patients were referred from other hospitals for further intensive care management. Before randomization, patients had been treated in the ICU for a median time of 6.3 days.

Hemadsorption

Details regarding hemadsorption treatment are shown in Table 2.
Time from randomization to the start of hemadsorption was 0.9 (0.5-2.3) h. The hemadsorption device was added to the RRT circuit in 11 (92%) and to the ECMO system in 1 (8%) patient, respectively. All patients were on continuous RRT; anticoagulation was performed using systemic heparin in 1 (8%) patient and regional anticoagulation with citrate-calcium in 11 (92%) patients assigned to the HT group. Overall, 74 hemadsorption devices were used and patients received 6 (5.8-6.3) hemadsorption treatments during the intervention period. Duration of treatment was 22.9 (17.4-24.7) h per adsorber. Overall, 9 (12%) hemadsorption treatments had to be terminated early because of circuit clotting. In addition, treatment duration below 18 h was observed in 5 (7%) treatment sessions, which was due to logistical problems. Due to technical difficulties when exchanging the hemadsorption device within the ECMO circuit, one treatment was prolonged to 46.6 h. No other device-related complications were observed during the intervention period. Two (16%) patients reached the primary trial endpoint before day 5, as predefined in the study protocol, and HT was discontinued at the next planned hemadsorption device exchange.

(Table 1 note: data are expressed as n (%) or median (interquartile range). ARDS, acute respiratory distress syndrome; SOFA, sequential organ failure assessment; SAPS II, simplified acute physiology score II; pts., points; vvECMO, veno-venous extracorporeal membrane oxygenation; IL, interleukin; PCT, procalcitonin; CRP, C-reactive protein; pro-ADM, pro-adrenomedullin. Table 2 note: data are expressed as n (%) or median (interquartile range). SOFA, sequential organ failure assessment; RRT, renal replacement therapy; vvECMO, veno-venous extracorporeal membrane oxygenation; PCT, procalcitonin; IL-6, interleukin-6.)

Analysis of Endpoints and Outcomes

The primary endpoint of shock reversal within 10 days of randomization was reached by 4 patients (33%) in the HT group and 2 patients (17%) in the SMT group (p = 0.640). The time to shock reversal was 6.3 (3.7-10.0) days in the HT and 9.2 (5.1-15.9) days (p = 0.110) in the SMT group. We observed a 28-day mortality of 58% (n = 7) in the HT group and of 67% (n = 8) in the SMT group (p = 0.382; Kaplan-Meier survival estimates, Fig. 2). Primary and secondary endpoints are shown in detail in Table 3.

Discussion

This is the first randomized controlled trial investigating HT for cytokine elimination in critically ill patients with COVID-19 with proven and profound hypercytokinemia. In this study, the primary endpoint of shock reversal was not reached for the intervention group, and we could not demonstrate a significant reduction of IL-6 by HT. However, HT may potentially be accompanied by early clinical stabilization of severely ill patients when compared to SMT.

The COVID-19 pandemic resulted in high hospitalization rates, with up to 5% of patients admitted to the ICU, mainly due to respiratory failure [1, 2, 6]. The interplay between direct viral damage to alveolar epithelial cells and excessive endothelial activation results in SARS-CoV-2-related lung injury accompanied by excessive cytokine production [19]. Extensive pulmonary and multiorgan endothelial lesions are largely described as a hallmark of severe respiratory failure [34]. Among others, high IL-6 levels were observed and strongly associated with multiorgan failure and mortality in critically ill patients with COVID-19 [12, 13].
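For readers who wish to reproduce the survival comparison described in the statistics section, a minimal sketch using the lifelines package is shown below; the per-patient follow-up times are invented placeholders, not the trial's actual data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented placeholder data: follow-up (days), death flag, treatment arm.
df = pd.DataFrame({
    "days": [4, 7, 12, 20, 28, 28, 2, 3, 4, 5, 9, 28],
    "died": [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0],
    "arm":  ["HT"] * 6 + ["SMT"] * 6,
})

kmf = KaplanMeierFitter()
for arm, grp in df.groupby("arm"):
    kmf.fit(grp["days"], event_observed=grp["died"], label=arm)
    print(arm, "28-day survival estimate:", round(float(kmf.predict(28)), 2))

ht, smt = df[df.arm == "HT"], df[df.arm == "SMT"]
result = logrank_test(ht["days"], smt["days"],
                      event_observed_A=ht["died"], event_observed_B=smt["died"])
print("log-rank p =", round(result.p_value, 3))
```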
Hemadsorption techniques targeting circulating inflammatory mediators may lead to a re-balancing of the internal cytokine milieu. Several case series and small studies in patients with COVID-19 or septic shock have shown promising results using hemadsorption [20, 22, 35, 36]. Although initial reports suggested an uncontrolled cytokine response in patients with COVID-19, cytokine levels have been reported to be not as high as in other causes of ARDS [10, 11, 37, 38]. However, some patients exhibit uncontrolled hyperinflammatory cytokine release, which in many cases entails multiple organ dysfunction and death. Therefore, we sought to specifically target the population which might benefit most from hemadsorption treatment by including only severely ill patients with cytokinemia defined as IL-6 ≥500 ng/L accompanied by refractory shock. Recently, a small randomized controlled trial by Supady et al. [39] using HT in COVID-19 patients with ARDS requiring ECMO therapy could not show beneficial effects. The primary endpoint was IL-6 serum concentration 72 h after randomization. However, median baseline IL-6 levels in the intervention group were low (357 ng/L), compared to 2,269 ng/L in our study. Nevertheless, we observed a significant decrease in IL-6 levels both in the intervention and the control group within the first 24 h. We further observed a significant and sustained decrease in PCT (see online suppl. Fig. 1), which supports the effectiveness of HT to reduce PCT, as shown earlier [40]. We hypothesize that initiation of hemadsorption should probably not be based solely on the clinical condition and acute respiratory failure, as recently shown by Supady et al. [39]. As depicted in the flow diagram (Fig. 1), we had a 100% recruitment of suitable patients in the present study (based on clinical/predefined inclusion criteria). We defined a suitable target population that, in contrast to the work of Supady et al. [39], did not show any conspicuous mortality during therapy. Notably, our cohort consisted of severely ill patients, as demonstrated by high SOFA and SAPS II scores, which are usually associated with a mortality rate of above 80% [26, 27]. Observational data suggest improvement of hemodynamics and a trend toward improved mortality with the use of hemadsorption in critically ill patients with septic shock. One study by Friesecke et al. [20] showed that hemoperfusion was associated with decreased vasopressor requirement and shock reversal in 65% of treated patients, and that this was accompanied by a significant reduction of IL-6 and lactate levels. In the present study, we observed a higher rate of shock reversal within 10 days after randomization in patients of the HT (33%) than in the SMT group (17%); however, this did not reach statistical significance. The survival curve in our study shows that the treatment group (HT) had a survival advantage, which was, however, limited to the intervention period (Fig. 2). Whether this survival advantage is attributable to the pre-randomization differences is unclear. Larger studies will have to clarify this finding. We neither observed nor expected differences in 28-day mortality between the two groups. This is in line with a previous RCT of hemadsorption in patients with septic shock, which did not result in improved survival [41, 42]; but again, in this trial, initial IL-6 levels were substantially lower in both groups (median 357 vs. 289 ng/L) compared with those in our study.
In a recent retrospective study with propensity score matching analysis, no difference could be demonstrated in terms of hemodynamic stabilization between cytokine adsorption and SMT. However, this was an uncontrolled short-term (<24 h) intervention in a mixed population with sepsis, septic shock and hyperinflammation due to a variety of causes [43]. Of particular interest is our observation that patients in the HT group could be stabilized during the intervention period compared to patients in the SMT group. We can only speculate whether extended treatment duration or targeting only patients with sustained hyperinflammation would result in beneficial and clinically meaningful effects of hemadsorption. However, addressing this question would require a different study design. To date, specific therapies for severe COVID-19 are scarce. Early stabilization of severely affected patients with proven hypercytokinemia to allow referral to a tertiary care center or to bridge to further interventions might be a reasonable indication for the use of hemadsorption. For this reason, our findings warrant further investigation in larger trials.

This study has important limitations which should be mentioned. We are reporting a small randomized single-center open-label trial. Owing to the small sample size, we observed important differences regarding age and norepinephrine dose at baseline between both groups. Although these differences did not reach statistical significance, they could have influenced the primary outcome. Methodologically, trials involving rather complex medical devices are inherently difficult to double-blind, so that bias cannot be ruled out. Further, even though the flow chart (Fig. 1) shows that we comprehensibly enrolled all available patients after screening for eligibility, this study may still be subject to selection bias, and the external validity of our results may be limited. Although statistically non-significant, there was a noticeable imbalance in norepinephrine dose, age, and arterial pH to the disadvantage of the control group. Before randomization, our patients had been treated in the ICU for a median time of 6.3 days, and more than half of the cohort were referrals from other hospitals. It is conceivable that an earlier initiation of HT might have resulted in more beneficial effects. Lastly, the duration of hemadsorption was prespecified and limited to 5 days. Whether an extended use of hemadsorption beyond 5 days would result in an improved outcome remains unclear. This study also has some strengths. Our study is consistent with previous findings in patients with septic shock. To our knowledge, this is the first study evaluating the efficacy and outcome of HT in critically ill COVID-19 patients with hypercytokinemia, severe systemic inflammation, and multiple organ dysfunction. Screening more than 200 ICU patients yielded inclusion of only 24 patients, which confirms previous findings that uncontrolled hypercytokinemia is only present in some patients, and those might require a tailored and personalized therapeutic approach based on biological plausibility.

Conclusion

Uncontrolled hypercytokinemia accompanied by severe systemic inflammation and multiple organ dysfunction occurs in a subgroup of critically ill patients with COVID-19. There were no effects on IL-6 levels or 28-day mortality. Early mitigation of organ dysfunction leading to clinical stabilization was observed in the HT group.
HT in patients with severe COVID-19 was feasible and safe and might be used for stabilization before transfer to a tertiary care center or to guide decisions on further interventions. Whether a longer duration or an earlier start of HT would prove beneficial remains to be elucidated and warrants further clinical investigation.
Diagnosing cardiovascular disease in western lowland gorillas (Gorilla gorilla gorilla) with brain natriuretic peptide

Cardiovascular disease is a leading cause of death in zoo-housed great apes, accounting for 41% of adult gorilla deaths in North American zoological institutions. Obtaining a timely and accurate diagnosis of cardiovascular disease in gorillas is challenging, relying on echocardiography, which generally requires anesthetic medications that may confound findings and can cause severe side effects in cardiovascularly compromised animals. The measurement of brain natriuretic peptide (BNP) has emerged as a modality of interest in the diagnosis, prognosis and treatment of human patients with heart failure. This study evaluated records for 116 zoo-housed gorillas to determine relationships of BNP with cardiovascular disease. Elevations of BNP levels correlated with the presence of visible echocardiographic abnormalities, as well as reported clinical signs in affected gorillas. Levels of BNP greater than 150 pg/mL should alert the clinician to the presence of myocardial strain and volume overload, warranting medical evaluation and intervention.

Introduction

Cardiovascular disease (CVD) is one of the leading causes of death in both humans and great apes [1-5]. While numerous similarities exist between CVD in these species, there are some unique differences. In the human population, the most common presentation of CVD is congestive heart failure, which is also the leading cause of morbidity, mortality, and hospitalization. This illness is primarily a vascular disease often related to diet and exercise [6]. In contrast, in great apes the most common form of CVD is fibrosing cardiomyopathy of as yet unknown etiology. Forty-one percent of captive adult western lowland gorilla (Gorilla gorilla gorilla) deaths in North American zoological institutions are due to fibrosing cardiomyopathy [7]. In recent decades, great strides have been made in diagnosing and managing great ape CVD. Veterinary medicine has increasingly utilized and adapted diagnostic and therapeutic modalities from human medicine to manage gorilla cardiac disease. Echocardiography has been utilized to create a reference range of normal cardiac measurements, inform diagnostic protocols, and monitor response to treatment in zoologically housed gorillas [8]. However, echocardiography requires specialists to diagnose and interpret findings and often requires anesthesia, which potentially confounds findings [3, 8]. This study examines the application of a serum biomarker, brain natriuretic peptide (BNP), as a diagnostic aid for CVD in the zoological gorilla population. Measurement of serum BNP has emerged as a modality of clinical interest in the diagnosis, prognosis and treatment of human patients with heart failure [6, 9]. This neurohormone is secreted by myocardial cells, particularly left ventricular myocardial cells, in response to cardiomyocyte stretch within the heart [6]. The biomarkers BNP and NT-proBNP (N-terminal pro-brain natriuretic peptide) are the most diagnostic of the natriuretic peptides for cardiac disease [10-12]. In human studies, measuring BNP has been shown to be more sensitive than cardiac ultrasound for determining early CHF, monitoring patient response to treatment, and determining prognosis [6, 9, 10, 13].
The main utility of BNP in human clinical practice includes having an objective marker of intravascular volume status. Given the morbidity and mortality of CVD in captive gorilla populations, an objective diagnostic tool is needed to allow veterinarians to monitor cardiovascular health and response to treatment, and to aid in determining when medical intervention is necessary prior to the presence of advanced clinical decompensation. This study examines the usefulness of BNP in diagnosing and predicting cardiac disease in captive gorillas.

Animal selection

Member institutions of the Association of Zoos and Aquariums housing gorillas and participating in the Great Ape Heart Project based at Zoo Atlanta were invited to participate in a population-based cohort study from 2007-2017 examining cardiovascular data, following previously reported guidelines for great ape cardiovascular and echocardiographic studies [8]. Standardized data collection sheets were provided requesting patient identification number, patient age, sex, medical history, date of sample acquisition, purpose of procedure, anesthetic medications used for sample acquisition, current medications, and relevant echocardiographic findings. Body weight, if available, was provided. Echocardiographic findings were reviewed by investigators. Post-hoc analysis, where necessary, was performed, and measurements were confirmed or calculated off-line by investigators using hand calipers in the case of videotaped studies and on digitally acquired images using Philips R2.5 software (Philips Healthcare, Andover, Massachusetts 01810, USA) and GE Vivid-I (General Electric, Milwaukee, Wisconsin 53209, USA). Cardiac parameters measured included aortic root diameter (Ao Rt), left atrial size (L atrium), and left ventricle (LV) measurements including LV internal diameter in systole (LVIDs) and diastole (LVIDd), as well as diastolic septal (IVS) and posterior wall thickness (LVPW). For the purpose of data entry, estimated ejection fractions (EF) were given a numerical value that represented the average EF. Data and BNP samples were collected under anesthesia, with regimens varying by institution. Medications used for anesthesia included ketamine, tiletamine-zolazepam, medetomidine, propofol, and isoflurane or sevoflurane inhalant anesthesia, among others, with protocols differing between institutions and individual animals. Per standard practice among the majority of facilities, gorillas were fasted prior to sedation. Data were entered into Excel (Microsoft, PTSGE Corp., Seattle, Washington 98104, USA) spreadsheets. Whole blood and plasma samples in EDTA taken at the time of evaluation were submitted to the Smithsonian's National Zoological Park along with the standard data collection sheet. All samples were processed within 48 hours of sampling on the Triage BNP test machine (Triage BNP Test, Biosite, San Diego, CA, USA) according to manufacturer recommendations. The amount of BNP present in the sample was displayed as a number in pg/mL. For the purpose of statistical analysis, readings of "< 5" were converted to the value of "4", and readings of "> 5,000" were converted to the value of "5,000".

Statistical methods

All descriptive statistics and analyses were run in SPSS V20 and MathCad 2013. Descriptive statistics were calculated for all variables over all gorillas and subgroups of sex and health.
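The value-handling rule stated above for out-of-range Triage readings can be expressed as a small parsing helper; this is an illustrative sketch, not the study's actual data-entry code.

```python
# Sketch of the stated rule: "< 5" -> 4, "> 5,000" -> 5000; otherwise numeric.
def bnp_to_number(reading: str) -> float:
    r = reading.replace(",", "").replace(" ", "")
    if r.startswith("<"):
        return 4.0
    if r.startswith(">"):
        return 5000.0
    return float(r)

print([bnp_to_number(x) for x in ["< 5", "37", "1,050", "> 5,000"]])
# -> [4.0, 37.0, 1050.0, 5000.0]
```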
Exploratory analyses were examined by grouping but were not useful because subjective groupings based upon medical diagnoses proved more reliable. Assumptions were tested for each test of hypothesis, and variance-stabilizing transformations were applied using natural logs and Box-Cox methodology accordingly. When correlations were computed, they were Pearson's for continuous measures, and Spearman's was used for measures with two or more categories (dichotomous or polychotomous). General linear models were developed using factors (sex, health) as well as covariates (age) and terms for nonlinearity if necessary. Since age was highly correlated with health status, age was used as a covariate in the model to increase the precision of the model design. This procedure statistically removes from the model that part of the variability that is predictable from age alone. Canonical discriminant analyses were modeled when required as a natural extension of regression analysis for predicting group membership. Post hoc classification tables were based upon Fisher's classification functions.

Results

Records were analyzed for 116 zoo-housed gorillas 10 years of age and over, 51 females and 65 males. Gorillas were assigned into groups based upon health assessment: Group 1 (n = 85) consisted of gorillas that were apparently healthy; Group 2 (n = 9) contained gorillas demonstrating clinical signs consistent with cardiovascular decompensation including but not limited to coughing, lethargy, exercise intolerance, social withdrawal, dyspnea, or grabbing at the chest; and Group 3 (n = 22) contained gorillas that were currently asymptomatic and were undergoing medical management for CVD based upon diagnosis via previous echocardiographic examination. Results of BNP testing from individual animals in these groups were examined through canonical discriminant functions (Fig 1). The means of the assigned groups separated into clear clusters, demonstrating that the three groups were statistically distinct; Group 1 demonstrated low-value clustering, Group 2 high-value clustering, and Group 3 clustering in the middle, though toward the lower range. The descriptive values for age, BNP value, and cardiac measurements for Groups 1, 2 and 3 are listed in Table 1. General linear models were examined using sex and health status as factors with their interaction, and age, which was highly significant across health status, was used as a covariate. Analysis of covariates provides for an age 'adjustment' across the groups and can thereby increase the precision of the model. In each case, worsening health status was significantly correlated with increased BNP values at the p < 0.001 level. To discriminate health status using BNP with echocardiographic variables, the best predictive model utilized BNP, EF and LVPWd, with an 80% jackknifed classification rate. With age added, an overall 87% correct, though biased, classification was obtained, with 90%, 100% and 75% correct in Groups 1, 2, and 3, respectively.
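As a rough, reproducible stand-in for the canonical discriminant analysis with jackknifed classification described above, linear discriminant analysis with leave-one-out cross-validation can be sketched as follows. The feature matrix here is random placeholder data, not the study dataset, and scikit-learn's LDA is a substitute for the SPSS procedure actually used.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data: columns would be log(BNP), EF, LVPWd, age for 116 animals.
rng = np.random.default_rng(0)
X = rng.normal(size=(116, 4))
y = rng.integers(1, 4, size=116)  # health groups 1-3

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
print(f"jackknifed (leave-one-out) classification rate: {acc:.0%}")
```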
Discussion

Elevation of BNP levels correlates with the presence of visible echocardiographic abnormalities, as well as reported clinical signs. Given the BNP ranges present within each assigned health group, it was found that BNP levels from <5.0-70 pg/mL were seen in animals without evidence of CVD, BNP levels of 10-200 pg/mL were associated with animals demonstrating echocardiographic evidence of CVD that were currently being managed by medication, and BNP levels greater than 200 pg/mL were generally found in animals displaying reported clinical signs and echocardiographic evidence of decompensating CVD. Analysis of the data demonstrated that BNP levels increased with worsening cardiovascular health status, including increased IVSd, LVIDd, and LVPWd, as well as with increasing age; BNP values also increased with a decline in EF and cardiac function. Sex was not found to be linked to differences in BNP measurements, contrary to studies in human medicine [11]. It was also noted that while animals receiving medications to manage CVD displayed lower levels of BNP, the animals still displayed echocardiographic changes consistent with CVD. These data mirror reports in human medicine, in which clinical signs are improved and BNP levels are lower in patients under current medical treatment for cardiac disease as compared to newly diagnosed and as yet untreated patients [14]. Interestingly, animals with BNP levels measuring greater than 1000 pg/mL were all deceased within 6 months of sample analysis. To fully understand the varying levels of BNP in various stages of cardiac disease, longitudinal studies of individuals would need to be evaluated. One gorilla not included in the analysis, with a history of severe renal dysfunction without clinical CVD, had a dramatic elevation in BNP levels, which may correlate to the finding that increased intravascular volume results in increased secretion of BNP as a result of either cardiac decompensation or renal dysfunction [13]. While animals may present with nonspecific clinical signs of illness and an elevated BNP level, this emphasizes the need for additional diagnostic testing. The clinical signs noted at the presentation of ill animals (lethargy, shortness of breath, pressing on the thoracic area with the palms or fingers, anorexia, weight loss, coughing, gastroesophageal reflux or regurgitation), while not pathognomonic for cardiac disease, have many similarities with clinical signs reported by human patients presenting with cardiac disease. Although this study did not aim to evaluate or classify signs of illness, this provides a useful guide to the veterinary clinician.

(Table 1. Age, BNP level, diastolic interventricular septal wall thickness (IVSd), left ventricular internal diameter in diastole (LVIDd), left ventricular diastolic posterior wall thickness (LVPWd), and ejection fraction (EF) measurements for Groups 1, 2, and 3.)

Levels of BNP should be taken into consideration with echocardiographic examination, electrocardiograms, dietary evaluation, blood parameters, radiography, and environmental settings to obtain a complete picture of cardiovascular health, disease severity and progression, allowing for treatment modifications [4]. Our data demonstrate that a BNP level over 150 pg/mL in a gorilla with normal renal function should alert the clinician that there is significant myocardial strain and volume overload, warranting medical evaluation and intervention.

Supporting information

S1 Fig. BNP for all study gorillas.
Supporting information

S1 Fig. BNP for all study gorillas. Brain natriuretic peptide (pg/mL) histogram plots with log transformation applied for a) all gorillas included in the study (n = 116); b) gorillas assigned a health status of "1" (n = 85); c) gorillas assigned a health status of "2" (n = 9); and d) gorillas assigned a health status of "3" (n = 22).
Therapeutic drug monitoring on the use of transplacental digoxin in fetal tachyarrhythmia: a case report

Fetal tachycardia (FT) is a rare disorder and is associated with significant fetal mortality. Digoxin is one of the antiarrhythmic agents used to treat FT via transplacental therapy. In this report, we describe a therapeutic drug monitoring (TDM) case of digoxin during the treatment of FT. A 40-year-old woman, gravida 2 para 1, was hospitalized to control FT when the fetal heart rate (FHR) exceeded 200 bpm on ultrasonography at 29 weeks of gestation. She did not have any medical or medication history and showed normal electrolyte levels on clinical laboratory testing. For the treatment of FT, loading and maintenance doses of intravenous digoxin (loading dose: 0.6 mg; maintenance dose: 0.3 mg every 8 hours) were administered. To monitor the efficacy and safety of the treatment, TDM was conducted with a target maternal serum trough digoxin concentration of 1.0 to 2.0 ng/mL, alongside ultrasonography and maternal electrocardiography. The observed digoxin serum concentrations were 0.67, 0.83, and 1.05 ng/mL at 1, 2, and 5 days after the initiation of digoxin therapy, respectively. Although the serum digoxin concentrations reached the target range, the FHR did not improve. Therefore, digoxin was discontinued, and oral flecainide therapy was started. The FHR adjusted to the normal range within 2 days of changing treatment and remained stable. TDM of digoxin, along with the monitoring of clinical responses, can give valuable information for decision-making during the treatment of FT.

INTRODUCTION

The prevalence of sustained fetal tachyarrhythmias (FT) is 1 per 1,000 pregnancies [1]. Sustained FT may cause fetal heart failure, hydrops fetalis, and even death [1][2][3]. Transplacental medical interventions have been applied for nearly 40 years to control fetal rhythm and achieve conversion to sinus rhythm [2]. Digoxin is a commonly used antiarrhythmic drug, as it has been used for a long time and has a well-characterized safety profile for the mother and fetus [3][4][5]. Digoxin is a lipophilic, low-molecular-weight drug that can rapidly cross the placenta and reach equilibrium [6]. Reports indicate fetal serum concentrations of 60% to 90% of maternal serum levels, or 11% to 26% in cases of hydrops fetalis [6,7]. Flecainide, sotalol, and amiodarone are generally used as other therapeutic choices [5,8]. Digoxin and flecainide are preferred in cases of fetal supraventricular tachycardia, whereas sotalol is preferred in cases of atrial flutter [4,5]. Here, we report the case of a pregnant woman in whom therapeutic drug monitoring (TDM) of digoxin contributed effectively to determining alternative treatment.

CASE REPORT

A 40-year-old woman, gravida 2 para 1, was referred to our hospital at 29 weeks of gestation to manage FT. She had no surgical or medical history. Fetal ultrasound revealed a grossly normal fetus with an estimated fetal weight of 1,671 g and an amniotic fluid index of 11.95 cm. Electronic fetal heart rate (FHR) monitoring revealed cardiomegaly and an FHR of > 200 beats per minute (bpm) sustained over 50% of the monitoring time. Fetal echo in M-mode revealed an atrial rate of up to 500 bpm, with a ventricular rate of 228-236 bpm (A:V = 2:1). Maternal baseline electrocardiogram (ECG) and electrolyte levels were normal. Transplacental digoxin therapy was planned to control the fetal rate.
Digoxin was administered as a 0.6 mg intravenous (IV) loading dose over the first 20 minutes, followed by IV maintenance of 0.3 mg every 8 hours. To monitor the efficacy and safety of the treatment, TDM was conducted with a target maternal serum trough digoxin concentration of 1.0 to 2.0 ng/mL, along with daily ultrasonography and maternal ECG. The observed digoxin serum concentrations were 0.67, 0.83, and 1.05 ng/mL on the first, second, and fifth days after initiation of digoxin therapy, respectively. On the fifth day of digoxin treatment, the FT duration increased, although the serum digoxin concentration had reached the target range (Fig. 1). Therefore, digoxin was discontinued, and oral flecainide therapy was started on the sixth day. Flecainide was orally administered at a fixed dose (100 mg 3 times a day) until cesarean delivery. The FHR adjusted to the normal range within 2 days of changing treatment to flecainide and remained stable. Mild prolongation of the maternal QT interval (< 480 milliseconds) from 360 milliseconds was observed after flecainide therapy, without any symptoms. A cesarean section was performed at 38 weeks and 1 day of gestation. At birth, the baby measured 45 cm in length, weighed 3,800 g, and had Apgar scores of 8 and 9 at 1 and 5 minutes, respectively. His heart rate was 100-130 bpm. Echocardiography revealed an atrial septal defect with a left-to-right shunt (4.1 × 5.6 mm) and a patent ductus arteriosus (5.1 mm).

DISCUSSION

The recommended dosing regimen for flecainide in transplacental therapy for FT is 200-300 mg/day orally, divided into 2 or 3 doses, and can be increased to 450 mg/day to approach the target plasma range between 0.2 and 1 μg/mL [1,5]. The amount and frequency of flecainide dosing in this case (100 mg 3 times a day) corresponded with the recommended dosage regimen. Although digoxin is preferred in transplacental therapy because of its long history of use in the mother and fetus, there are discrepancies regarding the first-choice drug for FT treatment [5]. In a recent systematic review, 4 medications were used as first-line therapy in ten studies: digoxin, flecainide, sotalol, and amiodarone in 54%, 26%, 19%, and 1% of patients, respectively [1]. A recent meta-analysis reported that flecainide was more effective than digoxin concerning the rate of supraventricular tachycardia termination and the incidence of maternal side effects [1,5]. These discrepancies may be a consequence of clinicians' concerns about maternal adverse events and the wide variability of fetal-to-maternal concentration ratios [9]. Because digoxin is a substrate for P-glycoprotein, the concentration ratio may depend on the gestational age of the placenta [9,10]. Maternal and fetal plasma-binding proteins and placental perfusion can also affect the concentration ratio [9]. Therefore, clinicians should weigh the clinical response against maternal digoxin levels when considering alternative management.
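To make the monitoring logic concrete, the following Python sketch (not from the report) encodes the trough-level bookkeeping used in this TDM approach; the 1.0-2.0 ng/mL window comes from the text, while the decision strings are illustrative and not clinical advice.

```python
# Hedged sketch of digoxin TDM decision support; the target window is taken
# from the case report, the recommendation strings are illustrative only.
TARGET_LOW, TARGET_HIGH = 1.0, 2.0  # ng/mL, maternal serum trough

def assess_trough(level_ng_ml: float, fhr_controlled: bool) -> str:
    """Classify a maternal trough level together with the clinical response."""
    if level_ng_ml > TARGET_HIGH:
        return "above target: risk of toxicity, reduce dose and recheck"
    if level_ng_ml < TARGET_LOW:
        return "subtherapeutic: continue titration and recheck"
    if not fhr_controlled:
        return "therapeutic but ineffective: consider an alternative agent"
    return "therapeutic and effective: continue and monitor"

# The troughs observed in this case (days 1, 2, and 5), with FHR uncontrolled:
for day, level in [(1, 0.67), (2, 0.83), (5, 1.05)]:
    print(f"day {day}: {level} ng/mL -> {assess_trough(level, fhr_controlled=False)}")
```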
TDM of digoxin also helps in minimizing side effects and reducing mortality and morbidity. A high serum digoxin level is considered an essential predictor of digoxin toxicity and is the most important predictor of mortality. In a case series examining the relationship of maternal digoxin dosage, tolerance, and side effects with digoxin levels, side effects frequently appeared when the serum digoxin level was > 2 ng/mL [11]. In all cases in which at least one symptom or sign existed, the digoxin level was higher than the therapeutic threshold (2 ng/mL), and all reversed within a maximum of 48 hours after the dose decrease. None of the patients developed side effects with digoxin levels < 2 ng/mL [11]. Our case report indicates the significance of TDM in transplacental digoxin therapy for the appropriate and timely management of FT.

Fig. 1. Fetal heart rate in bpm (blue filled area, left y-axis) and maternal heart rate (red circles, right y-axis). The patient was hospitalized from 29 weeks of gestation and delivered at 38 weeks and 1 day of gestation. bpm, beats per minute.
Green tea catechins EGCG and ECG enhance the fitness and lifespan of Caenorhabditis elegans by complex I inhibition

Green tea catechins are associated with a delay in aging. We designed the current study to investigate the impact, and to unveil the target, of the most abundant green tea catechins, epigallocatechin gallate (EGCG) and epicatechin gallate (ECG). Experiments were performed in Caenorhabditis elegans to analyze cellular metabolism, ROS homeostasis, stress resistance, physical exercise capacity, health- and lifespan, and the underlying signaling pathways. In addition, we examined the impact of EGCG and ECG in isolated murine mitochondria. A concentration of 2.5 μM EGCG and ECG enhanced health- and lifespan as well as stress resistance in C. elegans. Catechins hampered mitochondrial respiration in C. elegans after 6–12 h and the activity of complex I in isolated rodent mitochondria. The impaired mitochondrial respiration was accompanied by a transient drop in ATP production and a temporary increase in ROS levels in C. elegans. After 24 h, mitochondrial respiration and ATP levels were restored, and ROS levels even dropped below control conditions. The lifespan increases induced by EGCG and ECG were dependent on AAK-2/AMPK and SIR-2.1/SIRT1, as well as on PMK-1/p38 MAPK, SKN-1/NRF2, and DAF-16/FOXO. Long-term effects included significantly diminished fat content and enhanced SOD and CAT activities, which were required for the positive impact of catechins on lifespan. In summary, complex I inhibition by EGCG and ECG induced a transient drop in cellular ATP levels and a temporary ROS burst, resulting in SKN-1 and DAF-16 activation. Through adaptive responses, catechins reduced fat content, enhanced ROS defense, and improved healthspan in the long term.

INTRODUCTION

Clinical trials and epidemiological studies have revealed health benefits associated with green tea consumption, including a significant reduction in systolic blood pressure [1] and fasting glucose [2] as well as weight loss in type 2 diabetes patients [3] and in women with central obesity [4]. The most abundant polyphenols in green tea leaves are epigallocatechin gallate (EGCG), epicatechin gallate (ECG), epigallocatechin (EGC), and epicatechin (EC), forming 30-42% of the solid green tea extract [5]. EGCG accounts for roughly 50% and ECG for 20% of the total catechin amount in green tea leaves [6]. A randomized, placebo-controlled clinical trial testing a daily supplementation with 400 mg EGCG confirmed the safety of a one-year administration of EGCG. It further revealed that plasma concentrations of EGCG reached a measurable level after six months [7]. A recent study tested the bioavailability of EGCG combined with various food supplements. After overnight fasting, consumption of 150 mg of green tea extract already resulted in plasma level peaks of 10 ng/ml/kg after 60-180 min [8]. In vivo experiments in various model organisms suggested a beneficial effect of green tea catechins on lifespan due to metabolic adaptation and enhanced resistance to reactive oxygen species (ROS). For instance, dietary supplementation with EGCG-rich green tea extracts (10 mg/ml EGCG) affected glucose metabolism and increased health- and lifespan in Drosophila melanogaster [9]. In addition, green tea polyphenol-containing water (80 mg/l) extended the lifespan of male C57BL/6 mice [10].
Moreover, treatment of C. elegans with EGCG at concentrations of 50-300 µM during early-to-mid adulthood promoted lifespan, and 200 µM EGCG was the most potent dosage to extend lifespan via inducing a mitohormetic response through AMPK/SIRT1 and FOXO [11]. However, the poor bioavailability of green tea catechins in mammals [12,13] makes it unlikely that this concentration is achieved after oral administration in humans. Nevertheless, several independent clinical trials confirmed that green tea consumption improves various health parameters [1][2][3][4]. After administration of a maximum of 4.5 g of decaffeinated green tea solids, the maximum plasma concentrations of EGCG, ECG, and EC reached in total roughly 2.5 µM in humans [14]. Consequently, we tested whether 2.5 µM is still sufficient to promote lifespan by inducing a mitohormetic response in C. elegans.

In this work, we reveal that EGCG and ECG enhance fitness and increase the lifespan of C. elegans already at a concentration of 2.5 µM. This comparably low dosage is sufficient to inhibit mitochondrial respiration chain activity in C. elegans. Experiments in isolated murine liver mitochondria revealed that EGCG and ECG hamper complex I activity. Inhibition of complex I was accompanied by transient ROS formation and an ATP drop after 6 h of EGCG and 12 h of ECG treatment in C. elegans. Lifespan extension of C. elegans by EGCG and ECG proved to be dependent on the presence of the energy sensors AMP-activated kinase AAK-2 and NAD-dependent protein deacetylase SIR-2.1, the homologs of mammalian AMPK and SIRT1, as well as on the ROS-sensing mitogen-activated protein kinase PMK-1, the orthologue of mammalian p38 MAPK, and subsequently on its downstream targets, the protein skinhead-1 (SKN-1), the orthologue of nuclear factor erythroid 2-related factor 2 (NRF2), and DAF-16, the orthologue of a mammalian forkhead transcription factor (FOXO). These data suggest that the energy deficiency due to the transient ATP drop triggers the energy sensors AAK-2 and SIR-2.1 in C. elegans. Moreover, the temporary increase in ROS levels might boost PMK-1 activity and, thereby, the respective signaling cascade, including SKN-1 and DAF-16, in C. elegans. Consistent with the concept of mitohormesis, these signaling pathways provoke an adaptive response by enhancing the activity of the ROS defense enzymes superoxide dismutase (SOD) and catalase (CTL), increasing oxidative stress resistance, health, and lifespan. Moreover, metabolism changed in the long term, causing significantly reduced fat content in C. elegans. Taken together, inhibition of mitochondrial complex I once again proved to be a powerful tool to stimulate lifespan extension pathways.

RESULTS

EGCG and ECG promote lifespan, fitness, and stress resistance when applied at low doses

Oral absorption and absolute bioavailability of green tea catechins are low in mammals [12], reaching total maximum plasma concentrations of 2.5 µM in humans after administration of a maximum of 4.5 g of decaffeinated green tea solids [14]. However, several independent clinical trials reported beneficial effects of EGCG and ECG regarding health parameters [1][2][3][4]. Therefore, we hypothesized that lower concentrations of EGCG and ECG than those studied previously [11] are still effective and improve lifespan and stress resistance in C. elegans.
Indeed, EGCG and ECG applied at a concentration of 2.5 µM were sufficient to significantly extend the mean lifespan (Table 1) of C. elegans from 28.8 ± 0.3 to 30.8 ± 0.1 days (Figure 1A) and from 28.8 ± 0.3 to 30.6 ± 0.3 days (Figure 1B), respectively, corresponding to an extension of 6.9% for EGCG and 6.2% for ECG treatment. The maximum lifespan (Table 1) was extended from 35.7 ± 0.6 to 36.9 ± 0.1 days by EGCG treatment (Figure 1A) and from 35.7 ± 0.6 to 37.1 ± 0.3 days by ECG treatment (Figure 1B), reaching an extension of 3.4% for EGCG and 3.9% for ECG. Next, we tested whether prolonged lifespan also correlates with improved fitness and stress resistance. Locomotion is dependent on functional muscle mass, connective tissue, and neuronal signaling; consequently, motility is a suitable marker for health [15]. EGCG and ECG treatment improved the nematodes' motility after 7 days of incubation (Figure 1C). Moreover, treatment of C. elegans with EGCG (Figure 1D) and ECG (Figure 1E) for 7 days significantly increased stress resistance (Table 2) to the free radical generator paraquat. Consequently, EGCG and ECG enhanced fitness and stress resistance, both crucial parameters for health. (See Table 1 and Table 2 for corresponding detailed data and statistical analyses of the lifespan assays and of the paraquat stress assay, respectively.)

We could confirm that ROS is essential for the lifespan extension provoked by catechins [11], showing that the antioxidant butylated hydroxyanisole (BHA) prevents the life-prolonging effect of EGCG (Figure 2A) and ECG (Figure 2B). Moreover, we found that 25 µM of EGCG and ECG significantly hamper the activity of complex I in murine liver mitochondria (Figure 2C) and the mitochondrial respiration in mitochondria isolated from rat liver (Figure 2D). These findings are in line with the reduced mitochondrial respiration in C. elegans after 6-12 h of treatment with EGCG (Figure 2E) or ECG (Figure 2F). Notably, mitochondrial respiration recovered after 24 h and 120 h of treatment with EGCG (Figure 2E) and ECG (Figure 2F), pointing to compensation of an initially impaired mitochondrial function. The time course of the initial diminution and the subsequent recovery of mitochondrial respiration correlates with ROS levels, which increased significantly after 6 h of EGCG (Figure 2G) and 12 h of ECG (Figure 2H) administration and dropped significantly after 24 h and 120 h of catechin treatment (Figure 2G, 2H).

AMPK and SIRT1 are essential for catechin-induced lifespan extension

Inhibition of complex I reduces the oxidation of NADH to NAD+, which is necessary for the conversion of glyceraldehyde 3-phosphate to 1,3-bisphosphoglycerate during glycolysis. Consequently, reduced levels of NAD+ hamper glycolysis and the production of pyruvate, which enters the Krebs cycle to be converted into water and CO2 [16]. In line with these reports, EGCG reduced the oxidation of radioactively labeled glucose by 20%, as shown by the impaired production of 14C-labeled CO2 (Figure 3A). ECG treatment also tended to reduce glucose turnover; however, the effect remained non-significant (Figure 3A). The time course of metabolic manipulation by EGCG and ECG was also reflected in overall ATP levels. In line with the catechin-induced inhibition of mitochondrial respiration (Figure 2E, 2F) and glycolysis (Figure 3A), overall ATP levels dropped after 6 h of EGCG (Figure 3B) and 12 h of ECG (Figure 3C) treatment in nematodes before recovering after 24 h. A lack of ATP, resulting in a higher AMP to ATP ratio, is well known to activate the AMP-dependent kinase AMPK [17].
The C. elegans homolog of AMPK, AAK-2, is involved in lifespan extension in response to impaired glycolysis [18] and insulin/IGF-1 signaling [19]. Indeed, EGCG (Figure 3D) and ECG (Figure 3E) failed to extend lifespan in aak-2 deficient mutants. Notably, AMPK enhances the activity of the NAD+-dependent type III deacetylase sirtuin 1 by increasing cellular NAD+ levels [20]. In sir-2.1 defective mutants, EGCG (Figure 3F) and ECG (Figure 3G) did not achieve a lifespan extension, proving that EGCG and ECG prolong lifespan in an AMPK- and SIRT1-dependent manner. These findings align with previous reports showing that catechins' lifespan extension depends on AMPK, SIRT1, and FOXO [11].

p38 MAPK, NRF2, and FOXO are required for the lifespan extension induced by catechins

As shown in Figure 2, EGCG and ECG block complex I activity and, thus, induce a transient rise in ROS levels. ROS [21] and AMPK [22] are potential mediators of the p38 MAP kinase pathways. The homolog of the mammalian p38 MAPK, PMK-1, has been identified as a crucial component in the lifespan extension of C. elegans [23,24]. In line with these previous reports, we found that neither EGCG (Figure 4A) nor ECG (Figure 4B) treatment extends lifespan in pmk-1 deficient mutants. Next, we tested whether the transcription factor SKN-1, the worm homolog of NRF2 and a downstream target of PMK-1 under conditions of oxidative stress [25][26][27], is involved in the lifespan extension provoked by catechins. Again, no EGCG-induced (Figure 4C) or ECG-induced (Figure 4D) lifespan extension could be observed in skn-1 mutant worms. DAF-16 is the homolog of a mammalian FOXO and is reported to respond to physical and environmental stress [28]. daf-16 mutant worms are sensitive to oxidative stress and have shortened lifespans. Moreover, DAF-16 can activate or repress the transcription of target genes involved in dauer formation, lifespan, stress resistance, and fat storage of C. elegans [29]. EGCG and ECG decreased the mean lifespan of daf-16 deficient nematodes from 20.1 ± 0.1 to 19.8 days (Figure 4E) and from 20.1 ± 0.1 to 19.4 ± 0.2 days (Figure 4F), respectively. The maximum lifespan was decreased from 22.5 ± 0.3 to 22.2 ± 0.2 days by EGCG treatment (Figure 4E) and from 22.5 ± 0.3 to 21.7 ± 0.6 days by ECG treatment (Figure 4F). These results suggest that DAF-16 is indispensable for EGCG's and ECG's lifespan extension and show that daf-16 deficient nematodes are especially prone to the ROS rise induced by catechins.

EGCG and ECG induce adaptive responses in ROS homeostasis and cellular metabolism

The AMPK/SIRT1 and p38 MAPK/NRF2/FOXO signaling cascades are associated with antioxidant defense mechanisms [30]. The major antioxidant enzymes in C. elegans include five distinct superoxide dismutases, converting superoxide to hydrogen peroxide, and two catalases, which ensure the subsequent conversion of hydrogen peroxide to water [31]. EGCG treatment increased SOD activity after 24 h (Figure 5A) and CTL activity after 7 days (Figure 5B). Meanwhile, ECG treatment did not significantly increase SOD activity (Figure 5A) but increased CTL activity after 24 h and 7 days (Figure 5B). The enhanced activity of SOD and CTL correlates with the subsequent drop of ROS levels after 24 h of EGCG and ECG treatment. Notably, the lifespan-extending effect of EGCG and ECG is dependent on SOD-2 (Figure 5C) and catalase 2 (CTL-2) (Figure 5D). As shown in Figure 3, complex I inhibition by EGCG and ECG was also accompanied by a reduction in glucose oxidation.
In line with this finding, the fat content was significantly lower after 120 h of EGCG or ECG treatment (Figure 5E), pointing to a catechin-induced long-term reprogramming of cellular metabolism.

Figure 2. EGCG and ECG inhibit complex I, which results in a temporary hampering of mitochondrial respiration and a boost in ROS production.

DISCUSSION

Green tea is one of the most widely consumed beverages worldwide [32]. The popularity of green tea makes it crucial to study its impact on health and aging. Although EGCG's and ECG's bioavailability is relatively low [7,8], consuming 4 cups of green tea daily for 8 weeks significantly decreases body weight [33]. Previous reports already described a lifespan extension in C. elegans after treatment with 50-300 µM EGCG [11]. Here, we show that already 2.5 µM of EGCG and ECG, a concentration also potentially achieved after green tea consumption [14], is sufficient to induce an extension of lifespan and to increase stress resistance through adaptational mechanisms. In this mitohormetic response, EGCG and ECG act initially as prooxidants by provoking a ROS rise. Since a transient ROS burst induces antioxidant defense mechanisms, EGCG and ECG display antioxidant properties in the long term. In higher concentrations, EGCG and ECG might show harmful effects due to excessive ROS production. This phenomenon becomes obvious in studies performed on cancer cells. While the antioxidant potential of green tea catechins in low concentrations was suggested as a potential means to prevent tumorigenesis [34,35], higher dosages of catechins might serve as antitumor agents due to the induction of overwhelming ROS formation and apoptosis [36][37][38][39][40][41]. Notably, EGCG was more potent than ECG in human cancer cell lines in inducing cytotoxic effects [33] and inhibiting cancer cell motility [42]. Indeed, it took just 6 h for EGCG, but 12 h for ECG, to affect mitochondrial respiration, ROS, and ATP levels. However, the impact of these compounds was similar when applied in the long term, yielding similar effects on lifespan, motility, and stress resistance.

Besides triggering a mitohormetic response through their effects on transcription factors and enzyme activities, catechins were speculated to exert a direct antioxidant potential by scavenging ROS [43,44]. While a modest increase in the plasma antioxidant capacity following green tea consumption was reported [43], the fraction of structurally intact catechins reaching target tissues is insignificant compared to the antioxidant potential of intracellular glutathione, which reaches levels of 1-11 mM [45][46][47]. Besides, EGCG even induced hydrogen peroxide formation in cell culture and liquid NGM systems [44][45][46]. Moreover, hydrogen peroxide mimicked the effect of EGCG on signaling pathways, while antioxidants abolished the impact of catechins [37,41,[48][49][50]. We could show that BHA prevented the lifespan extension by EGCG and ECG, suggesting that an initial rise in ROS levels is necessary to induce the adaptational mechanisms causing improved antioxidant properties.
Previous studies already revealed increased hydrogen peroxide levels and a dose- and time-dependent decrease in glutathione levels in cell culture models after applying 50 µM of EGCG [43,51]. However, the mechanism by which EGCG and ECG induce ROS formation had not been described so far [11]. In the current study, we revealed that EGCG and ECG inhibit complex I of the ETC. Experiments in rat cerebellar granule neurons have shown that EGCG accumulates specifically in mitochondria, reaching 90-95% mitochondrial accumulation of this polyphenol [52]. This finding aligns well with the plethora of literature describing polyphenols as compounds targeting mitochondria [53,54]. Consequently, we isolated mitochondria to investigate the impact of EGCG and ECG on the complexes of the mitochondrial ETC. Isolated mitochondria are separated from their natural environment and signaling processes, and the isolation process brings the risk of damaging mitochondrial membranes due to shear forces [55]. However, drug uptake by mitochondria is dependent on the integrity of the outer and inner mitochondrial membranes, including the function of transporter proteins and carriers [56]. Since 25 µM of EGCG and ECG were necessary to achieve a significant inhibition of complex I activity in mitochondria isolated from murine liver samples and to hamper mitochondrial respiration in mitochondria isolated from rat liver, we assume that the isolation process affected the integrity of the mitochondrial membranes and, thereby, the mitochondria's potential to take up catechins efficiently. Besides, the isolation of mitochondria yields a relatively homogeneous population of spherical organelles with disorganized cristae and diluted matrix content. These structural alterations affect ETC activity and the mitochondrial respiration rate [57]. We assume that structural changes in cristae organization due to the isolation process might be another reason why 25 µM of EGCG and ECG were necessary to significantly block complex I activity and the mitochondrial respiration rate in isolated mitochondria.

In addition, we show that the temporarily hampered mitochondrial respiration goes along with a transient rise in ROS levels and a brief drop in ATP, triggering signaling pathways associated with lifespan extension in C. elegans. Our findings align with reports on the C. elegans mutant nuo-6(qm200), carrying a mutation in a conserved subunit of mitochondrial complex I (NDUFB4). This specific mutant has reduced complex I function, increased ROS levels [58], and a prolonged lifespan [59]. It was also speculated that blockage of complex I of the mitochondrial electron transport chain delays aging due to slowed embryonic development and larval growth, decreased pumping and defecation rates, or a reduced accumulation of ROS damage [60][61][62]. However, RNAi-induced knockdown of the mitochondrial electron transport chain's complexes at the L3/L4 stage is sufficient to initiate lifespan extension in C. elegans. At this stage, mitochondria are already undergoing a period of dramatic proliferation and massive mitochondrial DNA expansion [63]. Moreover, inhibiting respiratory chain components during adulthood did not provoke lifespan extension anymore [64][65][66]. Consequently, one has to assume that a temporary sub-lethal rise in mitochondrial ROS during early adulthood induces lifespan extension by provoking changes in the homeostasis of proteins [59,67] and metabolism [58].
Notably, glucose restriction by 2-deoxy-D-glucose (2-DG)-mediated inhibition of glycolysis increases the lifespan of C. elegans in a ROS-dependent manner [18], suggesting that the temporary drop in ATP levels due to complex I inhibition is an additional trigger to prolong lifespan. Our data demonstrate that the lifespan extension by EGCG and ECG involves the energy sensors AAK-2/AMPK and SIR-2.1/SIRT1 as well as the ROS-sensing PMK-1/p38 MAPK and the transcription factors SKN-1/NRF2 and DAF-16/FOXO. By activating these signaling cascades, the function of the ROS defense enzymes SOD and CTL and the oxidative stress resistance get boosted. A previous report showed that catechins' lifespan extension depends on AMPK, SIRT1, and FOXO [11]. Ahead of this report, SOD-3, DAF-16, and SKN-1 were already suggested as targets of EGCG due to enhanced expression [68] or translocation into the nucleus after the respective compound treatment [48]. Oxidative stress was reported to stimulate SKN-1's translocation to the nucleus, a process tightly regulated by protein kinases, including PMK-1, GSK-3, MKK-4, IKK epsilon-1, NEKL-2, and PDHK-2 [26]. Notably, SKN-1 activation in neurons is necessary for dietary restriction-mediated lifespan extension [69]. Moreover, reduced insulin/IGF-1 signaling causes nuclear accumulation of SKN-1, a process needed in long-lived daf-2 mutants with increased stress resistance and lifespan [19,70]. DAF-16, the orthologue of mammalian FOXO, is a crucial regulator of longevity, metabolism, and dauer diapause in C. elegans [28,29,71,72]. Consequently, it seems reasonable that the ROS-sensing p38 MAPK and the energy-sensing AMPK activate the respective signaling cascades after blockage of complex I by EGCG and ECG. Reports showed that AMPK activates p38 MAPK [73]. Consequently, these two kinases might even augment each other's activity and the potential of the respective signaling cascade.

The long-term effects also included a reduced fat content in C. elegans after 5 days of catechin treatment. In line with this finding, inhibition of complex I and complex IV by rotenone and NaN3 reduced lipid accumulation in 3T3-L1 cells [74]. Moreover, a previous report revealed a reduced body fat content in C. elegans after catechin treatment [75]. Besides, green tea catechins were associated with reduced obesity in zebrafish [76], mice [77], rats [78,79], and humans [80,81], suggesting a catechin-induced metabolic remodeling. Clinical trials have already confirmed the safety of EGCG [7] and highlighted its potential in counteracting age-related cardiovascular and metabolic diseases [1][2][3][4]. Experiments in rodents studying physical and clinical parameters over time, as well as further clinical trials, are required to identify the best timing and dosage for administering catechins. Finally, such studies might characterize additional effects and downstream mechanisms of complex I inhibition. Despite the promising results obtained in animal experiments, the low bioavailability of EGCG [7] still raises the question of whether green tea catechins can reliably provoke beneficial effects in humans. Consequently, additional efforts might be needed to identify complex I inhibitors with increased bioavailability.

CONCLUSIONS

We conclude that applying the green tea catechins EGCG and ECG at a low dose extends the lifespan of C. elegans via inducing a mitohormetic response.
Thereby, the inhibition of complex I causes a transient ROS rise that stimulates the antioxidant defense enzymes SOD and CTL and activates the PMK-1/SKN-1/DAF-16 pathway (Figure 6, Scheme). Besides, complex I inhibition causes a temporary drop in cellular ATP levels and consequently the activation of AAK-2/SIR-2.1 signaling. In the long term, the re-wiring of these energy- and ROS-dependent pathways reduces the fat content and extends health- and lifespan.

MATERIALS AND METHODS

Nematode strains and maintenance

C. elegans strains used in the current study were obtained from the Caenorhabditis Genetics Center (CGC, University of Minnesota). Nematodes were grown and maintained at 20°C in 10 cm Petri dishes on nematode growth media (NGM), with Escherichia coli (E. coli) OP50 bacteria as the food source, as previously described [18,82,83]. The strains used in this study included the following: N2 (wild type), GA184 sod-2(ok257), and VC754 ctl-2(ok1137).

Compound treatment

EGCG, ECG, and BHA were dissolved in DMSO, reaching stock concentrations of 2.5 mM for EGCG and ECG and 10 mM for BHA. The NGM agar solution was autoclaved and subsequently cooled to 55°C before supplements and compounds (EGCG, ECG, BHA, or DMSO) were added under continuous stirring. The final concentration of the compounds was calculated with respect to the volume of agar, and the same volume of DMSO was added to control plates. NGM agar plates were supplemented with 100 µg/ml ampicillin to induce metabolic inactivity in E. coli. Agar plates were poured and dried, sealed with parafilm, and stored at 4°C. Before experiments, NGM plates were spotted with a bacterial lawn of heat-inactivated bacteria (OP50 HIT) to avoid interference by a potential xenobiotic-metabolizing activity of E. coli. To exclude any effects on development, the incubation period with compounds started at the L4 stage by transferring nematodes to the respective NGM plates [84]. To analyze oxygen consumption rate, glucose oxidation, ATP levels, enzyme activity, and fat content, adult worms at the L4 stage were transferred onto NGM agar plates containing 25 µM 5-fluoro-2′-deoxyuridine (Sigma-Aldrich, St. Louis, MO, USA) to prevent progeny formation. After 16 h, we transferred the animals to the respective treatment groups and harvested them at the indicated time points [18].

Lifespan analyses

All lifespan assays were performed at 20°C according to standard protocols, as previously described [18,19]. Briefly, the C. elegans population was synchronized with hypochlorite/NaOH solution, except for skn-1 mutant worms. Eggs from heterozygous skn-1 hermaphrodites were harvested after overnight egg-laying without applying hypochlorite/NaOH solution, to increase the yield of viable larvae [85]. Eggs of nematodes were transferred to NGM plates with fresh OP50 bacteria to allow hatching and development. After approximately 64 h, at the L4 stage, we moved 200 nematodes manually to freshly prepared NGM plates containing the respective compounds and supplied them with a lawn of OP50 HIT. During the first 10-14 days, nematodes were transferred to freshly prepared NGM treatment plates every day, and later every second day. Nematodes without any reaction to gentle stimulation were classified as dead. Nematodes that crawled off the plate or suffered a non-natural death, such as internal hatching, were censored and excluded from the statistics on the day of premature death. Notably, for lifespan analyses using BHA, nematodes were propagated on BHA-containing NGM plates for four generations before synchronization; the same applied to the respective DMSO controls.
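As a minimal illustration of this survival bookkeeping, the Python sketch below (assuming a hypothetical per-worm table) fits a Kaplan-Meier curve with right-censoring for escaped or internally hatched worms and runs the log-rank comparison mentioned under Statistical analyses; the file name, column names, and group labels are assumptions, and the lifelines package stands in for whatever software was actually used.

```python
# A minimal sketch (hypothetical data file and column names) of the survival
# analysis described above: right-censoring for escaped or internally hatched
# worms and a log-rank comparison between groups, using the lifelines package.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("lifespan.csv")   # columns: day, dead (1=dead, 0=censored), group
ctrl = df[df["group"] == "DMSO"]
trt = df[df["group"] == "EGCG"]

km = KaplanMeierFitter()
km.fit(trt["day"], event_observed=trt["dead"], label="2.5 uM EGCG")
print("median lifespan:", km.median_survival_time_)

res = logrank_test(ctrl["day"], trt["day"],
                   event_observed_A=ctrl["dead"], event_observed_B=trt["dead"])
print(f"log-rank p = {res.p_value:.3g}")
```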
Locomotion assay

Following the L4 stage, nematodes were treated with 0.1% DMSO, 2.5 µM EGCG, or 2.5 µM ECG for 7 days. Afterward, we transferred single worms into S-buffer containing 0.01% Triton X-100 to wash off bacteria and then pipetted them onto a glass slide. The movements of single worms within the liquid system were recorded for 20 seconds by a digital CCD camera (Moticam 2300, Motic, St. Ingbert, Germany) coupled to a microscope (SMZ 168, Motic, St. Ingbert, Germany) equipped with Motic Images Plus 2. We analyzed the videos using the DanioTrack software (Loligo Systems, Tjele, Denmark), subtracting the background and determining the center of gravity of all object pixels relative to the background. As described previously, the shift distance of the center was calculated cumulatively and normalized per second [84].

Paraquat stress resistance assay

Resistance to lethal oxidative stress by paraquat (Sigma-Aldrich, Munich, Germany) was assessed as previously described [18,19]. Briefly, worms were treated with 0.1% DMSO, 2.5 µM EGCG, or 2.5 µM ECG for 7 days after the L4 stage. Afterward, we transferred the worms into 96-well plates: 6 nematodes in 100 µl of S-buffer containing freshly dissolved 50 mM paraquat. Dead worms were scored every hour until all control worms were dead.

Basal oxygen consumption rate

Mitochondrial respiration was quantified using a DW1/AD Clark-type electrode (Hansatech, King's Lynn, England) as previously described [18]. Briefly, we treated worms with 0.1% DMSO, 2.5 µM EGCG, or 2.5 µM ECG for the indicated periods, then washed them off the respective NGM plates with S-buffer and allowed them to settle by gravitation to remove offspring and bacteria. Worms were additionally washed twice with S-buffer and transferred into the DW1 chamber to monitor oxygen consumption for 10 min. Afterward, we collected the worms for Bradford protein determination [86].

ROS quantification

Before the ROS measurement, MitoTracker Red CM-H2X ROS (Invitrogen, Carlsbad, CA, USA) incubation plates were prepared as previously described [19]. Briefly, we treated worms with 0.1% DMSO, 2.5 µM EGCG, or 2.5 µM ECG for the indicated periods, then washed them off the respective NGM plates and allowed them to settle by gravitation to remove offspring and bacteria. Worms were additionally washed twice with S-buffer and transferred to freshly prepared MitoTracker Red CM-H2X incubation NGM plates containing 500 µl of OP50 HIT mixed with 100 µl of freshly prepared MitoTracker Red CM-H2X stock solution (100 µM). After 2 h at 20°C, worms were washed off the MitoTracker Red CM-H2X incubation NGM plates and transferred to NGM agar plates with 0.1% DMSO, 2.5 µM EGCG, or 2.5 µM ECG for 1 h to remove excess dye from the gut. Aliquots of 100 µl of worm suspension in S-buffer were distributed into a 96-well FLUOTRAC™ plate (Greiner Bio-One, Frickenhausen, Germany). Fluorescence intensity was measured on a microplate reader (FLUOstar Optima, BMG Labtech, Offenburg, Germany) using well-scanning mode (ex: 570 nm; em: 610 nm). We collected worms from the plates for Bradford protein determination [86].
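The readout of this assay reduces to background-corrected fluorescence normalized to protein content; the short sketch below illustrates that normalization with invented numbers.

```python
# A minimal sketch of the normalization implied above: background-corrected
# MitoTracker fluorescence divided by Bradford protein content per sample.
# All numbers are invented for illustration.
import numpy as np

raw = np.array([8200.0, 7900.0, 8500.0])    # fluorescence counts, sample wells
blank = np.array([1100.0, 1050.0, 1120.0])  # S-buffer-only wells (background)
protein_ug = np.array([42.0, 39.5, 44.1])   # Bradford protein per well (ug)

ros = (raw - blank.mean()) / protein_ug     # fluorescence per ug protein
print(f"ROS signal: {ros.mean():.1f} +/- {ros.std(ddof=1):.1f} per ug protein")
```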
Glucose oxidation assay

[14C] D-glucose oxidation rates were determined as described previously [87]. Uniformly labeled [14C] D-glucose was purchased from PerkinElmer; the specific activity of the batch used was 300 mCi/mmol. We placed equal numbers of nematodes on NGM plates containing 0.1% DMSO, 2.5 µM EGCG, or 2.5 µM ECG for the indicated period. After collection and two subsequent washes in S-buffer, the worm pellets were resuspended in the incubation buffer. 700 µl of the suspension was transferred to a 4 cm Petri dish. The latter was placed in a 10 cm Petri dish together with a second 4 cm Petri dish containing 600 µl of 0.1 M KOH solution to trap CO2, as described previously [18,88]. Hence, each 10 cm dish was equipped with two 4 cm dishes, one carrying the nematodes and the other containing KOH. We added labeled glucose to a final concentration of 17 µM U-[14C] D-glucose (5 µCi/ml) in the nematode suspension as a substrate. We added nonradioactive glucose to each sample to reach a final concentration of 0.5 mM. The 10 cm Petri dishes were covered, sealed with parafilm in an air-tight manner, and incubated at 20°C for 3 h. Subsequently, an aliquot of 500 µl of KOH was immersed in 4.5 ml of scintillation fluid and placed in a liquid scintillation counter (Beckmann LS 6000, Global Medical Instrumentation, Inc.) to quantify the amount of trapped 14CO2. We normalized the 14CO2 signals to the protein content of the incubated worms.

ATP quantification

We treated nematodes with 0.1% DMSO, 2.5 µM EGCG, or 2.5 µM ECG for the indicated time. After collection and washing with S-buffer twice, the worm pellets were shock-frozen in liquid nitrogen and ground in a nitrogen-chilled mortar. The ground samples were boiled with 4 M guanidine-HCl at 99°C for 15 min to destroy ATPase activity [58,89]. Precipitated proteins were separated by centrifugation (30 min, 13200 g at 4°C), and the supernatant was analyzed for ATP content using CellTiter-Glo (Promega, Fitchburg, WI, USA) according to the manufacturer's instructions. ATP values were normalized to the protein content determined by the Bradford assay [86].

Activity assays for catalase (CTL) and superoxide dismutase (SOD)

After treating nematodes with 0.1% DMSO, 2.5 µM EGCG, or 2.5 µM ECG for the indicated period, the respective enzyme activities were determined by standard photometric assays as previously described [18,19,84]. Briefly, CTL activity was estimated by the production of formaldehyde due to the enzyme's reaction with methanol in the presence of an optimal concentration of H2O2. The formaldehyde produced was determined spectrophotometrically with 4-amino-3-hydrazino-5-mercapto-1,2,4-triazole (Purpald, Applichem, Darmstadt, Germany). We measured SOD activity photometrically with a tetrazolium salt, which forms a water-soluble formazan dye upon reduction by a superoxide anion.

Fat content analysis

We determined the fat content by applying a triglyceride determination kit (Roche, Mannheim, Germany) as previously described [18,88] and normalized it to the protein content determined by the Bradford assay [86]. Briefly, worms were incubated with 0.1% DMSO, 2.5 µM EGCG, or 2.5 µM ECG for 5 days, washed, and shock-frozen in liquid nitrogen. Afterward, the worm pellets were ground in a nitrogen-chilled mortar with Milli-Q water supplemented with 5% Triton X-100 and sonicated 3 times. We centrifuged 200 µl of the homogenized extract and collected the supernatant for protein determination. 400 µl of lysate was heated to 80°C for 5 min and then cooled down to room temperature. The heating was repeated once to dissolve all triglycerides.
After heating and cooling, the lysate was centrifuged at 12000 g for 10 min, and we collected the supernatant for triglyceride determination according to the manufacturer's protocol.

Quantification of complex I activity in mitochondria from murine liver

We measured the activity of complex I spectrophotometrically at 600 nm in 1 ml of 25 mM potassium phosphate buffer containing 3.5 g/L BSA, 60 µM 2,6-dichloroindophenol (DCIP), 70 µM decylubiquinone, 1.0 µM antimycin A, and 0.2 mM NADH, adjusted to pH 7.8 [90]. Decylubiquinone and antimycin A were dissolved in DMSO as 17.5 mM and 1.0 mM stock solutions, respectively. DCIP and NADH were both dissolved in water as 10 mM stock solutions. The BSA stock solution was 70 g/L in 5 mM potassium phosphate buffer, pH 7.4. Mouse liver mitochondria stocks contained 10 µg/µl in 10 mM Tris (pH 7.6) and were stored at −80°C. After being thawed, 30 µl of mitochondria was treated with 470 µl of 10 mM Tris-Cl, pH 7.6, to disrupt the mitochondrial membrane. Subsequently, 20 µl of mitochondrial fragments was preincubated in a 960 µl incubation mixture without NADH for 3 min. After 3 min, we added 20 µl of 10 mM NADH to the incubation mixture and measured the absorbance at 20 s intervals for 2 min. 2 min later, 1 µl of DMSO, EGCG, or ECG was added to the incubation mixture as quickly as possible, and the absorbance was measured again at 20 s intervals for 4 min. The effect of the compounds on complex I activity was expressed as the ratio of the slopes of the decreasing absorbance before and after compound addition.
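For illustration, the slope-ratio readout just described might be computed as in the following Python sketch, with linear fits to synthetic absorbance traces recorded before and after compound addition.

```python
# A minimal sketch of the slope-ratio readout described above: linear fits to
# the DCIP absorbance trace before (2 min) and after (4 min) compound addition;
# residual complex I activity = slope_after / slope_before. The absorbance
# values are synthetic examples.
import numpy as np

t_pre = np.arange(0, 120, 20)    # s, readings every 20 s for 2 min
a_pre = 1.00 - 0.0008 * t_pre    # absorbance at 600 nm, decreasing

t_post = np.arange(0, 240, 20)   # s, readings every 20 s for 4 min
a_post = 0.90 - 0.0005 * t_post

slope_pre = np.polyfit(t_pre, a_pre, 1)[0]
slope_post = np.polyfit(t_post, a_post, 1)[0]
print(f"residual complex I activity: {slope_post / slope_pre:.0%}")
```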
Isolation of mitochondria from murine liver

The isolation of mitochondria from rat liver was done according to Frezza's protocol [91], except for the homogenization, which was performed using a glass Dounce tissue homogenizer (Wheaton, VWR, Darmstadt, Germany). Briefly, rodents were fasted overnight and killed by cervical dislocation. The liver was rapidly explanted, immersed, and sliced in the isolation buffer containing 200 mM sucrose, 1 mM EGTA/Tris, and 10 mM Tris/MOPS, pH 7.4. The washed liver fragments were placed into a tube with around 25 ml of isolation buffer. The loose-fitting pestle was inserted, pressed down, and lifted four times, and then the tight-fitting pestle was applied in the same way twice. The mixture was poured into a 50 ml polypropylene Falcon tube and centrifuged at 600 g for 10 min at 4°C. We carefully removed the fat on top of the supernatant using tissue paper. The supernatant was transferred to a second polypropylene Falcon tube and centrifuged at 7000 g for 10 min at 4°C. Afterward, the fat was removed, the supernatant was discarded, and the mitochondrial pellet was resuspended in the remaining buffer. The suspension containing mitochondria was centrifuged again at 7000 g for 10 min at 4°C. The supernatant was removed entirely, and the mitochondrial pellet was resuspended in 200 µl of isolation buffer as described above. The concentration of isolated mitochondria was determined with the Bradford assay [86].

Quantification of oxygen consumption rate in murine liver mitochondria

Mitochondrial respiration was quantified using a DW1/AD Clark-type electrode (Hansatech, King's Lynn, England) at 30°C in 1 ml of experiment buffer containing 125 mM KCl, 10 mM Tris/MOPS, 0.1 mM EGTA/Tris, and 1 mM KH2PO4, pH 7.4, as previously described [91]. 5 mM glutamate and 2.5 mM malate were supplied as substrates for complexes I, III, and IV. After recording basal respiration for 2 min, 0.1% DMSO, 25 µM EGCG, or 25 µM ECG and subsequently 100 µM ADP were added. After the ADP was completely consumed and the oxygen consumption rate had slowed down, 5 mM succinate and ADP were added to study complex II, III, and IV activity. At the end of each measurement, 60 nM FCCP was supplied to check the viability of the mitochondria.

Statistical analyses

Data are expressed as means ± SD unless otherwise indicated. Statistical analyses for all data except lifespan assays and stress resistance assays were performed by Student's t-test after testing for equal distribution of the data and equal variances within the data set. Statistical calculations for lifespan and stress resistance assays were carried out using the log-rank test to compare the survival distributions of the different groups. We performed all analyses using Microsoft Office Excel 2016 (Microsoft, Albuquerque, NM, USA). Differences were considered statistically significant at p < 0.05 and are presented as specific p-values (* p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001).

AUTHOR CONTRIBUTIONS

J.T., C.G., and C.T.M. performed experiments, analyzed, and visualized the data. C.T.M. and M.R. wrote the manuscript. Funding was acquired by M.R. and C.T.M. All authors have read and agreed to the published version of the manuscript.
Correlated $\pi\pi$ and $K\bar K$ exchange in the baryon-baryon interaction

A dynamical model for correlated two-pion and two-kaon exchange in the baryon-baryon interaction is presented, both in the scalar-isoscalar ($\sigma$) and the vector-isovector ($\rho$) channel. The correlations between the two pseudoscalar mesons are taken into account by means of $\pi\pi - K\bar K$ amplitudes derived from a meson-exchange model, which is in line with the empirical $\pi\pi$ data. It is found that correlated $K\bar K$ exchange plays an important role in the $\sigma$-channel for baryon-baryon states with non-vanishing strangeness. The strength of correlated $\pi\pi$ plus $K\bar K$ exchange in the $\sigma$-channel decreases as the strangeness of the baryon-baryon system becomes more negative. The results for correlated $\pi\pi$ exchange in the vector-isovector channel deviate from what is expected in the naive SU(3) picture for genuine $\rho$-exchange. Shortcomings of a simplified description in terms of sharp-mass $\sigma$- and $\rho$-exchange are pointed out.

Introduction

The study of the role of strangeness degrees of freedom in low-energy nuclear physics is of high current interest, since it should lead to a deeper understanding of the relevant strong-interaction mechanisms in the non-perturbative regime of QCD. For example, the system of a strange baryon (hyperon Y) and a nucleon (N) is in principle an ideal testing ground to investigate the importance of SU(3) flavor symmetry for the hadronic interactions. This symmetry is obviously broken already by the different masses of hadrons sitting in the same multiplet. However, the important question arises whether (on the level of hadrons) it is broken not only kinematically but also dynamically, e.g. in the values of the coupling constants at the hadronic vertices. The answer cannot be given at the moment, since the present empirical information about the YN interaction is too scarce and thus prevents any definite conclusions. Hopefully the situation will be improved by experiments on elastic Σ±p, Λp, and even Ξp scattering currently performed at KEK [1,2]. Existing meson-exchange models of the YN interaction assume for the hadronic coupling constants at least SU(3) symmetry, and in the case of models A and B of the Jülich group [3] even SU(6) symmetry of the static quark model. This symmetry requirement provides relations between the coupling constants of a meson multiplet to the baryon current, which strongly reduce the number of free model parameters. Specifically, coupling constants at the strange vertices are then connected to nucleon-nucleon-meson coupling constants, which in turn are fixed within close boundaries by the wealth of empirical NN scattering information. All YN interaction models can reproduce the existing empirical YN scattering data. Therefore, at present the assumption of SU(3) symmetry for the coupling constants is not in conflict with experiment. However, the treatment of the scalar-isoscalar meson sector, which provides the intermediate-range baryon-baryon interaction, has so far not been conceptually very convincing. The one-boson-exchange models of the Nijmegen group start from the existence of a broad scalar-isoscalar ππ resonance (the ǫ-meson, with m_ǫ = 760 MeV, Γ_ǫ = 640 MeV), which is hidden in experiment (e.g. πN → ππN) under the strong ρ0 signal and can therefore not be identified reliably.
For practical reasons the exchange of this broad ǫ-meson is then approximated by the exchange of a sum of two mesons with sharp masses m_1 and m_2; the smaller mass is around 500 MeV and thus corresponds to the phenomenological σ-meson in conventional OBE models. The ǫ-meson is then treated as an SU(3) singlet (model D [4]) or as a member of a full nonet of scalar mesons (model F [5] and NSC [6]). In the SU(3) framework the ǫ coupling strength is equal for all baryons in model D, while it depends on four open SU(3) parameters in models F and NSC, which are adjusted to NN and YN scattering data. In the latest Nijmegen model NSC [6] the scalar meson nonet includes, apart from the ǫ, the isoscalar f_0(975), the isovector a_0(980), and the strange mesons κ, which the authors identify [7] with the scalar q²q̄² states predicted by the MIT bag model [8]. This interpretation is however doubtful, at least for the f_0(975) and a_0(980). According to a recent theoretical analysis of the ππ, πη, and KK system [9] in the meson-exchange framework, the f_0(975) is a KK molecule bound in the ππ continuum, while the a_0(980) is dynamically generated by the KK threshold. Thus both mesons do not appear to be genuine quark-model resonances, with the consequence that SU(3) relations should not be applied to these mesons. (The non-strange members of the scalar nonet are expected to lie at higher energies.) In the Bonn potential [10] the intermediate-range attraction is provided by uncorrelated (Fig. 1a,b) and correlated (Fig. 1c) ππ exchange processes with NN, N∆ and ∆∆ intermediate states. It is known from the study of the ππ interaction that ππ correlations are important mainly in the scalar-isoscalar and vector-isovector channels. The Bonn potential includes such correlations, however only in a rough way, namely in terms of sharp-mass σ′ and ρ exchange. One disadvantage of such a simplified treatment is that this parametrization cannot be transported into the hyperon sector in a well-defined way. Therefore, in the YN interaction models of the Jülich group [3], which start from the Bonn NN potential, the coupling constants of the fictitious σ′-meson at the strange vertices (ΛΛσ′, ΣΣσ′) are essentially free parameters. In view of the little empirical information about the YN interaction this feature is not satisfactory. This is especially true for an extension of the YN models to baryon-baryon channels with strangeness S = −2. So far there is no empirical information about these channels (apart from some data on Ξ- and ΛΛ-hypernuclei). Still, there is large interest in these channels, initiated by the prediction of the H-dibaryon by Jaffe [11]. The H-dibaryon is a deeply bound 6-quark state with the same quark content as the ΛΛ system (uuddss) and with 1S0 quantum numbers. For the experimental search it is important to know whether conventional deuteron-like ΛΛ states exist. An analysis of possible S = −2 bound states in the meson-exchange framework could provide valuable information in this regard, but it requires a coupled-channels treatment of the ΛΛ, ΣΣ, and NΞ channels. An extension of the Jülich YN models to those channels is only of minor predictive power, since the strength of the important ΞΞσ′ vertex is completely undetermined and cannot be fixed by empirical data. These problems can be overcome by an explicit evaluation of correlated ππ exchange processes in the various baryon-baryon channels. A corresponding calculation has been done for the NN case (Fig. 1c) [12].
The starting point was a field-theoretic model for both the NN → ππ Born amplitudes and the ππ-KK interaction [13]. With the help of unitarity and dispersion relations, the amplitude for correlated ππ exchange in the NN interaction has been determined, showing characteristic differences from the σ′ and ρ exchange of the (full) Bonn potential. For the correct description of the ππ interaction in the scalar-isoscalar channel the coupling to the KK channel is essential, which is obvious from the interpretation of the f_0(975) as a KK bound state. Apart from the ππ-KK interaction model, the KK channel is not considered in Ref. [12]; i.e., the coupling of the kaon to the nucleon is not taken into account. In fact, this approximation is justified in the NN system [14]; it is, however, not expected to work in channels involving hyperons. The aim of the present paper is a microscopic derivation of correlated ππ as well as KK exchange processes in the various baryon-baryon channels with S = 0, −1, −2 (Fig. 2). The KK channel is treated on an equal footing with the ππ channel in order to determine reliably the influence of KK correlations. Our results replace the phenomenological σ′ and ρ exchange in the Bonn NN and Jülich YN models by correlated processes and in this way eliminate undetermined model parameters (e.g. σ′ coupling constants). Corresponding interaction models thus have more predictive power and should make a sensible treatment of the S = −2 baryon-baryon channels possible. The formal treatment is similar to that of Refs. [12,15,16] dealing with correlated ππ exchange in the NN interaction. Due to the inclusion of the KK channel and of different baryon masses (e.g. in the NΛ channel), generalizations are however required in some places. The starting point is a field-theoretic model for the baryon-antibaryon (BB′) → ππ, KK Born amplitudes in the J^P = 0^+, 1^- channels. Besides various baryon-exchange terms, the model includes, in complete consistency with the ππ-KK interaction model [13,17], also a ρ-pole term (cf. Fig. 3). These Born amplitudes are analytically continued into the pseudophysical region below the BB′ threshold. The solution of a covariant scattering equation with full inclusion of ππ-KK correlations yields the BB′ → ππ, KK amplitudes in the pseudophysical region. In the NN → ππ channel these amplitudes are then adjusted to quasiempirical information [18,19], which has been obtained by analytic continuation of πN and ππ data. With the assumption of SU(6) symmetry for the coupling constants, a parameter-free description of the other particle channels can then be achieved. Via unitarity relations, the products of the BB′ → ππ, KK amplitudes fix the singularity structure of the baryon-baryon amplitudes for ππ and KK exchange. Assuming analyticity for the amplitudes, dispersion relations can be formulated for the baryon-baryon amplitudes, which connect the physical amplitudes in the s-channel with the singularities and discontinuities of these amplitudes in the pseudophysical region of the t-channel processes. With a suitable subtraction of uncorrelated contributions, which are calculated directly in the s-channel and are therefore guaranteed to have the correct energy behavior, we finally obtain the amplitudes for correlated ππ and KK exchange in the baryon-baryon system. In the next chapter we describe the underlying formalism which is used to derive correlated ππ and KK exchange potentials for the baryon-baryon amplitudes.
Furthermore we present our microscopic model for the required BB′ → ππ, KK amplitudes. Sect. 3 contains our results and also a comparison with those obtained from other models. The paper ends with some concluding remarks.

Kinematics and amplitudes

The kinematics of a two-body scattering process A + B → C + D (cf. Fig. 4) is uniquely determined by the 4-momenta p_A, p_B, p_C, p_D of the particles. Taking into account the on-mass-shell relations (p_X^2 = M_X^2, X = A, ..., D) and the conservation of the total 4-momentum (p_A + p_B = p_C + p_D), only two independent Lorentz scalars can be built out of these momenta. For these Lorentz scalars one usually introduces the three Mandelstam variables

s = (p_A + p_B)^2 = (p_C + p_D)^2,  t = (p_A − p_C)^2 = (p_D − p_B)^2,  u = (p_A − p_D)^2 = (p_C − p_B)^2,

which are related by

s + t + u = M_A^2 + M_B^2 + M_C^2 + M_D^2.

By crossing, the scattering process A + B → C + D is closely related to two other processes, as indicated in Fig. 4: the t-channel process A + C → B + D and the u-channel process A + D → B + C. Here, the channels are named according to the Mandelstam variable which denotes the squared total energy in the center-of-mass (c.m.) system. For the s-channel process the particle 4-momenta in the c.m. system read

p_A = (E_A, p_s),  p_B = (E_B, −p_s),  p_C = (E_C, q_s),  p_D = (E_D, −q_s),

with E_X = √(M_X^2 + p_X^2). The moduli of the relative momenta p_s (q_s) of the initial (final) state can be expressed in terms of s:

p_s^2 = [s − (M_A + M_B)^2][s − (M_A − M_B)^2]/(4s),  q_s^2 = [s − (M_C + M_D)^2][s − (M_C − M_D)^2]/(4s).    (5)

The scattering angle ϑ_s = ∠(p_s, q_s) is related to the Mandelstam variables by

cos ϑ_s = [s(t − u) + (M_A^2 − M_B^2)(M_C^2 − M_D^2)]/(4 s p_s q_s).

For the t-channel process A + C → B + D the c.m. 4-momenta of the particles read analogously, with relative momenta p_t and q_t of the initial and final state. The analogue of Eqs. 5 for the moduli p_t, q_t and the scattering angle ϑ_t = ∠(p_t, q_t) now reads

p_t^2 = [t − (M_A + M_C)^2][t − (M_A − M_C)^2]/(4t),  q_t^2 = [t − (M_B + M_D)^2][t − (M_B − M_D)^2]/(4t),    (8)

cos ϑ_t = [t(s − u) + (M_A^2 − M_C^2)(M_B^2 − M_D^2)]/(4 t p_t q_t).    (9)

Instead of the particle 4-momenta, usually the total 4-momentum and the following three linear combinations are used to characterize the kinematics of a two-body scattering process:

P = ½(p_A + p_C),  Q = ½(p_B + p_D),  Δ = p_C − p_A = p_B − p_D.    (10)

The scalar products of these three momenta can again be expressed in terms of the Mandelstam variables:

P^2 = ½(M_A^2 + M_C^2) − t/4,  Q^2 = ½(M_B^2 + M_D^2) − t/4,  P·Q = (s − u)/4,  Δ^2 = t.

In covariant field theory the scattering amplitude T for a general process is defined via the S-matrix,

⟨f | S | i⟩ = ⟨f | i⟩ − i(2π)^4 δ^4(P_f − P_i) N_i N_f ⟨f | T | i⟩,    (11)

where P_i (P_f) denotes the total 4-momentum in the initial (final) state. The factors N_x (x = i, f) collect the normalization factors of the external particles: the j-th particle of channel x has mass M_j, momentum p_j, energy E_j = (M_j^2 + p_j^2)^{1/2} and spin s_j, and enters N_x with an exponent b_j that distinguishes fermions from bosons. (Spin and isospin quantum numbers are suppressed for the moment.) For a two-body scattering process A + B → C + D this definition reads

⟨CD | S | AB⟩ = ⟨CD | AB⟩ − i(2π)^4 δ^4(p_C + p_D − p_A − p_B) N_i N_f ⟨CD | T | AB⟩.

Particles with spin (and helicity λ_X) are described in the helicity basis according to the conventions of Jacob and Wick [20]. By separating off the helicity spinors u_X(p_X, λ_X) of the particles from the scattering amplitude one obtains the transition matrix M. If, for instance, all four particles of the process A + B → C + D are spin-1/2 baryons, the transition matrix M is a 16 × 16 matrix in spinor space and is defined by

⟨CD | T | AB⟩ = ū_C(p_C, λ_C) ū_D(p_D, λ_D) M u_A(p_A, λ_A) u_B(p_B, λ_B).

Now the transition matrix M can be constructed as a linear combination of the so-called kinematic covariants O_i, which, like M, are operators in spinor space:

M = Σ_i c_i(s, t) O_i.    (18)

The O_i are built up from the Dirac γ-matrices and the momenta P and Q (cf. Eq. 10) in such a way that their matrix elements are Lorentz-invariant quantities. The invariant amplitudes c_i(s, t) are Lorentz scalars. The number of independent kinematic covariants for a given scattering process corresponds to the number of independent helicity amplitudes and is determined by the dimension of the spinor space and by invariance principles of the underlying interaction. For the scattering of four spin-1/2 baryons (A + B → C + D) there are in general eight independent kinematic covariants.
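As a quick numerical cross-check of the kinematic relations above, the following short Python sketch evaluates the c.m. relative momentum and the scattering angle for elastic NN scattering; the energy offset is merely illustrative:

import math

# Masses in MeV (elastic nucleon-nucleon scattering as an example)
M_A = M_B = M_C = M_D = 938.9

def p_rel(s, M1, M2):
    """Modulus of the c.m. relative momentum of a two-particle state (Eq. 5 type)."""
    return math.sqrt((s - (M1 + M2)**2) * (s - (M1 - M2)**2)) / (2.0 * math.sqrt(s))

def cos_theta_s(s, t, masses):
    M_A, M_B, M_C, M_D = masses
    u = sum(m**2 for m in masses) - s - t      # s + t + u = sum of squared masses
    p = p_rel(s, M_A, M_B)
    q = p_rel(s, M_C, M_D)
    return (s * (t - u) + (M_A**2 - M_B**2) * (M_C**2 - M_D**2)) / (4.0 * s * p * q)

s = (2 * M_A + 50.0)**2                        # 50 MeV above threshold
t = -2 * p_rel(s, M_A, M_B)**2                 # equal masses: t = -2 p^2 (1 - cos θ)
print(cos_theta_s(s, t, (M_A, M_B, M_C, M_D)))  # -> 0.0, i.e. 90-degree scattering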
For elastic scattering (A + B → A + B) their number is reduced to six due to time-reversal invariance. For the elastic scattering of four identical particles (A + A → A + A) the number of independent kinematic covariants is further reduced to five due to the symmetry under particle exchange. The set of kinematic covariants is not unique. However, for what follows it is essential that the O_i are chosen in such a way that the invariant amplitudes c_i(s, t) do not contain any kinematic singularities, but only the 'physical' singularities demanded by unitarity. In the case of four spin-1/2 particles this condition is fulfilled by the set of eight covariants given in Ref. [21]. This set is based on the so-called Fermi covariants

S = 1 ⊗ 1,  V = γ^µ ⊗ γ_µ,  T = σ^{µν} ⊗ σ_{µν},  A = γ_5 γ^µ ⊗ γ_5 γ_µ,  P = γ_5 ⊗ γ_5,

supplemented by covariants containing the momenta P and Q; the matrix elements of the O_i have to be evaluated between the helicity spinors of the external baryons.

Dispersion relations for baryon-baryon amplitudes of ππ and KK exchange

For a general two-body scattering process the physical regions of the Mandelstam variables for the s-, t-, and u-channel reactions are non-overlapping. Therefore the transition matrices in the three channels can be interpreted as independent branches of one operator M defined in the various kinematic regions. In the t-channel (A + C → B + D), for instance, the scattering amplitude is related to the invariant amplitudes by

⟨BD | T | AC⟩ = Σ_i c_i(s, t) ⟨BD | O_i | AC⟩.

By introducing the concept of analyticity the invariant amplitudes in the three channels become closely related: the c_i(s, t) are supposed to be analytic functions (except for the physical singularities) in the whole complex st-plane. Therefore, if all these physical singularities are known, the c_i(s, t) can be deduced at any point of the complex Mandelstam plane by the formulation of dispersion integrals. The singularity structure of the invariant amplitudes is completely determined by the unitarity of the S-matrix. In terms of the scattering amplitude, unitarity of the S-matrix is expressed as

i (⟨f | T† | i⟩ − ⟨f | T | i⟩) = (2π)^4 Σ_n δ^4(P_n − P_i) ⟨f | T† | n⟩ ⟨n | T | i⟩,    (21)

where P_n and P_f = P_i denote the total 4-momenta. The summation in Eq. 21 is to be understood over all physical states n, i.e. states which are energetically accessible for a system with total energy P_i^0.

The singularities of the baryon-baryon (A + B → C + D) amplitudes generated by (correlated) ππ and KK exchange are most easily derived using the unitarity relation for the t-channel reaction A + C → D + B. For this, one restricts the summation in Eq. 21 to physical ππ and KK states; i.e., contributions of single-meson poles or of heavier two-meson and multi-meson (e.g. 3π, 4π) channels are disregarded. In the c.m. system (P_i = P_f = (√t, 0)) the summation over the states n then reduces to an integration over the directions of the relative momentum k of the two-meson state, with its modulus fixed at the on-shell value k_µ = √(t/4 − m_µ^2) and ω_µ(k) = √(m_µ^2 + k^2) denoting the meson energies. A symmetry factor N_{µµ} is introduced in order to obtain the correct phase space in the case of identical particles (Eq. 23).

Now, the kinematic covariants O_i in Eq. 18 have been chosen in such a way [21] that the left-hand side of Eq. 21 yields the imaginary parts of the invariant amplitudes, i.e. Im M = Σ_i Im[c_i(s, t)] O_i. Hence, in the c.m. system the unitarity relation for the helicity amplitudes of the process A + C → D + B expresses Im⟨DB | T(t) | AC⟩ as an angular integral over the products ⟨µµ, k | T(t) | DB⟩* ⟨µµ, k | T(t) | AC⟩, weighted with the two-body phase-space factor N_{µµ} k_µ/√t (Eq. 25).

From the outset, the unitarity relations 21, 25 are defined only above the kinematic threshold of the process A + C → D + B, that is for t ≥ t_0 ≡ max{(M_A + M_C)^2, (M_B + M_D)^2}. However, they can be continued analytically into the pseudophysical region (4m_π^2 ≤ t ≤ t_0), as discussed in Ref. [22]. Below the kinematic threshold the baryon-antibaryon momenta become imaginary. According to Ref.
[22] the unitarity relation for the S-matrix, which can be written symbolically as [S(p)]* S(p) = 1, has to be continued to complex momenta as [S(p*)]* S(p) = 1. As explained in Ref. [23], for imaginary momenta this is equivalent to evaluating the expression [S(p)]* S(p) = 1 with real dummy variables for the momenta and replacing these dummy variables at the very end (after all complex conjugations have been performed) by the imaginary momenta.

The right-hand side of Eq. 25 obviously vanishes below the ππ threshold at t = 4m_π^2. Since for the processes considered here the left-hand cuts, which are due to unitarity constraints on the u-channel process, do not extend up to t = 4m_π^2 (for fixed s lying inside the physical s-channel region), the invariant amplitudes c_i(s, t) are real-analytic functions of t, i.e. c_i(s, t*) = c_i(s, t)*. According to Eq. 25 the discontinuity across the real t-axis is then given by the imaginary part, and c_i(s, t) has a branch cut along the real t-axis extending from t = 4m_π^2 to t = +∞. Corresponding statements hold for the KK branch cut.

For c.m. energies below 1 GeV the ππ interaction is dominated by the JI = 00, 11 partial waves [13]. At the low transferred momenta relevant in low-energy baryon-baryon scattering, correlations between two exchanged pions (kaons) are therefore only considerable when the exchanged ππ (KK) system is in a state with relative angular momentum J = 0 and isospin I = 0 ('σ channel') or J = 1 and I = 1 ('ρ channel') [12]. In our approach only the correlated part of two-pion and two-kaon exchange is evaluated by dispersion-theoretic means. The uncorrelated part has to be calculated directly in the s-channel, in order to include all t-channel partial waves and to guarantee the correct energy dependence of this contribution. In the ΛN channel, for instance, the iterative two-pion exchange with an NΣ intermediate state becomes complex above the NΣ threshold. This behavior cannot be reproduced with a single-variable dispersion relation in the t-channel, which will be applied to the correlated contribution (see below). Hence, the following dispersion-theoretic considerations can be limited to the σ and ρ channels of ππ (KK) exchange.

For this, the BB′ → µµ amplitudes on the right-hand side of the unitarity relation 25 are decomposed into partial waves [24]. Choosing the coordinate system such that p points along the ẑ-axis and q lies in the xz-plane, the partial-wave decomposition reads

⟨µµ, q | T(t) | BB′, λ_B λ_B′⟩ = Σ_J (2J + 1)/(4π) d^J_{λ,0}(ϑ_t) ⟨µµ | T^J(t) | BB′, λ_B λ_B′⟩,  λ = λ_B − λ_B′,    (27)

where the on-shell momenta p = p(t) and q = q(t) (see Eq. 8) are suppressed as arguments of the partial-wave decomposed T-matrix elements. Note that the right-hand side depends on the Mandelstam variable s only via the angle ϑ_t = ϑ_t(s, t) (see Eq. 9) between p and q. Now, by restricting the sum over J in Eq. 27 to J = 0 (J = 1), the contribution c_i^σ (c_i^ρ) of the σ (ρ) channel to the invariant amplitudes is obtained. Inserting this decomposition into the unitarity relation yields Eq. 28, a system of linear equations for the discontinuities Im[c_i^{(J)}(s, t)]. Its solution provides the discontinuities as linear combinations of the following products of BB′ → µµ helicity amplitudes:

F^J_{λ_D λ_B, λ_A λ_C}(t) = Σ_{µµ = ππ, KK} N_{µµ} (k_µ/√t) ⟨µµ | T^J(t) | DB, λ_D λ_B⟩* ⟨µµ | T^J(t) | AC, λ_A λ_C⟩,    (30)

up to an overall numerical factor. From the symmetry properties of the BB′ → µµ helicity amplitudes, which are due to the parity invariance of the underlying strong interaction (see, e.g., Ref. [20]) and relate helicity-flipped amplitudes up to a phase, it follows that only four linearly independent F^J_{λ_D λ_B, λ_A λ_C} exist for J > 0. For J = 0 there is only one independent BB′ → µµ helicity amplitude, ⟨µµ | V^J(k, q) | BB′, ++⟩, and consequently only one independent F^J_{λ_D λ_B, λ_A λ_C}.
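To make the truncation to the σ (J = 0) and ρ (J = 1) channels concrete, here is a minimal Python sketch of the partial-wave sum of Eq. 27, with the low-J Wigner d-functions coded explicitly; the partial-wave amplitudes T^J are invented numbers, not model output:

import math

def d_J_lambda0(J, lam, theta):
    """Wigner d-functions d^J_{lambda,0} for J = 0, 1 (zero when |lambda| > J)."""
    c, s = math.cos(theta), math.sin(theta)
    table = {(0, 0): 1.0, (1, 0): c, (1, 1): -s / math.sqrt(2.0), (1, -1): s / math.sqrt(2.0)}
    return table.get((J, lam), 0.0)

# Invented J = 0 and J = 1 partial-wave amplitudes (sigma and rho channel)
T = {0: 1.3 + 0.4j, 1: -0.7 + 0.2j}

def amplitude(theta, lam):
    """Truncated Jacob-Wick sum over J = 0, 1, as used for the sigma/rho channels."""
    return sum((2 * J + 1) / (4 * math.pi) * d_J_lambda0(J, lam, theta) * T[J] for J in T)

print(amplitude(math.pi / 3, 0))   # helicity difference lambda = 0: both channels enter
print(amplitude(math.pi / 3, 1))   # lambda = 1: only the rho channel contributes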
Using the explicit representation of the helicity spinors given in Appendix A, the evaluation of the matrix elements of the kinematic covariants O_i in Eq. 28 is straightforward.

In the case of unequal baryon masses M_B ≠ M_B′ the analytic structure of the BB′ → µµ helicity amplitudes is much more involved than in the equal-mass case. This complicates an analytic continuation of the amplitudes into the pseudophysical region. The ΛΣ channel is the only channel where this problem arises. In order to facilitate an easy handling of the expressions we treat this channel approximately, by setting the masses of the Λ and of the Σ equal to the average mass (M_Λ + M_Σ)/2. This approximation is justified for two reasons. First, the mass difference between the Λ and Σ hyperons is small (77 MeV) on the baryonic scale. Second, since we evaluate the uncorrelated part directly in the s-channel, the important energy dependence of this contribution is not affected by the approximation; the correlated contribution, on the other hand, is known to have a rather smooth energy dependence, which is not expected to change drastically under this approximation. In the following we therefore restrict ourselves to the case M_B = M_B′.

Solving the system of linear equations 28 yields that in the σ channel (J = 0) all discontinuities Im[c_i^σ(s, t)] vanish except for the scalar component Im[c_S^σ(s, t)], which is proportional to the single independent product F^{J=0} (Eq. 34). In contrast, in the ρ channel only the axial-vector component vanishes, while the remaining discontinuities are given in terms of the F^{J=1} (Eq. 35). The discontinuities of the pseudoscalar and the tensor component are related by a linear constraint (Eq. 37); consequently only four of the five nonvanishing discontinuities are linearly independent, in line with the number of independent F^{J=1}_{λ_D λ_B, λ_A λ_C}.

Except for poles (corresponding to single-particle exchange) and cuts, the invariant amplitudes c_i(s, t) are analytic. Since we want to restrict the dispersion-theoretic evaluation to the contribution of (correlated) ππ and KK exchange to the baryon-baryon amplitudes, we take into account only those singularities which are generated by ππ and KK intermediate states, namely the discontinuities of Eqs. 34-35 due to the ππ and KK unitarity cut ('right-hand cut'). The left-hand cuts, which are due to unitarity constraints on the u-channel reaction, can be neglected in the baryon-baryon channels considered here, since they start at large negative t-values (from which they extend to −∞) and are therefore far away from the physical region relevant for low-energy s-channel processes. For identical baryons (e.g. NN → NN) this is only true if the dispersion relations are applied to the direct baryon-baryon amplitude alone and the antisymmetrization of the amplitudes is taken into account not from the very beginning but only for the final s-channel amplitudes. Otherwise, crossing of the exchange diagram would result in a u-channel cut starting at u = 4m_π^2, which could not be neglected in the dispersion integrals [16].

In this work the BB′ → µµ amplitudes which enter in Eq. 28 are derived from a microscopic model based on the hadron-exchange picture (see Sect. 2.4). Of course, this model has a limited range of validity: for energies far beyond t′_max ≈ 100 m_π^2 it cannot provide reliable results. The dispersion integral for the invariant amplitudes, extending in principle along the whole ππ right-hand cut, therefore has to be limited by an upper bound t′_max. In addition, left-hand cuts and unphysical cuts, introduced for instance by the form-factor prescription of the microscopic model for the BB′ → µµ amplitudes, are neglected.
Because of these approximations to the exact expressions, which are necessary in order to obtain a solution of the physical problem, the formulation of either a dispersion relation or a subtracted dispersion relation might lead to different results for the amplitudes, although both should be mathematically equivalent. However, the ambiguity of which dispersion relation to choose can be avoided by demanding that the analytic structure of the resulting amplitudes agrees with that of sharp σ and ρ exchange, to which we now turn.

The transition amplitude M_σ for the exchange of a scalar σ meson with mass m_σ between two J^P = 1/2^+ baryons A and B follows from the interaction Lagrangians

L_{AAσ} = g_{AAσ} ψ̄_A ψ_A φ_σ,  L_{BBσ} = g_{BBσ} ψ̄_B ψ_B φ_σ

(see Appendix A for the hadronic field operators). The result is

M_σ = g_{AAσ} g_{BBσ} F_σ^2(t)/(t − m_σ^2) (1 ⊗ 1),    (39)

where a form factor F_σ(t) has to be applied at each vertex, since the exchanged σ meson is far away from its mass shell. This form factor is parametrized in the conventional monopole form

F_σ(t) = (Λ_σ^2 − m_σ^2)/(Λ_σ^2 − t),    (40)

with a cutoff mass Λ_σ assumed to be the same at both vertices.

The amplitude for ρ exchange in the transition A + B → C + D is constructed from the vector and tensor couplings of the ρ meson to the baryons X, Y, with (XY) = (AC), (BD). According to the conventional Feynman rules, using Eq. 11 and the generalized Gordon decomposition [25] for the spinors of the two Dirac particles X and Y, the transition amplitude M_ρ can be expressed (Eq. 43) in terms of the Fermi covariants and the quantity P_2, one of the so-called perturbative covariants introduced in Ref. [15], built from the 4-momenta P and Q defined in Eq. 10. Since the O_i of Eq. 18 form a complete set of kinematic covariants, P_2 can be expanded in terms of them for the given baryon masses. After replacing P_2 in Eq. 43, the final result for M_ρ (Eq. 47) is obtained, with M_tot = M + M′ and the form factor F_ρ(t) parametrized according to Eq. 40.

By comparing the discontinuities in the σ and ρ channels, Eqs. 34, 35, with the transition amplitudes for sharp σ and ρ exchange, it follows that c_S^σ, c_V^ρ, and c_6^ρ obey an unsubtracted dispersion relation,

c_i(s, t) = (1/π) ∫_{4m_π^2}^{∞} dt′ Im[c_i(s, t′)]/(t′ − t).    (48)

The tensor component of sharp ρ exchange is proportional to t (cf. Eq. 47). In order to generate this factor t also for the tensor component of correlated ππ and KK exchange, a subtracted dispersion relation (subtraction point t_0 = 0 and subtraction constant c_T^ρ(s, t_0) = 0) is assumed for the invariant amplitude c_T^ρ(s, t):

c_T^ρ(s, t) = (t/π) ∫_{4m_π^2}^{∞} dt′ Im[c_T^ρ(s, t′)]/(t′ (t′ − t)).    (49)

Similarly, the u − s dependence of the (pseudo-)scalar components of sharp ρ exchange (cf. Eq. 47) can be reproduced by assuming a subtracted dispersion relation for c_S^ρ(s, t) and c_P^ρ(s, t) with t_0(s) = Σ − 2s (Σ being the sum of the squared external masses) and c_{S,P}^ρ(s, t_0(s)) = 0:

c_{S,P}^ρ(s, t) = ((t − t_0(s))/π) ∫_{4m_π^2}^{∞} dt′ Im[c_{S,P}^ρ(s, t′)]/((t′ − t)(t′ − t_0(s))).    (50)

3 Baryon-baryon interaction arising from correlated ππ and KK exchange

The invariant amplitudes constructed in the preceding section using dispersion theory still contain the uncorrelated contributions of ππ and KK exchange. Investigating the problem of baryon-baryon scattering requires the knowledge of the on-shell scattering amplitude T, which is usually obtained as the solution of a scattering equation T = V + V G T that iterates the interaction kernel V. In general, V contains, besides other terms, one-pion and one-kaon exchange contributions as well as the contribution V_2π from two-pion and two-kaon exchange. But iterating π and K exchange in the second-order term V G V also generates two-pion and two-kaon exchange contributions to the scattering amplitude. In order to avoid double counting, these 'iterative' contributions therefore have to be left out of the dispersion-theoretically calculated V_2π. As stated above, we even go beyond this and subtract all uncorrelated contributions from V_2π.
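For orientation, the sharp-mass σ-exchange amplitude of Eqs. 39-40 above can be evaluated in a few lines of Python; the coupling and cutoff below are invented for illustration, not fitted values:

# Sharp-mass sigma exchange between two baryons: strength g^2 F^2(t)/(t - m^2)
m_sigma = 0.550   # GeV
Lam     = 2.0     # GeV, illustrative cutoff mass
g2      = 8.0     # illustrative squared coupling constant

def F_monopole(t):            # monopole form factor of Eq. 40
    return (Lam**2 - m_sigma**2) / (Lam**2 - t)

def V_sigma(t):               # scalar part of the exchange amplitude, Eq. 39
    return g2 * F_monopole(t)**2 / (t - m_sigma**2)

for t in (0.0, -0.1, -0.5):   # spacelike momentum transfers in GeV^2
    print(t, V_sigma(t))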
In this way the dispersion-theoretic calculation can be restricted to the σ and ρ channels (since only there do significant correlations occur in the kinematic region considered), whereas the uncorrelated contributions are evaluated in the s-channel and therefore contain all t-channel partial waves. In order to eliminate the uncorrelated contributions from V_2π we determine the discontinuities Im[c_{i,Born}^{(J)}(s, t)] generated from the BB′ → µµ Born amplitudes V^J (i.e., no ππ−KK correlations included), using as before the unitarity relation 28 (with T^J replaced by V^J), and subtract them from the full discontinuities Im[c_i^{(J)}(s, t)]. In terms of the resulting spectral functions

ρ_i^{(J)}(s, t′) = Im[c_i^{(J)}(s, t′)] − Im[c_{i,Born}^{(J)}(s, t′)],    (51)

the (unsubtracted) dispersion relation 48 has to be modified to

c̃_i^{(J)}(s, t) = (1/π) ∫_{4m_π^2}^{t′_max} dt′ ρ_i^{(J)}(s, t′)/(t′ − t).    (52)

Corresponding expressions hold for the subtracted dispersion relations 49 and 50. Now the baryon-baryon helicity amplitudes arising from correlated ππ and KK exchange can be evaluated according to Eqs. 16, 17. The partial-wave decomposition of these matrix elements then proceeds as usual (see, e.g., Ref. [3]).

Of course, when the baryon-baryon interaction kernel is iterated in a scattering equation, V_{2π,corr} has to be known off-shell. However, dispersion theory applies only to on-shell amplitudes and does not provide any information on the off-shell behavior of the amplitudes. Therefore an arbitrary prescription for the off-shell extrapolation of V_{2π,corr} has to be defined [12], which is certainly a drawback of the dispersion-theoretic derivation of this potential. Nevertheless, the characteristic features of correlated ππ and KK exchange, like the strength of V_{2π,corr} in the various baryon-baryon channels, can already be discussed by means of the unique on-shell amplitudes. We therefore postpone the discussion of how to extrapolate V_{2π,corr} off-shell to a subsequent work.

The dispersion-theoretic amplitudes for correlated ππ and KK exchange (Eqs. 34, 35) have been constructed in such a way that their operator structure agrees as far as possible with that of sharp σ and ρ exchange, Eqs. 39, 47. Therefore our results for the correlated exchange can be parametrized in terms of σ and ρ exchange; i.e., the products of coupling constants for σ and ρ exchange are replaced by effective coupling strengths G^{(J)}(s, t), which contain the full s- and t-dependence of the dispersion-theoretic results. In the σ channel this gives, for the elastic baryon-baryon process A + B → A + B,

c̃_S^σ(s, t) = G_σ^{AB→AB}(t) F_σ^2(t)/(t − m_σ^2).    (54)

Note that sharp σ exchange (Eq. 39) would correspond to the spectral function (except for form factors)

ρ_S^σ(t′) = −π g_{AAσ} g_{BBσ} δ(t′ − m_σ^2).

This suggests interpreting the spectral function as a function that measures the strength of an exchange process as a function of the invariant mass of the exchanged system (here: ππ, KK). By comparing the coefficients of the kinematic covariants O_i in Eqs. 47 and 53 we obtain the corresponding effective coupling strengths in the ρ channel (Eq. 56). Obviously the effective coupling strengths do not depend on s but only on t. This is only possible because of the choice of the subtracted dispersion relation 50 for c̃_{S,P}^ρ(s, t), since the integrand of the dispersion integral becomes independent of s (Im[c_{S,P}^ρ(s, t′)] ∝ cos ϑ_t(s, t′) ∝ u′ − s). Therefore c̃_{S,P}^ρ(s, t) depends on s only through the factor (u − s), which cancels exactly when the effective coupling strengths are calculated. It should be emphasized that the parametrization of V_{2π,corr} discussed here does (so far) not contain any approximations.
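The interplay between spectral function and dispersion integral can be checked numerically: a Python sketch, assuming the sharp-exchange spectral function ρ(t′) = −π g² δ(t′ − m_σ²) quoted above (with the δ-function smeared into a narrow Lorentzian), reproduces the pole amplitude g²/(t − m_σ²):

import numpy as np

# Check that the unsubtracted dispersion relation (Eq. 48 type) with the spectral
# function of sharp sigma exchange reproduces the pole amplitude. Illustrative numbers.
g2, m2 = 8.0, 0.550**2                       # coupling and sigma mass^2 (GeV^2)
eps = 1e-4                                   # smearing width of the delta function

tp = np.linspace(0.05, 2.0, 400001)          # t' grid along the cut (GeV^2)
rho = -g2 * eps / ((tp - m2)**2 + eps**2)    # -pi g^2 delta(t'-m2), Lorentzian-smeared

t = -0.3                                     # spacelike point in the s-channel
dt = tp[1] - tp[0]
c_disp = (rho / (tp - t)).sum() * dt / np.pi
print(c_disp, g2 / (t - m2))                 # ~ -13.28 in both cases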
Isospin-Crossing

Up to now, in favor of a clear representation of the dispersion-theoretic calculation, we have suppressed isospin degrees of freedom. The isospin structure of the BB′ → µµ amplitudes will be discussed in the next sections, when the microscopic model for these amplitudes is presented. However, when 'crossing' is applied to the baryon-baryon (s-channel) and baryon-antibaryon (t-channel) amplitudes, it has to be kept in mind that the total isospin in the various channels is constructed from different combinations of the particle isospins: the total isospin I_s of the s-channel process A + B → C + D is composed out of [I_A ⊗ I_B]_{I_s} or [I_C ⊗ I_D]_{I_s}, while that of the t-channel process A + C → B + D is composed out of [I_A ⊗ I_C]_{I_t} or [I_D ⊗ I_B]_{I_t}. Consequently, besides the analytic continuation of the invariant amplitudes in s and t, the recoupling of the particle isospins has to be taken into account when crossing amplitudes. Therefore the isospin amplitudes T_s^{AB→CD}(I_s) and T_t^{AC→DB}(I_t) of the s- and t-channel processes, being independent of the isospin projections m_s and m_t, are linearly related:

T_s^{AB→CD}(I_s) = Σ_{I_t} X_{AB,CD}(I_s, I_t) T_t^{AC→DB}(I_t),

where X_{AB,CD}(I_s, I_t) is the so-called isospin-crossing matrix. Note that our isospin-crossing matrix differs from the X̃_{AB,CD}(I_s, I_t) introduced in Refs. [26,27], since their t-channel process (D + B → C + A) differs from the one in Sect. 2.1. As shown in Ref. [26], the isospin-crossing matrix X̃_{AB,CD}(I_s, I_t) can be expressed in terms of Clebsch-Gordan coefficients, with m_s (|m_s| ≤ I_s) being arbitrary.

For a particle A with isospin I_A and isospin projection m_A the particle state |A⟩ and the isospin state |I m_A⟩ might differ in sign. The isospin state of the antiparticle is generated by applying the G-parity operator G to |I m_A⟩ [26]. With the phase convention used here for the SU(3) field operators (e.g. when calculating the isospin factors of the BB′ → µµ amplitudes in the next section), the phase η_A, which is independent of m_A, is fixed by the hypercharge Y of particle A [27]. Note that this phase convention differs from the one used in Ref. [26]. For the baryon-baryon processes considered in this work the isospin-crossing matrices are tabulated in Tab. 1.

By the partial-wave decomposition the amplitude of correlated ππ and KK exchange is separated into the contributions of the σ- (I_t = 0) and ρ-channel (I_t = 1). Suppressing the spin-momentum dependence, the isospin amplitude can be written as

V_2π(I_s) = X(I_s, 0) V_2π^{(I_t=0)} + X(I_s, 1) V_2π^{(I_t=1)}.

The column X(I_s, 0) (X(I_s, 1)) of the isospin-crossing matrix agrees, except for a constant factor F_σ (F_ρ), with the isospin factors for t-channel exchange of a σ (ρ) meson in the corresponding s-channel process. Conventionally these constant factors F_σ and F_ρ, which are also tabulated in Tab. 1, are extracted from the isospin-crossing matrix and put into the spectral functions 52, so that the isospin factors of the s-channel potential V_2π agree with the isospin factors of σ and ρ exchange.

A microscopic model for the BB′ → ππ, KK transition amplitudes

In the preceding sections we have outlined the dispersion-theoretic calculation of correlated ππ and KK exchange in the baryon-baryon interaction, starting from amplitudes for the transition of a baryon-antibaryon (BB′) state to two pions (ππ) or a kaon and an antikaon (KK). These amplitudes have to be known in the so-called pseudophysical region, i.e. for energies below the BB′ threshold. However, in the case of the process NN → ππ, these amplitudes can be derived from empirical data by analytic continuation of πN and ππ scattering amplitudes, which are extracted from scattering data, into the pseudophysical region [18,28,29,30].
Corresponding analyses for the transitions YY′ → ππ, KK are out of reach, since the required empirical information (e.g. πΛ scattering data) does not exist. An evaluation of correlated ππ and KK exchange in the YN or YY interaction therefore necessitates the construction of a microscopic model for the BB′ → ππ, KK amplitudes. This model can be tested against the quasiempirical information for the NN → ππ amplitudes and then has to be extrapolated to the other channels of interest. In addition, only the use of a microscopic model for the BB′ → ππ, KK amplitudes allows a consistent treatment of medium modifications of the baryon-baryon interaction.

The microscopic model presented in the following is a generalization of the hadron-exchange model for the NN → ππ transition amplitudes of Ref. [17], where it was applied to the analysis of correlated two-pion exchange in the πN interaction. A main feature of the model presented here is the completely consistent treatment of its two components, namely the BB′ → ππ, KK Born amplitudes and the ππ−KK correlations. Both components are derived in field theory from an ansatz for the hadronic Lagrangians. The amplitudes for the processes BB′ → ππ, KK are obtained from a scattering equation which can be written in operator form as

T = V + V G T,    (63)

where the scattering amplitude T, the Born amplitude V, and the Green's operator G are operators in the channel space of BB′, ππ, and KK. For large t′, contributions of the spectral functions of correlated ππ and KK exchange to the dispersion integrals 48-50 are suppressed by the factor 1/(t′ − t), because in the physical s-channel t ≤ 0. Since the unitarity cuts of the BB′ states start far above the branch points of the ππ and KK cuts, the contribution of the BB′ Green's function G_{BB′} to the BB′ → ππ, KK scattering amplitudes in Eq. 63 and to the spectral functions can be neglected. For the same reason the coupling to other mesonic channels, like the ρρ channel, can be dispensed with. Of course, by these approximations the range of validity of the microscopic model for the BB′ → µµ (µµ = ππ, KK) amplitudes is limited to low t′ values. Therefore, instead of integrating along the whole ππ unitarity cut up to infinity, the upper bound of the dispersion integrals, t′_max, is set to a value somewhere below the ρρ threshold at t′ ≈ 120 m_π^2, so that convergence of the integrals is achieved.

Taking into account only ππ and KK intermediate states, i.e. neglecting G_{BB′}, we obtain from the two components of Eq. 63 for the BB′ → µµ amplitudes

T_{BB′→µµ} = V_{BB′→µµ} + Σ_{µ′µ′ = ππ, KK} T_{µ′µ′→µµ} G_{µ′µ′} V_{BB′→µ′µ′}.    (64)

In Fig. 5 this scattering equation for the BB′ → ππ, KK amplitudes is represented symbolically. The correlations T_{ππ/KK→ππ/KK} are generated with a realistic model [17] of the ππ−KK interaction. This meson-exchange model is a modified version of the so-called Jülich ππ model [13]. The Born amplitudes of this model for the elastic ππ and KK channels as well as for the transition ππ → KK are shown in Fig. 6. Besides the t-channel (and, in the case of ππ → ππ, also u-channel) exchanges of the vector mesons ρ, ω, φ, K*, the s-channel exchanges (pole graphs) of the ρ, the scalar-isoscalar ǫ, and the isoscalar tensor meson f_2 are taken into account. The potentials derived from Fig. 6 are iterated in a coupled-channel calculation according to the prescription of Blankenbecler and Sugar [31]. The free parameters of the ππ−KK model of Ref. [17] were adjusted to the empirical ππ phase shifts and inelasticities; a good agreement with the empirical data was achieved (cf. Fig. 8).
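Structurally, Eq. 64 is a linear integral equation and can be solved by matrix inversion once the momentum integral is discretized. The following Python sketch illustrates this Nystrom-type procedure for a single channel, with an invented separable potential and a schematic two-meson propagator; it shows only the linear-algebra pattern, not the actual Jülich model:

import numpy as np

# Schematic solution of a T = V + T G V type equation on a momentum grid.
# All numbers are invented; the finite imaginary part stands in for i*eps.
n = 64
x, w = np.polynomial.legendre.leggauss(n)
k = 2.0 * (x + 1.0)            # map Gauss-Legendre nodes to (0, 4) GeV
w = 2.0 * w

def V(kp, kq):                 # illustrative attractive separable potential
    return -0.8 * np.exp(-kp**2) * np.exp(-kq**2)

m = 0.138                      # meson mass (GeV)
sqrt_t = 0.60                  # two-meson c.m. energy (GeV)
omega = np.sqrt(m**2 + k**2)
G = 1.0 / (omega * (sqrt_t**2 - 4.0 * omega**2 + 0.05j))   # schematic propagator

Vmat = V(k[:, None], k[None, :])
D = w * k**2 * G               # quadrature weights times Green's function
T = Vmat @ np.linalg.inv(np.eye(n) - D[:, None] * Vmat)    # T = V (1 - G V)^{-1}
print(T[0, 0])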
The scattering equation 64 is likewise solved using the ansatz of Blankenbecler and Sugar (BbS) [31] to reduce the 4-dimensional Bethe-Salpeter equation to a 3-dimensional equation, which simplifies the calculation while retaining unitarity. The total conserved 4-momentum P and the relative 4-momentum k′ of the intermediate µ′µ′ = ππ, KK state are expressed in terms of the particle 4-momenta k′_(1) and k′_(2) by

P = k′_(1) + k′_(2),  k′ = ½ (k′_(1) − k′_(2)).

Corresponding relations hold for the relative 4-momenta q of the initial BB′ and k of the final µµ = ππ, KK state. In the center-of-mass system we have P = (√t, 0) and, if the particles in the initial and final state are on their mass shell,

√t = 2 E_B(q) = 2 ω_µ(k),    (66)

with E_B(q) := √(q^2 + M_B^2). Now, according to the prescription of Blankenbecler and Sugar, the relativistic two-particle Green's function G_{µ′µ′} is replaced by a 3-dimensional Green's function g_{µ′µ′} ∝ δ(k′_0), which respects unitarity. Due to the δ-function δ(k′_0), which sets the two intermediate particles (being of equal mass) equally off-mass-shell (k′_0(1) = k′_0(2)), the integration over k′_0 can be carried out immediately, and the so-called Blankenbecler-Sugar equation for the helicity amplitudes is obtained (Eq. 67), where N_{µ′µ′} is defined as in Eq. 23 and T_{µ′µ′→µµ} ≡ M_{µ′µ′→µµ}. Because of the rotational invariance of the underlying interactions the BbS equation can be decomposed into partial waves [24]:

T^J_{BB′→µµ}(k, q; t) = V^J_{BB′→µµ}(k, q; t) + Σ_{µ′µ′} N_{µ′µ′} ∫_0^∞ dk′ k′^2 T^J_{µ′µ′→µµ}(k, k′; t) g_{µ′µ′}(k′; t) V^J_{BB′→µ′µ′}(k′, q; t).    (68)

Here q, k, k′ denote the moduli of the corresponding 3-momenta.

The extrapolation of the model for the NN → ππ amplitudes to the other particle channels is made under the assumption that the hadronic interactions are, except for the particle masses, SU(3)-flavor symmetric. That means that the coupling constants at the various hadronic vertices are related to each other by SU(3) relations. In this way all free parameters of the model for the BB′ → µµ amplitudes can be fixed by adjusting the NN → ππ amplitudes to the quasiempirical data [18,19] (see Sect. 3.1).

BB′ → ππ, KK Born amplitudes

In our model the BB′ → µµ transition potentials are built up from a ρ-pole diagram and all diagrams in which a baryon out of the J^P = 1/2^+ octet or the J^P = 3/2^+ decuplet is exchanged. Of course, only those diagrams are considered which respect the conservation of isospin and strangeness. As an example, Fig. 7 shows the Born amplitudes for the transition ΣΣ → ππ, KK. The particles occurring in this model are listed in Tab. 2, together with their masses and their basic quantum numbers. In order to start from maximal SU(3) symmetry our model differs slightly from the one presented in Ref. [17] for the NN → ππ amplitudes: here we include the exchange of the Y* ≡ Σ(1385) in the NN → KK transition potential, which was neglected in Ref. [17]. In addition, the form factors at the hadronic vertices are chosen identical within a given SU(3) multiplet.

Starting point for the derivation of the various Born amplitudes are the interaction Lagrangians of Eq. 69, which are characterized by the J^P quantum numbers of the hadrons involved. For simplicity we suppress here the isospin or SU(3) dependence of the Lagrangians. (For the conventions used for the field operators and the Dirac γ-matrices, see Appendix A.) The Lagrangian L_{DBp} includes an off-shell part, which is proportional to x_Δ. Like the parameter A, which occurs in the free Lagrangian [32] and then in the non-pole part of the propagator of a spin-3/2 particle (cf. Eq.
101), the parameter x_Δ characterizing the strength of the off-shell part of the DBp coupling is not determined from first principles. However, it is known [33] that field-theoretic amplitudes derived with this most general ansatz depend only on a certain combination Z of A and x_Δ. It follows that different pairs of (x_Δ, A) values which give the same value of Z describe the same interaction theory. Therefore, without restricting the general validity of our results, we can set A = −1 (i.e., omit the non-pole part of the spin-3/2 propagator) and select the interaction theory (characterized by Z) through x_Δ, which is finally adjusted to the quasiempirical data for NN → ππ.

In order to account for the extended structure of hadrons, the vertex functions resulting from the Lagrangians 69 are modified by phenomenological form factors. For the baryon-exchange processes these form factors are parametrized in the usual multipole form

F_X(p^2) = ((Λ_X^2 − M_X^2)/(Λ_X^2 − p^2))^{n_X},    (71)

where p denotes the 4-momentum and M_X the mass of the exchanged baryon X. The two parameters, the so-called cutoff mass Λ_X and the power n_X, are chosen uniformly for all BB′p vertices (Λ_8, n_8) and for all BDp vertices (Λ_10, n_10), in order to keep the number of parameters of our model as low as possible. The dependence of the form factor 71 on the power n_X is quite weak [17]; we choose n_8 = 1 and n_10 = 2. Finally, Λ_8 and Λ_10 are adjusted to the quasiempirical data for the NN → ππ amplitudes in the pseudophysical region. For the ρ-pole diagram we parametrize the form factor at the µµρ vertex in the same way as in the model of the ππ-KK interaction of Ref. [17]: it is a function of the relative 3-momentum k of the two pseudoscalar mesons, normalized at the squared on-shell momentum k_{µµ}^2(t) = t/4 − m_µ^2 of the µµ state. For the dispersion-theoretic calculation of correlated ππ and KK exchange the BB′ → µµ Born amplitudes are evaluated only for BB′ states on their mass shell. Therefore there is no need for a form factor at the BB′ρ vertex to ensure convergence of the scattering equation 68, and we disregard this form factor, again in order to keep the number of model parameters as small as possible.

Now, taking into account that the BB′ state is on mass-shell and that, due to the Blankenbecler-Sugar condition (Eq. 66), the energy component of the relative momentum of the µµ state always vanishes (k_0 = 0), the 4-momenta at the external legs of the BB′ → µµ Born diagrams read in the center-of-mass system:

p_B = (√t/2, q),  p_B′ = (√t/2, −q),  k_(1) = (√t/2, k),  k_(2) = (√t/2, −k).    (73)

According to the usual Feynman rules [34] (for the various propagators see also Appendix A), we obtain the spin-momentum parts of the BB′ → µµ Born amplitudes V_{BB′→µµ}(k, q; t) for
• the exchange of a baryon X with J^P = 1/2^+ and momentum p = q − k,
• the exchange of a baryon X with J^P = 3/2^+ and momentum p = q − k,
• the ρ-pole graph with bare mass m_ρ^(0).
These expressions can be further simplified by inserting the momenta given in Eq. 73 and contracting the γ-matrices. Finally, the corresponding BB′ → µµ helicity amplitudes are obtained by applying V_{BB′→µµ}(k, q; t) to the Dirac helicity spinors of the baryons (cf. Eqs. 87 and 94 in the Appendix). The final results for the helicity amplitudes are summarized in Appendix B. In our model the coupling constants at the various hadronic vertices are related to each other by SU(3) arguments; the SU(3) relations, together with the isospin factors of the various Born amplitudes, are given in Appendix C.
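A minimal Python sketch of the multipole form factor of Eq. 71; the cutoff masses below are illustrative only (the fitted values of Λ_8 and Λ_10 are those in Table 4):

# Multipole form factor at a baryon-exchange vertex, Eq. 71:
#   F_X(p^2) = ((Lambda_X^2 - M_X^2) / (Lambda_X^2 - p^2))^(n_X)
def F_multipole(p2, M_X, Lam_X, n_X):
    return ((Lam_X**2 - M_X**2) / (Lam_X**2 - p2))**n_X

M_N, M_Delta = 0.939, 1.232            # GeV
for p2 in (0.0, -0.5, -1.0):           # spacelike p^2 in GeV^2
    print(p2,
          F_multipole(p2, M_N, 2.0, 1),       # octet vertex: n_8 = 1
          F_multipole(p2, M_Delta, 2.2, 2))   # decuplet vertex: n_10 = 2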
As discussed in the previous chapters, for the dispersion-theoretic calculation of correlated ππ and KK exchange the BB′ → ππ, KK amplitudes have to be known in the pseudophysical region, i.e. for energies t′ below the BB′ threshold (4m_π^2 ≤ t′ < (M_B + M_B′)^2). Therefore, after the analytic expressions for these Born amplitudes have been derived in the physical region (√t′ > M_B + M_B′), they have to be continued analytically as functions of t′ (and s) into the pseudophysical region. For this, all energy-dependent quantities occurring in the expressions for the Born amplitudes (cf. Appendix B) have to be expressed as functions of t′ and s. If we adopt the approximation introduced in Sect. 2.2, namely that the masses of the baryon and of the antibaryon are equal (M_B = M_B′), the square of the relative momentum and the one-particle energies of the BB′ state are given in the center-of-mass system for physical values of t by

q^2(t) = t/4 − M_B^2,  E_B = √t / 2.

The analytic continuation of these relations to the pseudophysical region is obvious; there the relative momentum q becomes purely imaginary. Note that if we had allowed M_B and M_B′ to be different, the corresponding relations would look considerably more involved.

Baryon-exchange diagrams in which the mass M_X of the exchanged baryon is sufficiently smaller than the mass M_B = M_B′ of the external baryons (e.g. N exchange in ΛΛ → KK or Λ exchange in ΣΣ → ππ) do not satisfy a Mandelstam representation, as was already pointed out in Ref. [35] for the latter example. The nonvalidity of the Mandelstam representation becomes obvious when the corresponding Born amplitude in Eq. 74 is extrapolated to the pseudophysical region: for given t < 4(M_B^2 − M_X^2) the propagator of baryon X acquires a singularity at cos ϑ = 0 (q · k imaginary) and a certain off-shell momentum of the µµ state. Since this problem would hinder an evaluation of correlated ππ and KK exchange, and we do not at present see any proper solution, we eliminate it by the following approximation: in all baryon-exchange diagrams in which the mass of the exchanged baryon, M_X, is smaller than the mass of the external baryons, M_B, the mass M_X is increased by hand to M_B. Again, since the uncorrelated contributions of two-pion and two-kaon exchange (e.g. iterative two-pion exchange in the ΣN channel with a ΛN intermediate state) are evaluated explicitly in the s-channel, these contributions are not affected by our approximation and thus have the correct energy dependence. Approximations are made only in the correlated part, which has a much weaker energy dependence than the uncorrelated contributions.

Determination of free parameters

During the construction of the microscopic model for the BB′ → µµ amplitudes it proved essential to restrict the number of free parameters as much as possible; they can then be fixed by adjusting the model predictions to the quasiempirical NN → ππ amplitudes. Only in this way can one hope to obtain a reasonable description of the other baryon-antibaryon channels, for which no empirical data exist. As outlined in Sect. 2.4, the ππ−KK interaction model has been developed independently before, with all parameters adjusted to fit the existing ππ scattering phase shifts. Therefore the coupling constants and form factors at the ππρ^(0) and KKρ^(0) vertices occurring in the ρ^(0) pole terms are already determined.
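The analytic continuation of the relative momentum below the BB′ threshold amounts to taking the square root of a negative number, which complex arithmetic handles directly; a small Python illustration:

import cmath

M_B = 0.939                                  # GeV, nucleon-like baryon mass
def q_rel(t):
    """Relative momentum of the BB' state, continued below threshold."""
    return cmath.sqrt(t / 4.0 - M_B**2)      # purely imaginary for t < 4 M_B^2

for t in (4.0, 1.0, 0.2):                    # GeV^2; the last two are pseudophysical
    print(t, q_rel(t))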
Assuming that the bare ρ meson couples universally to the isospin current, we can also fix all vector couplings g^(0)_{BB′ρ}. The tensor couplings f^(0)_{BB′ρ} are related by SU(3) symmetry, with two parameters remaining, namely the coupling constant f^(0)_{NNρ} and the F/(F + D) ratio α_v^m. The coupling of the pseudoscalar mesons π and K to the octet baryons is, in the framework of SU(3) symmetry, likewise determined by two parameters, the F/(F + D) ratio α_p and the coupling constant g_{NNπ}. (For the latter we will take throughout g^2_{NNπ}/4π = 14.3.) There is an additional freedom, since SU(3) symmetry can be assumed either for the pseudoscalar coupling constants g_{BB′p} or for the pseudovector ones f_{BB′p}, which are related by

f_{BB′p} = g_{BB′p} m_π / (M_B + M_B′).

These two possibilities are not equivalent since, because of the SU(3) breaking of the baryon masses, both sets of coupling constants cannot be SU(3) symmetric at the same time. In this work we will assume SU(3) symmetry for the pseudoscalar couplings, but we will also check the influence of the alternative possibility. Finally, under the assumption of SU(3), the couplings f_{BDp} of π and K to the transition current between baryon octet and decuplet are determined by only one parameter, f_{NΔπ}. We will use f^2_{NΔπ}/4π = 0.36 in the following.

In addition we have form-factor parameters at the hadronic vertices. In order to keep their number small we assume that the cutoff masses Λ_{BXµ} are independent of the exchanged baryon X within one SU(3) multiplet. Consequently we have two additional parameters: Λ_8 if X is a member of the baryon octet and Λ_10 if it is in the decuplet. The powers n_8 = 1 and n_10 = 2 in the form-factor ansatz are sufficient to ensure convergence of the scattering equation. Finally, x_Δ (Eq. 69), characterizing the off-shell part of the (3/2)^+ ⊗ (1/2)^+ ⊗ 0^− coupling, is treated as a free parameter of our BB′ → µµ model. In order to reduce the number of parameters further we even assume SU(6) symmetry, which fixes α_p and α_v^m to be 0.4.

Thus we are left with four free parameters, f^(0)_{NNρ}, x_Δ, Λ_8, and Λ_10, which have been fixed by adjusting our theoretical predictions for the NN → ππ amplitudes to the quasiempirical results of Höhler and Pietarinen [18,19], given in the form of the Frazer-Fulco amplitudes f^J_±(t). Up to kinematic factors built from the on-shell momenta k and q of the pions and nucleons, these correspond to the partial-wave decomposed helicity amplitudes of Sect. 2.2. The factors F_J = −1/√6 (−1/2) for J = 0 (1) are due to the transition from the isospin amplitudes used in this work to the Frazer-Fulco amplitudes, which are defined in isospin space as coefficients of the independent isospin operators δ_{αβ} and ½[τ_α, τ_β]. Since ⟨ππ | T^{J=0}(t) | NN, +−⟩ vanishes identically, we have only one amplitude, f^0_+, in the σ channel, whereas in the ρ channel we have both f^1_+ and f^1_−.

Fig. 9 shows the predictions of our microscopic model for f^0_+, f^1_+, and f^1_−, in comparison to the quasiempirical results of Ref. [18] in the pseudophysical region t ≥ 4m_π^2; Table 4 contains the chosen parameter values. (Note that the present model differs somewhat from our former model [17], e.g. by the inclusion of Y* exchange; therefore the values differ slightly from those given in Ref. [17].) With only four parameters we obtain a very satisfactory reproduction of the quasiempirical data, especially in the ρ channel.
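The pseudoscalar-pseudovector conversion quoted above can be checked numerically for the NNπ vertex; with g²_{NNπ}/4π = 14.3 one recovers the familiar pseudovector value f²_{NNπ}/4π ≈ 0.08 (a one-line arithmetic sketch):

# f = g * m_pi / (2 M_N): equal-mass case of the relation quoted above
m_pi, M_N = 138.0, 938.9          # MeV
g2_over_4pi = 14.3                # value adopted in the text

f2_over_4pi = g2_over_4pi * (m_pi / (2 * M_N))**2
print(f2_over_4pi)                # ~0.077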
Some discrepancies occur in the σ channel; these, however, have only a small influence on the final results for the correlated ππ exchange in the s-channel reactions (NN, πN), as also discussed below. Furthermore, it should be kept in mind that in this channel the quasiempirical information is plagued by considerable uncertainties.

The BB′ → µµ transition potentials

Having fixed all parameters of our microscopic model for the BB′ → µµ amplitudes, we first look at the transition potentials (Born amplitudes) in the various baryon-antibaryon channels. Figs. 10 and 11 show the contributions of the various baryon-exchange processes to the NN → ππ, KK and ΣΣ → ππ, KK Born amplitudes above the µµ thresholds, i.e. for t ≥ 4m_µ^2. Contributions of the ρ^(0) pole terms are not shown, since they possess a singularity at the bare rho mass m_ρ^(0). This pole is regularized only after iteration and coupling to the full ππ amplitude, and then leads to the resonance structure at the physical rho mass m_ρ. The notation of the (Born) amplitudes, V^J_±(t), follows that of the Frazer-Fulco amplitudes. The partial-wave decomposed Born amplitudes are either even or odd functions of the baryonic relative momenta (cf. Appendix B). Since the latter are imaginary in the pseudophysical region, the Born amplitudes that are odd functions (e.g. V^0_+) become likewise imaginary.

Apart from coupling constants and isospin factors, the strengths of the various contributions are strongly determined by mass ratios. Exchanged baryons with the same mass M_X as the outer baryon-antibaryon pairs (according to our approximation introduced in Sect. 2.4.1 this case also includes those exchanged baryons whose physical mass is lower than that of the outer baryons) produce, in the ππ amplitudes V^0_+ and V^1_+, a typical structure at the ππ threshold. The strong rise of the amplitudes, which acquire a finite value at t = 4m_π^2, is a direct signal of the so-called left-hand cut, which is generated by the singularity of the corresponding u-channel pole graph. This cut starts just below the ππ threshold, at t_0 = 4m_π^2 − m_π^4/M_X^2, and extends along the real axis to −∞. Obviously, in the σ channel the various pieces interfere constructively, whereas in the ρ channel destructive interferences also occur. The contributions generated by a spin-3/2 exchanged baryon have opposite signs in V^1_+ and V^1_−, whereas both amplitudes are almost equal when a spin-1/2 baryon is exchanged.

The NN → ππ Born amplitudes in Fig. 10, built up from nucleon and Δ exchange, are noticeably larger than the NN → KK Born amplitudes, which are suppressed because of the high mass of the exchanged baryons Λ, Σ, Y*. Due to different mass ratios this is no longer true in the hyperon-antihyperon channels. For instance, in the ΣΣ channel shown in Fig. 11 the coupling to KK via Δ exchange even dominates, due to the large coupling constant (f^2_{ΣΔK} = f^2_{NΔπ}) and the small mass difference between Σ and Δ (M_Δ − M_Σ ≈ 39 MeV). Therefore, already from these results it is to be expected that correlated KK exchange processes play a minor role in the NN system but are important for the interactions involving hyperons.
The BB′ → µµ transition amplitudes

The BB′ → µµ helicity amplitudes T^J_±(t), which are obtained from the solution of the BbS scattering equation 68, consist of Born terms and of pieces containing meson-meson correlations. The latter part fixes the phase of the BB′ → µµ amplitudes and generates the discontinuities of T^J_±(t) along the unitarity cut. It contains ππ and KK correlations in the form of ππ, KK amplitudes. Corresponding predictions of our field-theoretic model for the scalar-isoscalar on-shell amplitude T^{J=0,I=0}(t) are shown in Fig. 12. By comparing the ππ amplitude with the ππ phase shifts (cf. Fig. 8) we see immediately that the vanishing of the real part of the amplitude at t = 37 m_π^2 and t = 71 m_π^2 corresponds to the phase shift passing through 90 and 270 degrees, respectively. The phase value of 180 degrees (and the corresponding vanishing of the amplitude) occurs just below the KK threshold (t = 50.48 m_π^2). Remarkably, there is a steep rise of the imaginary part of the ππ amplitude at low energies, which has to vanish at the ππ threshold for unitarity reasons. We mention that, in the NN system, this part provides the dominant contribution to correlated ππ exchange. It should be noted that both the real and the imaginary part of the KK → KK amplitude (which in Fig. 12 are shown only in the physical region above the KK threshold) acquire a non-vanishing value at the KK threshold, due to the open ππ channel.

Figs. 13 and 14 show the resulting BB′ → ππ amplitudes for BB′ = NN, ΣΣ. Besides the predictions of the full model (solid lines), the figures also show the result obtained when the BB′ → KK Born amplitudes are neglected in the σ channel (dashed lines) and when only the ρ-pole amplitudes are kept in the ρ channel (dash-dotted lines). The BB′ → KK amplitudes (dotted lines) are shown only for J = 0, since for J = 1, as exemplified for the ΣΣ channel, they provide almost negligible contributions, mainly because of the weak KK interaction in that partial wave. Note that in the σ channel, due to the imaginary Born amplitudes, the roles of the real and imaginary parts of T^0_+ are in a certain sense interchanged: Im[T^0_+] contains the Born amplitudes, whereas the discontinuity along the unitarity cut is contained in Re[T^0_+].

Obviously, in all channels, the structure of the ππ amplitude of Fig. 12 can be recognized in those BB′ → ππ amplitudes T^0_+ which have been evaluated without the BB′ → KK Born amplitudes (dashed curves). The steep rise of Im[T^0_+] at the ππ threshold originates from the analogous behavior of the Born amplitudes (cf. Figs. 10 and 11). As follows from the unitarity relation for the BB′ → ππ amplitudes below the KK threshold (t_thr ≈ 50.48 m_π^2), the discontinuity of the BB′ → ππ amplitudes (contained in Re[T^0_+]) must vanish at t ≈ 50.43 m_π^2, since there T^{J=0,I=0} = 0. Furthermore, those amplitudes T^0_+ which have been evaluated without the KK Born amplitudes vanish at one additional, though for each channel different, position in the energy region below the KK threshold. Interestingly, this vanishing of the amplitudes is lost in the hyperon-antihyperon channels when the KK Born amplitudes are included; in the NN → ππ channel, on the other hand, the resulting full amplitude still vanishes, at a slightly different position (t ≈ 50.2 m_π^2). Note that the quasiempirical NN → ππ amplitude f^0_+ in Fig.
9 has a comparable structure; however, the amplitude vanishes already at a somewhat smaller value of t ≈ 43 m_π^2. Fig. 15 also clearly shows the cusp structure of the amplitudes at the KK threshold, which is much stronger in the hyperon-antihyperon channels. The BB′ → KK Born amplitudes do not play any role in the NN → ππ channel, as noted already in Refs. [14,17]. In the hyperon-antihyperon channels, on the other hand, we have important contributions to the BB′ → ππ amplitudes T^0_+ already at small t-values, which lead to an enhancement of the amplitudes below the KK threshold and to a noticeably different energy dependence.

The amplitudes T^1_± in the ρ channel are dominated by the resonance structure in the region of the physical ρ mass at t = 30.4 m_π^2. We stress that, through the multiplication with the ππ correlations in the scattering equation, the baryon-exchange Born terms also lead to resonant contributions. However, in all channels, the discontinuity of the amplitudes arises mainly (to at least 60%) from the ρ-pole terms and the contributions generated by them in the scattering equation (dash-dotted curves in Figs. 13 and 14). As shown in Fig. 14 for the ΣΣ → KK transition, the BB′ → KK on-shell amplitudes (dotted curves) are completely unimportant in the ρ channel. In the σ channel, on the other hand, the BB′ → KK amplitudes can acquire large values, especially near the KK threshold, due to the corresponding behavior of the KK amplitude T^{J=0,I=0} (cf. Fig. 12), which leads to non-negligible contributions to the spectral functions.
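Before turning to the spectral functions, the phase-amplitude connection used above for the ππ amplitude (Fig. 12) can be made concrete: for an elastically unitary partial wave T = e^{iδ} sin δ, the real part vanishes whenever δ passes through 90 or 270 degrees, and the full amplitude vanishes at 180 degrees. A one-line check in Python:

import numpy as np

# Elastic unitarity: T = exp(i*delta) * sin(delta)
for deg in (45, 90, 135, 180, 270):
    d = np.radians(deg)
    T = np.exp(1j * d) * np.sin(d)
    print(deg, round(T.real, 3), round(T.imag, 3))   # Re T = 0 at 90, 180, 270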
The spectral functions

Based on the BB′ → µµ amplitudes in the pseudophysical region determined in the last section, we can now in principle evaluate the spectral functions ρ^{σ,ρ}_i (defined in Eq. 51) of correlated ππ and KK exchange for all baryon-baryon channels containing the octet baryons N, Λ, Σ, and Ξ. In this work, however, we restrict ourselves to those channels for which experimental information is available at present or in the near future. Besides the baryon-baryon channels with strangeness S = 0 and S = −1, for which numerous (NN) and scarce (NΛ, NΣ) scattering data exist, we will also consider the S = −2 channels ΛΛ, ΣΣ, and NΞ, which are relevant both for the description of ΛΛ- and Ξ-hypernuclei and for the study of the H-dibaryon [11] predicted in the (S = −2, J = I = 0) channel.

Fig. 16a (Fig. 16b) shows the spectral function ρ^σ_S(BB′) in the σ channel predicted by the full microscopic model for the S = 0, −1 channels NN, NΛ, NΣ (the S = −2 channels ΛΛ, ΣΣ, NΞ). Due to isospin conservation, ππ and KK exchange contribute to the NΛ and ΛΛ interactions only in the σ channel and to the NΛ → NΣ transition only in the ρ channel. Up to the vicinity of the KK threshold at t = 50.48 m_π^2, ρ^σ_S is negative in all channels. Therefore, as expected, correlated ππ and KK exchange provides attractive contributions in all channels. Especially at small t-values, which determine the long-range part of the correlated exchanges and therefore yield the main contributions to the dispersion integral for low-energy s-channel processes, the spectral function ρ^σ_S(NN) is larger by about a factor of 2 than the results for ρ^σ_S(NΛ) and ρ^σ_S(NΣ), which are of about the same size. Due to the sizable contributions of the KK Born amplitudes to the hyperon-antihyperon amplitudes found in the last section, ρ^σ_S(NΛ) as well as ρ^σ_S(NΣ) show a noticeably different t-dependence than ρ^σ_S(NN). Below the KK threshold the spectral functions are broadened towards larger t-values, so that the overall range of this correlated exchange is reduced. This is even more true for the S = −2 channels ΛΛ and ΣΣ in Fig. 16b, since there the hyperon-antihyperon amplitudes enter quadratically. The effect of the BB′ → KK Born amplitudes is shown once more in Fig. 17 for the NN and NΣ channels: whereas they have a small influence on ρ^σ_S(NN), they provide important contributions to ρ^σ_S(NΣ) already near the ππ threshold.

Fig. 18 shows the corresponding non-vanishing spectral functions in the ρ channel (ρ^ρ_S, ρ^ρ_V, ρ^ρ_T, and ρ^ρ_6) for the various particle channels. (Note that ρ^ρ_P depends linearly on the four functions shown, cf. Eq. 37, and is therefore not displayed.) ρ^ρ_6 contributes only in BB′ channels with different masses and therefore vanishes in the NN and ΣΣ channels. According to Eq. 35, ρ^ρ_S(s, t) depends on s through the factor cos ϑ_t(s, t). In order to enable a comparison of the results for ρ^ρ_S(s, t) in the various particle channels, we have throughout set s at the threshold of the s-channel process in question, i.e. s = (M_B + M_B′)^2. Since the ρ channel is dominated by the resonant pieces in the ππ channel and the KK channel does not play any role there, the ρ^ρ_i(BB′) possess almost the same t-dependence in all particle channels.

For the NN channel, the spectral functions of correlated ππ and KK exchange can be derived either from the NN → ππ amplitudes of our microscopic model or, alternatively, from the quasiempirical results obtained in Refs. [18,19]. As before, we then have to subtract the uncorrelated pieces evaluated from the microscopic model for the NN → ππ Born amplitudes; therefore the results derived from the quasiempirical amplitudes depend on parameters (e.g. g_{NNπ}) of the microscopic model. In Fig. 19 we show the NN spectral functions in the σ as well as in the ρ channel obtained from our microscopic model (solid lines) and from the quasiempirical amplitudes (dashed lines). As expected from the comparison of the amplitudes in Sect. 3.1, the results agree quite well in the ρ channel. On the other hand, some discrepancies occur in the σ channel. At smaller t-values, which determine the long-range part of the correlated exchanges, the theoretical model yields somewhat more attraction than the quasiempirical results. Larger discrepancies occur at higher t-values, which are, however, of minor relevance for the correlated exchange in the NN interaction: above t ≈ 30 m_π^2 the correlated exchanges are of shorter range than ω exchange, which generates the strongly repulsive inner part of the NN potential (as well as of the other baryon-baryon potentials). The short-ranged parts of the correlated ππ exchange are therefore completely masked by the repulsive ω exchange and have only a small influence on NN observables. Furthermore, one has to bear in mind that the quasiempirical results, obtained by extrapolating data from the physical regions of the s- and u-channels into the pseudophysical region of the t-channel, carry considerable uncertainties.

3.5 The potential of correlated ππ and KK exchange

Using the spectral functions of the last chapter we can now evaluate the dispersion integrals, Eqs. 48-50, in order to obtain the invariant amplitudes in the s-channel. Eq. 53 then provides the (on-shell) baryon-baryon interaction due to correlated ππ and KK exchange.
The effective coupling constants

The results will be presented in terms of the effective coupling strengths G^{σ,ρ}_{AB→CD}(t), which were introduced in Eqs. 54, 56 to parametrize the correlated processes by (sharp-mass) σ and ρ exchange. We stress once more that this parametrization does not involve any approximations as long as the full t-dependence of the effective coupling strengths is taken into account. The parameters of σ and ρ exchange (mass of the exchanged particle: m_σ, m_ρ; cutoff mass: Λ_σ, Λ_ρ) are chosen to have the same values in all particle channels. m_σ and m_ρ have been set to the values used in the Bonn-Jülich models of the NN [10] and YN [3] interactions, i.e. m_σ = 550 MeV, m_ρ = 770 MeV. The cutoff masses have been chosen such that the coupling strengths in the S = 0, −1 baryon-baryon channels vary only weakly with t. The resulting values (Λ_σ = 2.8 GeV, Λ_ρ = 2.5 GeV) are quite large compared to the values of the phenomenological parametrizations used in Refs. [10,3] and thus represent very hard form factors. Note that, unless stated otherwise, the upper limit t′_max of the dispersion integrals is set to 120 m_π^2.

Fig. 20 shows the effective coupling strengths G_σ^{AB}(t) in the baryon-baryon channels considered here, as functions of −t. With the exception of G_σ^{ΣΣ}(t) (dash-dotted curve), the effective σ coupling strengths vary only weakly with −t, which shows that 550 MeV is a realistic choice for the σ mass. In the ΣΣ channel the suitable mass lies somewhat higher, due to the strong coupling to the KK channel generated by Δ exchange, cf. Fig. 11. If we compare the relative strengths of effective σ exchange in the various baryon-baryon channels, we observe the same features found already for the spectral functions: the scalar-isoscalar part of correlated ππ and KK exchange is in the NN channel about twice as large as in the two YN channels and larger by a factor 3-4 than in the S = −2 channels.

For sharp-mass σ exchange the relation

G^σ_{NN→NN}(t) · G^σ_{ΣΣ→ΣΣ}(t) = [G^σ_{NΣ→NΣ}(t)]^2

holds; in other words, the three processes are determined by two coupling constants g_{σNN} ≡ G^σ_{NN→NN} and g_{σΣΣ} ≡ G^σ_{ΣΣ→ΣΣ}, such that G^σ_{NΣ→NΣ}(t) = g_{σNN} g_{σΣΣ}. For correlated exchanges this is not necessarily true anymore; here we have in general

G^σ_{NN→NN}(t) · G^σ_{ΣΣ→ΣΣ}(t) ≥ [G^σ_{NΣ→NΣ}(t)]^2,

which is just a consequence of the Schwarz inequality,

|∫ f g dt′|^2 ≤ ∫ |f|^2 dt′ · ∫ |g|^2 dt′.

The equality holds if both functions have the same t-dependence. This is roughly fulfilled for NN and ΛΛ, but not for ΣΣ; therefore the equality holds approximately in the first but not in the second case. Consequently, effective vertex couplings for correlated exchanges are in general not well defined, since they may take different values in different baryon-baryon channels.

Tab. 5 contains the coupling strengths at t = 0. Besides the results of our full model for correlated ππ and KK exchange, the table includes results obtained when the BB → KK Born amplitudes are neglected. Obviously the inclusion of these amplitudes provides only about 15% of the coupling strength in the NN channel; it is much more important in the channels with strangeness, in fact providing the dominant part in the S = −2 channels. Furthermore, the table contains results obtained when only the uncorrelated contributions involving spin-1/2 baryons are subtracted from the discontinuities of the invariant baryon-baryon amplitudes, as is appropriate to avoid double counting when a simple OBE model is used in the s-channel.
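The inequality above follows from viewing the spectral functions as weighted products of channel amplitudes, ρ_AB ∝ f_A f_B (cf. the unitarity relation); the Cauchy-Schwarz inequality then bounds the mixed integral. A small numerical illustration with invented channel profiles f_N and f_Σ:

import numpy as np

# Cauchy-Schwarz bound for effective couplings of correlated exchange:
#   Int w f_N^2 * Int w f_S^2 >= (Int w f_N f_S)^2  =>  G_NN G_SS >= G_NS^2
tp = np.linspace(0.1, 2.0, 2000)
w = 1.0 / (tp + 0.3)                       # positive weight 1/(t' - t) for t = -0.3
f_N = np.exp(-tp)                          # invented channel profiles
f_S = np.exp(-2.0 * tp)                    # different t'-dependence: strict inequality

G_NN = np.sum(w * f_N * f_N)
G_SS = np.sum(w * f_S * f_S)
G_NS = np.sum(w * f_N * f_S)
print(G_NN * G_SS >= G_NS**2)              # True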
For the full Bonn N N model contributions involving spin-3/2 baryons have to be subtracted too (as done in general in this work) since corresponding contributions are already treated explicitly in the s-channel. Obviously processes involving spin-3/2 baryons increase the 'true' correlated contribution by about 30% in all channels. In the ρ-channel the spectral functions are dominated by the resonant contributions in the region of the ρ resonance. Therefore the effective coupling strengths ij G ρ AB→CD (t) (ij = V V, V T, T V, T T ) vary even more slowly with t than those in the σ-channel. Because of this weak t-dependence it is for the moment sufficient to consider only the values of coupling strengths at t = 0. They are shown for the N N channel in Tab. 6, for Y N in Tab. 7 and in Tab. 8 for the S = −2 baryon-baryon channels. Note that for the equal (unequal) mass case the results are given in terms of 3 (4) coupling strengths. For equal masses the present description in terms of correlated ππ exchange is more involved compared to sharp mass ρ-exchange since the latter can be characterized by two parameters only, the vector and tensor coupling constant. Also, as in the scalar channel, there is no definite relation between coupling strengths in the various channels so that vertex coupling constants cannot be uniquely extracted but depend on the channel chosen. (Thus it is not surprising that our ρN N coupling strengths, which are determined in the N N system, are not fully consistent with the vector and tensor coupling constants derived by Höhler and Pietarinen [18,19] from πN scattering, though both calculations agree of course qualitatively.) Tables 6-8 include results obtained when only the ρ-pole terms are considered in the BB ′ → µµ Born amplitudes. In this case the effective coupling strengths are still SU (3) symmetric, i.e. they roughly fulfill the relations with α e v = 1 and α m v = 0.4. (We remind the reader that we have chosen the bare couplings to exactly obey SU (3) symmetry.) Obviously the influence of (SU (3) broken) baryon masses on the ρ-pole contributions is small, probably because our calculations are performed in the pseudophysical region, far below the baryon-baryon thresholds. As expected however the effective coupling strengths of the complete calculation do not respect SU (3) symmetry anymore, due to the sizable influence of the (non-pole) baryon-exchange processes. Doing again a restricted subtraction of spin-1/2 baryon contributions only (suitable for an OBE model) we now do not have a unique trend for the change of coupling strengths in all channels, as found before in the scalar case. In case of the vector coupling strength (V V ) in the N N system the restricted subtraction leads even to a smaller value. This is not too surprising if one realizes that the ρ-vector coupling strengths are obtained from differences of approximately equal spectral functions so that well-controlled changes in the spectral functions can lead sometimes to large modifications of coupling strengths, in arbitrary direction. At this point we would like to make a remark about the sensitivity of our results to the upper limit in the dispersion integral, t ′ max . It is in fact quite small: Lowering t ′ max from the generally used value of 120m 2 π to 80m 2 π the effective σ-coupling strengths are increased by less than 10%; in the ρ-channel the variations are even smaller. Moreover, ratios of coupling strengths in the various channels are practically unchanged. 
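The SU(3) relations referred to above do not survive in this copy of the text. For orientation, the standard octet relations for the bare ρ couplings, with g ≡ g_{NNρ} and α_v the vector F/(F+D) ratio, read

\[
g_{\Sigma\Sigma\rho} = 2\alpha_v\, g, \qquad
g_{\Lambda\Sigma\rho} = \tfrac{2}{\sqrt{3}}\,(1-\alpha_v)\, g, \qquad
g_{\Xi\Xi\rho} = -(1-2\alpha_v)\, g, \qquad
g_{\Lambda\Lambda\rho} = 0,
\]

so that α^e_v = 1 corresponds to universal coupling of the ρ to the isospin current (g_{ΣΣρ} = 2g and a vanishing ΛΣρ coupling), while the analogous set with α^m_v = 0.4 fixes the bare tensor couplings f. Whether these are exactly the relations intended in the source is an assumption; they are the conventional octet relations.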
Two points remain to be addressed: (i) For the couplings of baryons to pseudoscalar mesons we have assumed in our calculations SU (3) symmetry to be realized for pseudoscalar-type coupling, cf. Sect. 2.4.1. Results are different when the same symmetry is assumed for couplings of pseudovector type, since (apart from the N N π coupling) BB ′ µ couplings are then increased by a factor (M B +M ′ B )/2M N leading to much stronger σ-coupling strengths in the strange baryon-baryon channels, see Table 9. (ii) We have assumed for the F/(F + D) ratios α p = α m v = 0.4 predicted by the static quark model. If we change α p by about 10% (to 0.45) the resulting changes in the effective σ-coupling strengths are much less than 10%. The reason is that part of the Born amplitudes are increased while others are decreased, so that the total effect is quite small. The situation is completely different for α m v , which determines the bare tensor couplings f (0) BB ′ ρ in the ρ pole graph. The same increase of α m v (to 0.45) leads to strong modifications in the ρ coupling strengths, especially for the tensor-tensor part. Comparison with other models In the N N channel we can also determine the effective σ-and ρ-coupling strengths from the quasiempirical N N → ππ amplitudes [18,19]. Corresponding spectral functions have already been discussed before. We now show in Fig. 21 the results for the product of effective coupling strengths and form factors obtained from our microscopic model and the quasiempirical amplitudes [18,19], in comparison to values used in the (full) Bonn potential [10]. Since the quasiempirical amplitudes are available only up to t ′ max = 50m 2 π , a corresponding cutoff is used in the dispersion integral. If we use the same lower cutoff also for our model corresponding results essentially agree with those obtained for the quasiempirical amplitudes. Obviously the slightly stronger increase of the spectral function ρ σ S of the microscopic model at small t ′ roughly compensates for the larger maximum of the quasiempirical spectral function at t ′ ≈ 20m 2 π . Furthermore we obtain a considerable reduction of strength in the σ-channel from inclusion of the repulsive contributions above t ′ = 50m 2 π whereas in the ρ-channel such pieces have only a very small influence. Especially in the ρ-channel our results are considerably larger than the values used in the Bonn potential. The reason is the form factor with Λ N N ρ = 1.4 GeV used in the Bonn potential, which reduces the strength at t = 0 by as much as 50%. One final point remains to be addressed: Our present results differ considerably (by up to 30%) from our former calculations based on a different microscopic model for the N N → ππ amplitudes. The reason for these discrepancies (in the spectral functions and effective coupling strengths) is that the subtraction of the uncorrelated terms from the discontinuities is model-dependent. Both models differ in the parametrization of ∆ exchange (75); furthermore the model of Ref. [12] does not include the ρ-pole term in the transition amplitude. Still both models provide a similar description of the quasiempirical data. The average size of the effective coupling strengths is only a rough measure of the strength of correlated ππ and KK exchange in the various particle channels. The precise energy dependence of the correlated exchange as well as its relative strength in the different partial waves of the s-channel reaction is determined by the spectrum of the exchanged invariant masses, i.e. 
the spectral functions, leading to a different t-dependence of the effective coupling strengths. Fig. 22 shows the on-shell N N potentials in spin-singlet states with angular momentum L = 0, 2, and 4, which are generated by the scalar-isoscalar part of correlated ππ and KK exchange. As expected it is attractive throughout. Slight differences occur between the potentials derived from the microscopic model for the BB ′ → µµ amplitudes and those determined from the quasiempirical N N → ππ amplitudes, which can be traced to differences in the spectral function ρ σ S (t) (cf. Fig. 19): For small t ′ the microscopic input is larger; therefore the corresponding potential in high partial waves (which is dominated by the small-t ′ behavior) is by about 20% larger than the quasiempirical result. In the 1 S 0 partial wave, on the other hand, medium and short ranged exchange processes characterized by larger t ′ -values contribute. In this region the microscopic amplitudes are considerably weaker; furthermore they contain the repulsive contributions above the KK threshold (cf. Fig. 16). Consequently the resulting potential is somewhat less attractive in the 1 S 0 partial wave. In agreement with Ref. [12] our present results (evaluated either from the microscopic model or the quasiempirical amplitudes) are stronger than σ ′ -exchange of the full Bonn potential. The difference is especially large in high partial waves since σ ′ -exchange, which corresponds to a spectral function proportional to δ(t ′ − m 2 σ ), does not contain the long-range part of the correlated processes. Indeed if we parametrize our results derived from the microscopic model by σ-exchange as before (Sect. 3.5.1) but use for the effective coupling strength G σ N N →N N (t) the constant value at t = 0 we obtain rough agreement with our unapproximated result in the 1 S 0 partial wave but underestimate it considerably in high partial waves. Obviously the replacement of correlated ππ and KK exchanges by an exchange of a sharp mass σ meson with t-independent coupling strength cannot provide a simultaneous description of low and high partial waves. It is interesting to compare our results for the effective σ-and ρ-coupling strengths in baryon-baryon channels with non-vanishing strangeness with those used in the hyperonnucleon interaction models of the Nijmegen [4,5,6] and Jülich [3] groups. These are (with the exception of the Jülich model B which will not be considered in the following) OBE models, i.e. σ (and ρ) exchange effectively include uncorrelated processes involving the ∆-isobar. Therefore we have to use for comparison dispersion-theoretic results in which only uncorrelated processes involving spin-1/2 particle intermediate states have been subtracted. Table 10 shows the relative coupling strengths in the different baryonbaryon channels for various models, in the σ-channel. Apart from the Nijmegen model D [4], in which the scalar ǫ-meson is treated as an SU (3) singlet and therefore couples with the same strength to all channels, the interaction is by far strongest in the N N channel for all remaining models, and it becomes weaker with increasing strangeness. Obviously, the Nijmegen soft core model [6] is nearest to the dispersion-theoretic predictions. Table 11 shows the analogous results in the ρ-channel for the vector-vector (V V ) and tensor-tensor (T T ) components. 
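The contrast drawn above between the full correlated exchange and sharp-mass σ' exchange can be made explicit: a δ-function spectral function reproduces the familiar one-boson-exchange form. As a sketch,

\[
\rho^{\sigma}_{S}(t') = \pi\, g_{\sigma NN}^{2}\,\delta\!\left(t'-m_\sigma^{2}\right)
\;\;\Longrightarrow\;\;
\frac{1}{\pi}\int \frac{\rho^{\sigma}_{S}(t')}{t'-t}\,\mathrm{d}t' \;=\; \frac{g_{\sigma NN}^{2}}{m_\sigma^{2}-t},
\]

which has no support at small t' and therefore cannot generate the long-range tail of the correlated processes that dominates the high partial waves.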
There are sizable differences between the effective coupling constants from correlated exchange and the coupling constants of the OBE-models as well as among the OBE-models themselves. The latter models assume the vector coupling to the isospin current to be universal, which fixes its relative strength in the different particle channels (apart from form factors included in the Jülich model): In the N Σ channel it is twice as large as in the N N channel and vanishes for the transition N Λ → N Σ. The correlated exchange result deviates strongly, which is another manifestation that SU (3) symmetry does not hold for correlated exchanges, even if it is assumed for the bare ρBB ′ couplings present in our microscopic model. Summary and Outlook An essential part of baryon-baryon interactions is the strong attraction of medium range, which in one-boson-exchange models is parametrized by an exchange of a fictitious scalar-isoscalar meson with a mass of about 500M eV . In extended meson exchange models this part is naturally generated by two-pion-exchange processes. Besides uncorrelated processes correlated terms have to be considered in which both pions interact during their exchange; in fact these terms provide the main contribution to the intermediate-range interaction. In the scalar-isoscalar channel of the ππ interaction the coupling to the KK channel plays a strong role, which has to be explicitly included in any model meant to be realistic for energies near and above the KK threshold. As kaon exchange is an essential part of hyperon-nucleon interactions a simultaneous investigation of correlated ππ and KK exchanges is clearly suggested. In this work we have therefore derived the correlated ππ as well as KK exchange contributions in various baryon-baryon channels. Starting point of our calculations was a microscopic model for the transition amplitudes of the baryonantibaryon system (BB ′ ) into two pseudoscalar mesons (ππ, KK) for energies below the BB ′ threshold. The correlations between the two mesons have been taken into account by means of ππ − KK amplitudes (determined likewise fieldtheoretically [13,17]), which provide an excellent reproduction of empirical ππ data up to 1.3GeV . With the help of unitarity and dispersion-theoretic methods we have then determined the baryon-baryon amplitudes for correlated ππ and KK exchange in the J P = 0 + (σ) and J P = 1 − (ρ) t-channel. In the σ-channel the strength of correlated ππ and KK exchange decreases with the strangeness of the baryon-baryon channels becoming more negative. In the N N channel the scalar-isoscalar part of correlated exchanges is by about a factor of 2 stronger than in both hyperon-nucleon channels (ΛN , ΣN ) and by a factor 3 to 4 stronger than in the S = −2 channels (ΛΛ, ΣΣ, N Ξ). The influence of KK exchange is strong in baryon-baryon channels with non-vanishing strangeness while it is small in the N N channel. This feature can be traced to different coupling constants and isospin factors and especially to the different masses involved in the various baryon-antibaryon channels. The role of correlated KK exchange is small in the ρ-channel. Here the correlations are dominated by the (genuine) ρ-resonance in the ππ interaction. Among the various BB ′ − ππ, KK Born amplitudes the direct coupling of the ρ-resonance to the baryons in the form of a ρ-pole graph provides the dominant contribution to correlated exchange. 
It turns out that our results depend only slightly on the upper limit (cutoff) introduced in the dispersion integral. Some uncertainty results from applying SU (3) resp. SU (6) relations to either pseudoscalar or pseudovector π and K coupling constants. Note that the same problem occurs already in OBE-models of the hyperon-nucleon interaction. Ultimately it has to be decided by comparison with experiment which procedure is to be preferred. Moreover, if instead of SU (6) symmetry SU (3) symmetry is assumed only, the results for correlated exchanges depends on the F/(F + D) ratios α p and α m v . While the dependence on α p is only weak variation ofα m v leads to noticeable changes in the model predictions for the correlated exchange in the ρ-channel. Also here a final decision about the correct choice of α p and α m v can be made only by comparison with experiment. Again these parameters occur already in OBE hyperon-nucleon models. Therefore no new parameters are introduced when including correlated ππ and KK exchange in baryon-baryon interaction models. On the contrary, the elimination of single σ and ρ exchange reduces the number of free parameters and thus enhances the predictive power of corresponding interaction models. Our results can be represented in terms of suitably defined effective coupling strengths. It turns out that the resulting values in the various baryon-baryon channels are not connected by SU (3) relations. For example, although we have even assumed SU (6) symmetry for the coupling strength of the bare ρ to the baryon current sizable baryon exchange processes destroy this symmetry in the final effective couplings. Consequently the assumption of SU (3) symmetry for single σ-and ρ-exchange is not supported by our findings. With this model constructed in the present work it is now possible to take correlated ππ and KK exchange reliably into account in the various baryon-baryon channels. Especially in channels in which only little empirical information exists the elimination of phenomenological σ-and ρ-exchange considerably enhances the predictive power of baryon-baryon interaction models. Clearly the inclusion of correlated exchange in existing interaction models (e.g. the Bonn N N potential [10] and the Jülich Y N models [3]) requires readjustment of free model parameters to the empirical data. Having fixed these parameters in the N N and Y N channel the interaction model can then be extended parameter-free to other baryon-baryon channels with strangeness S = −2 using SU (3) arguments for the genuine couplings. In this way in the frame of the Bonn-Jülich models the possibility arises for the first time to make sensible statements about the existence of bound baryon-baryon states with strangeness S = −2, which should be of some importance regarding the analysis of H-dibaryon experiments. Table 5: Effective σ coupling strengths G σ AB→AB (t = 0) for correlated ππ and KK exchange in the various baryon-baryon channels. (The meaning of the several rows is given in the text.) Table 6: Effective ρ coupling strengths ij G ρ N N →N N (ij = V V, V T, T V, T T ) at t = 0 for correlated ππ and KK exchange in the N N channel. (The meaning of the several rows is given in the text.) Table 7: The same as Tab. 6, for the hyperon-nucleon channels N Σ and N Λ-N Σ. Table 9: Effective σ coupling strengths G σ AB→AB (t = 0) depending on the F/(F + D)-ratio α p . ps (pv) indicates that SU (3)-symmetry is assumed for the pseudoscalar (pseudovector) coupling constants g BB ′ µ (f BB ′ µ ). 
The column 'ps, α_p = 0.40' agrees with the results of the full model in Tab. 5.

Table 10: Strength of the σ-like contributions (at t = 0) to the various baryon-baryon interactions relative to the NN channel. In case of the dispersion-theoretic result for ππ and KK exchange only those uncorrelated contributions are subtracted which are generated already in the s-channel by the iteration of an OBE potential (cf. Tab. 5). In case of the OBE models the numbers are extracted from the coupling constants of the σ- (OBEPT & Jülich A, with inclusion of form factors and averaging over the several isospin channels) or the ε-meson (Nijmegen models and the ΞΞε coupling from SU(3)).

Figure 21 (beginning of caption missing): … [18,19] obtained by analytic continuation of the πN and ππ scattering amplitudes. The solid (dotted) line is derived from the microscopic model for correlated ππ and KK exchange using t'_max = 120 m²_π (t'_max = 50 m²_π, only for g²_σNN). The dashed line follows from the quasiempirical NN → ππ amplitudes [18,19]. The effective strength of σ' and ρ exchange in the Bonn potential [10] is denoted by the dash-dotted line.

Figure 22: The σ-like part of the NN on-shell potential in various partial waves as a function of the kinetic energy in the laboratory system. The solid line is derived from our microscopic model for correlated ππ and KK exchange (with t'_max = 120 m²_π). The dotted lines are obtained if this dispersion-theoretic result is parametrized by σ exchange and the coupling strength G^σ_{NN→NN}(t) is subsequently set to the constant value at t = 0. The dispersion-theoretic calculation using the quasiempirical NN → ππ amplitudes of (caption truncated)
Adsorption Isotherms of Phenol Onto Adsorbents Derived from Egg Shell and Palm-Oil Shell

The adsorption isotherms of phenol from aqueous solution onto adsorbents obtained from egg shell (ESA) and palm shell (PSA) were investigated. The objectives of the investigation were to understand the effect of both adsorbents on solution pH and to study the adsorption equilibrium of phenol onto the adsorbents. The effect of adsorbent on the pH of solution was studied by shaking 0.1 to 1.5 g of adsorbent with 100 ml of acidic aqueous solution for 30 min at room temperature. The adsorption experiments were performed by stirring an appropriate amount of adsorbent with 100 ml of a 50 mg/l phenol solution at constant temperature and pressure. The Langmuir and Freundlich adsorption models were applied to the experimental data and the isotherm constants were calculated using linear regression analysis. The results showed that the adsorption capacity of the adsorbents increases with increasing dosage and contact time. Also, the pH of solution affected the adsorption isotherm of phenol, with maximum adsorption observed at pH values lower than 9.

INTRODUCTION

Phenol and phenolic derivatives are commonly encountered in the aquatic environment (Gupta et al., 1998; Flock et al., 1999). Potential sources of phenolic compounds include the production and use of phenol and its products in industrial processes. Many industries, such as the petroleum, chemical, plastics, pharmaceutical, drug, wood, pulp and paper, and phenolic resin industries, release large quantities of wastewater containing various concentrations of phenolics (Juang et al., 1998; Asyhar, 2002a). Because of their toxicity and carcinogenicity, phenol and its derivatives are known as organic pollutants. Phenolic compounds are able to cause bad taste and odor, even at low concentration. According to Deanna and Shieh (1986), phenolic compounds are one of the 9 groups of harmful pollutants and are therefore included among the main parameters and priorities of environmental water and wastewater quality.

Investigations of the adsorption of phenol and phenolic derivatives have been reported by previous researchers. Juang et al. (1998) investigated the adsorption isotherm of phenol on activated carbon fiber, whereas Asyhar (2002b) applied petroleum coke as a raw material of activated carbon for the adsorption of phenol and 4-nitrophenol. This study was continued to investigate the equilibrium and adsorption model (Asyhar, 2002c). In the present paper, an investigation of adsorbents derived from chicken egg shells and palm-oil shell applied to phenol is reported. The choice of egg shell is based on investigations reported by Van der Weijden and Comans (1997) and Wenming et al. (2001), showing that such material is mainly composed of calcium carbonate, i.e. calcite and calcareous material, and should therefore act as an adsorbent. This assumption is supported by the report of Yeddou and Bensmaili (2007), which showed the adsorption activity of egg shell towards iron. Moreover, palm-oil shell is already well known as a source of activated carbon. The techniques for activated carbon production from this agricultural by-product were published by previous researchers (Wan Nik et al., 2006). Activated carbon from palm-oil shell has also been reported by Vitidsant et al. (1999).

Egg shell preparation.
Chicken egg shell samples obtained from Bakery Home Industries in Jambi City were washed with tap water and dried at 50 to 60 °C for 2 hours in an oven. The egg shell membranes were then separated and the shells washed three times with distilled water. Afterwards, the sample was placed in an electric oven and dried for 2 to 3 hours at 70 °C. The dried egg shell adsorbent (ESA) was crushed and screened through a set of sieves of 50 mesh size to obtain the appropriate particles.

Production of adsorbent from palm shell. The raw palm shell material was taken from the PTP Nusantara VI mill, Sungai Bahar, Kabupaten Batanghari, Jambi. The material was washed with water and dried in an oven. After drying, the shell was soaked in a 10% acid solution and immersed in this solution for 24 hours to loosen the fibre and traces. The wastes floating at the surface of the solution were then separated using distilled water. According to Wan Nik et al. (2006), this treatment is an important step to make sure the adsorbents produced are of good quality. A 20.0 g portion of the shell material was then immersed in 100 ml of freshly prepared 30% H3PO4 solution. The mixture was carbonized at 600 °C for 2 hours in a stainless-steel tube of 20 cm inside diameter and 50 cm length, as designed by Vitidsant et al. (1999). The activated carbon products were washed several times with water until neutral pH was obtained and then dried in an oven at 105 °C for 2 hours. The palm shell adsorbent (PSA) was then crushed and sieved to obtain particles of 50 mesh.

Effect of adsorbent on pH of solution. The effect of the adsorbent material on the pH of solution was studied by shaking the adsorbent, at dosages varied from 0.1 to 1.5 g, with 100 ml of acidic aqueous solution (pH = 3.2) for 30 min at room temperature (approximately 30 °C). The solid materials were filtered out of the solution through a filter paper before the pH was measured with a digital pH meter.

Isotherm adsorption experiments. The reaction vessels used in all experiments were 100 ml Erlenmeyer flasks with glass stoppers. Each flask was prewashed with 10% HNO3 solution, rinsed with demineralized water, and dried at 110 °C for 2 hours prior to use. A magnetic stirrer was used to mix the solution, and a glass beaker, serving as a conventional thermostat, was used to maintain a constant temperature. The experiments were undertaken in the dark by covering the outside of the beakers with aluminum foil. The beakers were maintained within ±1 °C of the desired temperatures at normal pressure (1 atm).
Experiments were performed by adding 100 ml of phenol solution with a 50 mg/l concentration to 0.1 to 2.0 g of adsorbent. Twenty drops of buffer, prepared according to the procedure described in Asyhar (2002a), were added to adjust the pH to values from 2 to 12. The flasks were then put in a thermostat at a constant temperature of 30 °C and stirred continuously at a speed of 300 rpm for a period of 0 to 180 min. After equilibration, the supernatant liquid was filtered through 125 mm blue-ribbon filter paper (S & S) prior to analysis, in order to minimize interference of the adsorbent fines with the analysis. The concentrations of phenol in the residual solutions were determined using UV-visible spectrophotometry at a wavelength of 270 nm. The phenol uptake was calculated from

q = (C_i − C_f) V / m,

where q represents the amount of phenol uptake per unit mass of adsorbent (mg/g), V is the volume of the solution (l), m is the dry mass of the adsorbent (g), and C_i and C_f are the initial and final concentrations (mg/l), respectively.

RESULTS AND DISCUSSION

Effect of sorbents on pH of solution. Figure 1 shows that the adsorbent obtained from egg shell significantly affects the pH of solution. It can be seen that the pH increases with increasing egg shell dosage. A maximum increase of pH was almost achieved by adding 0.5 g of egg shell to 100 ml of test solution, indicating the optimum dosage of the egg shell. The capability of egg shell to increase pH may be explained by the chemical composition of egg shell, which mainly contains basic components, i.e. calcite and calcareous materials (Yeddou & Bensmaili, 2007).

Different from egg shell, activated carbon from palm shell does not affect the pH of solution. It can be seen from Figure 1 that there is no significant difference in the height of the pH graphs before and after the agitating process, indicating that the adsorbent from palm shell has no influence on the pH of solution. It can be concluded that the adsorption capacity of the adsorbent derived from egg shell towards phenol is very small compared to that of palm-oil shell, indicating that the egg shell sorbent is not effective for the removal of phenol.

Effect of contact time. The adsorption experiments were carried out to determine the contact time required to attain the equilibrium condition. 0.5 g of adsorbent was stirred with 100 ml of test solution having a concentration of 50 mg/l at a constant temperature (30 °C) and an adjusted pH of 6.0 for a time period of 0 to 180 min. The quantity of phenol adsorbed from solution was plotted as a function of time. The adsorption of phenol onto both adsorbents generally shows a similar trend, where the adsorption process is a function of contact time, as depicted in Figure 2.
The amounts of phenol adsorbed by the adsorbents progressively increased as the contact time increased, and then gradually attained equilibrium in 30 min for the adsorbents derived from both egg shell (ESA) and palm shell (PSA). Within the first 15 min, the adsorption by PSA occurred rapidly, with more than 50% of the phenol concentration reduced within this period of time. No significant change in the percent uptake of adsorbate was observed after 60 min. The equilibrium of the adsorption was almost completely reached after 90 min. A similar trend of adsorption is shown by the adsorbent from egg shell (ESA), but with a lower capacity and a slower rate. The relatively small amount of phenol removed from its solution by ESA is attributed to a lack of interaction compared to PSA. In order to ensure the completion of the adsorption process of phenol, further experiments for evaluation purposes were done by stirring each reaction mixture for up to 180 min, even though an adsorption period of 90 min is sufficient to reach equilibrium.

Effect of adsorbent dosages. Figure 3 shows the reduction of phenol with variation of the amounts of the sorbents. As the amount of adsorbent used increased, the removal of the phenol also increased. The increase in the removal of phenol by PSA was much greater than that by ESA. It was observed that phenol was almost completely adsorbed from solution by applying 0.5 g of adsorbent from PSA. As for ESA, it was observed that the maximum amount of phenol removed from solution by 1 g of adsorbent was only 51.0 mg. The higher adsorption capability of the adsorbent from PSA is presumed to be a result of similar physical properties, as both phenol and activated carbon are known as neutral matter. ESA, as a carbonate-rich material, has a polar side in its molecules, and its polarity is higher than that of PSA, so that the interaction between ESA and the neutral phenol molecules in solution is weak.

Effect of pH on adsorption capacity. The effect of pH on the adsorption capacity of both adsorbents towards phenol was investigated. Both adsorbents adsorbed phenol strongly at pH values lower than 9. The horizontal plateaus in Figure 4 between pH values 2 and 10 indicate no significant influence of hydronium ions (pH) within these pH ranges.

Figure 2. Adsorption of phenol onto ESA and PSA as a function of time.

CONCLUSION

Investigations of the isotherm adsorption of phenol onto adsorbents derived from egg shell (ESA) and palm shell (PSA) were performed. The experimental results indicated that several factors, such as contact time, pH of solution and adsorbent dosage, affect the adsorption process. Adsorbents from egg shell increase the pH of solution, whereas adsorbents obtained from palm-oil shell did not significantly affect pH. The maximum adsorptions of phenol were observed at pHs below 9. The linearized plots obtained for the adsorption of phenol onto ESA and PSA were indicated by the linearities, R², confirming the validity of the Langmuir model for the adsorption. The isotherm analysis results show that the adsorptive behavior of phenol onto adsorbents from egg shell and palm shell agreed with the Langmuir rather than with the Freundlich isotherm.
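For readers wishing to reproduce the isotherm analysis, the sketch below fits the standard linearized Langmuir and Freundlich forms by linear regression, as the paper describes. The equilibrium data here are hypothetical placeholders, not values from this study, and the linearizations used (Ce/qe versus Ce, and log qe versus log Ce) are the common choices, assumed here to match those of the paper.

```python
import numpy as np

# Hypothetical equilibrium data: residual concentration Ce (mg/l)
# and uptake qe (mg/g). Replace with measured values.
Ce = np.array([2.0, 5.0, 10.0, 20.0, 35.0])
qe = np.array([1.8, 3.5, 5.1, 6.4, 7.2])

# Linearized Langmuir: Ce/qe = Ce/qm + 1/(KL*qm)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm = 1.0 / slope              # monolayer capacity (mg/g)
KL = slope / intercept        # Langmuir constant (l/mg)
r2_langmuir = np.corrcoef(Ce, Ce / qe)[0, 1] ** 2

# Linearized Freundlich: log(qe) = log(KF) + (1/n)*log(Ce)
fs, fi = np.polyfit(np.log10(Ce), np.log10(qe), 1)
KF = 10 ** fi                 # Freundlich constant
n = 1.0 / fs                  # heterogeneity factor
r2_freundlich = np.corrcoef(np.log10(Ce), np.log10(qe))[0, 1] ** 2

print(f"Langmuir:   qm={qm:.2f} mg/g, KL={KL:.3f} l/mg, R2={r2_langmuir:.3f}")
print(f"Freundlich: KF={KF:.2f}, n={n:.2f}, R2={r2_freundlich:.3f}")
```

Comparing the two R² values is then the basis for statements such as "the adsorption agreed with Langmuir rather than with the Freundlich isotherm."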
Exact Bayesian inference for the detection of graft-mobile transcripts from sequencing data The long-distance transport of messenger RNAs (mRNAs) has been shown to be important for several developmental processes in plants. A popular method for identifying travelling mRNAs is to perform RNA-Seq on grafted plants. This approach depends on the ability to correctly assign sequenced mRNAs to the genetic background from which they originated. The assignment is often based on the identification of single-nucleotide polymorphisms (SNPs) between otherwise identical sequences. A major challenge is therefore to distinguish SNPs from sequencing errors. Here, we show how Bayes factors can be computed analytically using RNA-Seq data over all the SNPs in an mRNA. We used simulations to evaluate the performance of the proposed framework and demonstrate how Bayes factors accurately identify graft-mobile transcripts. The comparison with other detection methods using simulated data shows how not taking the variability in read depth, error rates and multiple SNPs per transcript into account can lead to incorrect classification. Our results suggest experimental design criteria for successful graft-mobile mRNA detection and show the pitfalls of filtering for sequencing errors or focusing on single SNPs within an mRNA. . If a transcript that originates from the stock is found in scion tissue or a transcript from the scion is found in stock tissue, then a plausible interpretation is that it has moved over the graft junction. This inference depends on the ability to distinguish between mRNA molecules from the two grafted plants. Homologous genes from closely related organisms often have highly similar sequences. Sequences that are identical cannot be distinguished. We assume that distinguishable homologous mRNAs from the two grafted plants with different genetic backgrounds (genotypes) differ in a number of nucleotide positions, single-nucleotides polymorphisms (SNPs). Other differences, such as insertions or deletions, are possible but not considered here. Given that the numbers of transported transcripts are often low compared with the number of endogenous copies [18] and that sequencing errors can be expected [19,20], it is important to identify and account for potential errors in the data. In addition to sequencing errors, processing errors can arise from the differing qualities of the reference genomes, sequence divergence between the accessions used for grafting and the closest references genomes, and the level of sequence similarity between the genomes of the grafted plants. Our goal is thus to develop a framework for assessing from which genotype transcripts have originated. If we sample tissue from one of the grafted genotypes (local), we may find SNPs associated with the other genotype (distal). A challenge is to assess whether the RNA-Seq reads for these positions that could be indicative of transcripts that have crossed the graft junction can be explained by expected biological and technical variation in the local genotype. Here, we construct a probabilistic description of how to distinguish sequencing errors from graft-mobile transcripts using a Bayesian formalism for which we derive exact (analytical rather than numerical) solutions. Bayesian approaches [21,22] have been employed extensively in many fields, including sequence analysis and systems biology [23][24][25][26][27][28][29][30][31], but cases that are analytically tractable remain limited [32][33][34]. 
Analytical solutions do not require computationally expensive numerical approximations, such as Monte Carlo integration, that have become common practice in Bayesian inference. The presented approach takes read depths, SNP-specific error rates, replicates and multiple SNPs per transcript into account. We assume that the SNP positions themselves have been identified from the comparison of high-quality genomes or high-quality transcriptome data. We consider sequencing errors at those positions but do not account for uncertainty in those positions. We develop a Bayesian framework for evaluating the evidence for a transcript being graft-mobile, over the evidence for that data being consistent with sequencing or processing errors. This has the advantage that we can rank the transcripts by the data supporting their movement. We evaluate our approach using simulated data. This allows us to have confidence in the assignment as well as to examine the performance over a range of possible data properties (error rates, read depths and number of SNPs). We compare our methodology with other approaches, and based on true and false positive rates (TPR and FPR), we find that filtering SNPs with measurable errors and not taking error rates or read depths adequately into account have significant detrimental consequences on the classification performance. The proposed Bayesian approach overcomes these issues and with controlled simulated data can accurately identify graft-mobile transcripts, substantially outperforming other methods for classifying transcripts based on RNA-Seq reads. Furthermore, the Bayesian formalism can elegantly incorporate all SNPs found on each transcript. The analysis using simulated data shows that combining the information from multiple SNP positions increases classification accuracy. In fact, we find that over a large range of parameters the proposed method classifies simulated data near perfectly with an accuracy of close to 100%. We wish to identify homologous transcripts from the grafted plants that differ in a number of nucleotide positions (SNPs). If we perform RNA-Seq on material from the scion and stock of grafted plants and map the reads onto reference genomes of A and B, we can expect to find numbers of reads that correspond to A and/or B for each SNP. See Thieme et al. [13] for details on the grafting and RNA-Seq protocols. For each SNP, we denote the total number of reads by N and the total number of reads that contain SNPs that correspond to the genotype that is different from the genotype of the sampled tissue by n (for short we will say n reads map to the other genotype). For instance, if we perform RNA-Seq on tissue taken from a scion of genotype A, then those reads that map best to genotype B at a certain SNP will be denoted by n. If the sample was taken from a homograft A: A and n reads map to genotype B, then these n must be viewed as 'errors' that occurred during the process. Potential error sources include biological variation due to, for instance, mutations or the occurrence of different splice variants, or technical issues during library preparation or polymerase chain reaction (PCR) amplification steps, mapping errors to the reference genome and errors therein, as well as bioinformatic tools and the choice of associated parameters. In electronic supplementary material, figure S1, we describe the relationships between the numbers from RNA-Seq data and the actual reads from each genetic background. 
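The relationship referenced above (electronic supplementary material, figure S1 of the source) can be summarized compactly. As a sketch, under the simplifying assumption that reads from transported transcripts map correctly to genotype 2:

\[
N = N_1 + N_2, \qquad n = N_2 + e, \qquad e \sim \mathrm{Binomial}(N_1,\ \theta),
\]

where N_1 and N_2 are the true numbers of reads from the local and distal genotypes, e is the number of local reads mis-assigned to genotype 2, and θ is the per-SNP error rate introduced below. The error behaviour of the transported reads themselves is a detail the source defers to its appendix, so treating them as error-free here is an assumption.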
The key question is whether the data support the presence of transcripts from the other genotype (e.g. from B) in the tissue of the sampled genotype (e.g. A) and how many reads we can attribute as being graft-mobile. We denote the genetic background of the sampled tissue, the local genotype, by '1' (where '1' could be genotype A or B). We assume that a transcript has a set of SNPs, S = {s}, for which the RNA-Seq analysis results in data in the form D = {D_s} = {N_s, n_s}, where N_s is the total number of reads for SNP s, of which n_s map to the other, non-sampled tissue of another genetic background, the distal genotype (denoted by '2'). We define two hypotheses for which we wish to evaluate the statistical evidence:

-Hypothesis H_1 states the data can be explained by a statistical model with only one genetic background, i.e. RNA-Seq reads that appear to be from a second genetic background are consistent with sequencing errors, mapping errors, frequencies of somatic mutations, presence of splice variants and any other process that can be expected to introduce mis-assignments in the sampled tissue of genotype 1 → the data are consistent with the expected biological and technical variance from only one genotype.

-Hypothesis H_2 states that the data are best explained by a statistical model that includes transcripts from distal grafted tissue, i.e. there are RNA-Seq reads from transcripts from a second genetic background that are unlikely to have arisen from the expected variance of genotype 1 → the data support the presence of RNA-Seq reads from two genotypes and the transcript being graft-mobile.

Hypothesis 1 thus posits that RNA-Seq reads from only one genetic background are sufficient to explain the data, whereas hypothesis 2 requires that two genetic backgrounds are present. If statistical evidence for two genetic backgrounds is found in the data, then a plausible inference is that transcripts have moved across the graft junction. We can compare hypotheses using the posterior odds ratio, P(H_2|D)/P(H_1|D) [21,22]. When both hypotheses, H_1 and H_2, are equally likely a priori, P(H_1) = P(H_2), the posterior odds ratio becomes equal to the marginal likelihood ratio (Bayes factor), BF_21 = P(D|H_2)/P(D|H_1), where P(D|H_1) and P(D|H_2) are the marginal likelihoods, also known as evidences. If the ratio of graft-mobile transcripts to non-graft-mobile transcripts were known, then this information could be used as the prior ratio for P(H_2)/P(H_1). Here, we will assume a 1:1 ratio and use Bayes factors (BFs). Following Jaynes [21], we use the logarithm (of base 10) of the BF. A logBF_21 of 0 means that there is an equal probability for both hypotheses, whereas logBF_21 > 0 favours hypothesis 2 (graft-mobile) and logBF_21 < 0 favours hypothesis 1 (errors), see figure 2. The statistical evidence for a transcript being graft-mobile can be computed from the contributions from its SNPs. The computation of the evidence, P(D|H_1), requires integration over all parameters associated with H_1, p'. We assume that these parameters can be specific to each SNP s in a transcript, p' = (p'_1, …, p'_|S|), where |S| is the cardinal number of S, i.e. the number of SNPs in the transcript. Assuming the parameters are independent between SNPs, the likelihood can be expressed as the product of the likelihoods for each SNP.
The evidence for H_1 over all data related to the transcript can thus also be expressed as a product, P(D|H_1) = ∏_{s∈S} P(D_s|H_1), where P(D_s|H_1) is the evidence for H_1 for data D_s associated with SNP s. Analogously we can derive equations for H_2, whereby there may be a different number of parameters associated with each SNP for H_2, p″ = (p″_1, …, p″_|S|), compared with H_1. The parameters are explained in the following sections and the electronic supplementary material, appendix. Following the aforementioned procedure, we can express the evidence P(D|H_2) as a product of SNP-based evidences, P(D|H_2) = ∏_{s∈S} P(D_s|H_2). The log BF of hypothesis H_2 (graft-mobile, i.e. there are reads from two genotypes) over H_1 ('errors', i.e. reads from only one genotype) can thus be written as the sum over log BFs for each SNP s within a transcript (a set of SNPs, S),

logBF_21 = Σ_{s∈S} logBF_{s,21}.   (2.1)

This factorization into contributions from each SNP allows us to focus on deriving equations for a single SNP. We then sum these contributions to obtain an overall log BF for a transcript being graft-mobile (H_2) or not (H_1).

SNP-specific error distributions can be inferred from homograft data

As mentioned earlier, we denote the total number of reads over a SNP by N and the total number of reads associated with genotype 2 by n. If the data came from a homograft or a non-grafted plant, we can assume that N − n reads were correct and n reads have sequencing errors or were mismapped, i.e. there is a SNP-specific error rate that we can infer from the data. A suitable likelihood, L(θ), for a two-outcome event is the binomial distribution, where θ is the error rate we wish to infer (see electronic supplementary material, appendix). The conjugate prior to the binomial distribution is the Beta distribution, Beta(u_1, u_2), which results in the posterior having the same functional form, Beta(u_1 + n, u_2 + N − n). Replicates can be used to update the posterior distributions. The information content of a dataset remains the same regardless of how it is subdivided or the order in which it is processed, and this is reflected in the mechanics of Bayesian updates [21]. Electronic supplementary material, figure S4 depicts how information is combined (always updating to the latest state of knowledge) within the Bayesian framework described here. As expected, this leads to the same results no matter how the data are split (e.g. into replicates).

The number of reads from transported transcripts can be inferred from heterograft data

As described earlier, we can focus on each SNP individually and then combine the contributions from all SNPs in a transcript, equation (2.1). Given N total RNA-Seq reads from sampled tissue of genotype 1 of which n contain SNPs associated with genotype 2, we want to know how many reads actually came from genotype 2, N_2, see electronic supplementary material, figure S1. Assuming that the biological and technical variation associated with RNA-Seq analysis will be similar between homografts and heterografts, we can use homograft data as a reference against which to evaluate the heterograft data. As described in the electronic supplementary material, appendix, we can derive the posterior distribution over N_2, P(N_2|D). From this posterior distribution, the expected number of reads from transported transcripts can be computed, ⟨N_2⟩ = Σ_{N_2=0}^{N} N_2 P(N_2|D), and the ratio as r_2 = ⟨N_2⟩/N for each SNP s. For a transcript, the expected ratio is given by averaging over all SNPs.
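As a concrete illustration of the conjugate update just described, the sketch below infers the per-SNP error-rate posterior from homograft counts, processing replicates sequentially. The counts and the uniform prior (u_1 = u_2 = 1) are illustrative assumptions, and posterior_error_rate is a hypothetical helper, not code from the paper.

```python
from scipy.stats import beta

def posterior_error_rate(replicates, u1=1.0, u2=1.0):
    """Beta posterior over the SNP error rate theta.

    replicates: list of (N, n) pairs, where N is the total read depth
    and n the reads mapping to the distal genotype in a homograft.
    Each replicate updates the current state of knowledge, so the
    result is independent of how the data are split or ordered.
    """
    a, b = u1, u2
    for N, n in replicates:
        a += n          # 'distal' reads count as errors
        b += N - n      # correctly assigned reads
    return beta(a, b)

post = posterior_error_rate([(1000, 12), (800, 7)])  # hypothetical homograft replicates
print(post.mean())           # point estimate <theta>
print(post.interval(0.95))   # 95% credible interval
```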
Figure 3 shows how BFs change as a function of n and how well N_2 can be estimated for different homograft read depths.

2.5. SNP-specific evidence for H_1 and H_2 can be computed from the posterior distribution over N_2

We now show how both P(H_1|D) and P(H_2|D) can be computed from the above described distribution P(N_2|D). A uniform prior over N_2 (from 0 to N, see electronic supplementary material, appendix) ensures that each instance of N_2 > 0, which corresponds to H_2, will have the same prior probability as N_2 = 0, which corresponds to H_1, thus resulting in an equal prior for H_1 and H_2 and the posterior ratio being equal to the BF. The maximum log posterior odds ratio over all possible N_2 values for a transcript with |S| SNPs can now be computed from P(N_2|D),

log[P(H_2|D)/P(H_1|D)] = Σ_{s∈S} max_{N_2>0} log[P(N_2|D_s)/P(N_2 = 0|D_s)],

where the max is over all positive N_2. Given that this approach has P(H_1) = P(H_2), the aforementioned equation is equal to the sum of SNP-specific BFs, Σ_{s∈S} logBF_{s,21}, for each transcript. The expression for P(N_2|D_s) is given in the electronic supplementary material, appendix. The aforementioned equation can be used to assess whether a transcript is graft-mobile from RNA-Seq data, figure 2.

Figure 2. The BF scale and its interpretation. The BF is the ratio of the evidences supporting two different hypotheses. Here, the hypotheses are that the data are consistent with expected biological variation, sequencing and mapping errors (H_1), and the data support the transcript being graft-mobile (H_2). The interpretation of BFs relies on accepted ranges [21,35], as shown in the figure.

Validation against labelled data

As most predicted graft-mobile mRNAs from RNA-Seq data have not been validated, there is uncertainty regarding their labelling, making an evaluation of the classification performance problematic. We circumvent this problem by creating datasets for which we know exactly which transcripts are assigned to be graft-mobile and which not (see §3). The use of simulated data gives us full control over important parameters such as error rates and read depths while testing the accuracy of our method.

Error rates can be accurately inferred from read counts per SNP in homografts

A first question was how well we can infer the true error rate from simulated data (see §3). Electronic supplementary material, figure S5 shows the inferred error rate, 〈θ〉, plotted against the true error rate q for a range of q from 0 to 1 and for read depth N from 10 to 10 000. For low read depths (N = 10), the possible number of outcomes is small, leading to a visibly discretized set of inferred error rates with significant variation. For N = 100, the inferred error rates already match well to the true error rates. For N = 1000 and above, the estimates are accurately defined with high precision. This suggests that with suitably high RNA-Seq read depths (relative to the error rate), we can expect the inferred error rate per SNP to be a fair reflection of the true error rate.

Negative and positive controls are captured well by Bayes factors for individual SNPs

We compare hypotheses by evaluating the evidence for one hypothesis over another. Here, we use the logarithm base 10 of the BF for hypothesis 2 over hypothesis 1, log10 BF_21, meaning that a value of zero arises when the data give no evidence either way, a negative value when the data are consistent with expected error rates and a positive value if graft-mobile transcripts are present, figure 2.
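To make the per-SNP calculation concrete, here is a minimal sketch. It marginalizes the homograft-informed Beta posterior to score heterograft counts, and assumes, for simplicity, that transported reads map correctly; the paper's full expression for P(N_2|D_s) is in its appendix, so this is an approximation, and log10_bf_snp is a hypothetical helper.

```python
import numpy as np
from scipy.stats import betabinom

def log10_bf_snp(n_het, N_het, n_hom, N_hom, u1=1.0, u2=1.0):
    """log10 Bayes factor for one SNP: H2 (some reads transported)
    versus H1 (all genotype-2 reads are errors).

    The homograft counts fix a Beta(u1+n_hom, u2+N_hom-n_hom) posterior
    over the error rate; marginalizing it gives a beta-binomial
    likelihood for the heterograft counts given a candidate N2."""
    if n_het == 0:
        return -np.inf                       # no distal reads: no support for H2
    a, b = u1 + n_hom, u2 + N_hom - n_hom
    n2 = np.arange(0, n_het + 1)             # candidate numbers of transported reads
    # P(n_het | N2): the N_het - N2 local reads generate n_het - N2 errors.
    like = betabinom(N_het - n2, a, b).pmf(n_het - n2)
    return np.log10(like[1:].max()) - np.log10(like[0])  # max over positive N2

# Transcript-level evidence: sum the per-SNP contributions (equation (2.1)).
snps = [(18, 1000, 9, 1000), (22, 1200, 12, 1100)]  # hypothetical (n_het, N_het, n_hom, N_hom)
log_bf = sum(log10_bf_snp(*s) for s in snps)
```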
To investigate whether the BF calculation would correctly assign a negative value for cases where there are only errors (no graft-mobile transcripts), we generated datasets (see §3) for different error rates and different read depths. Data were generated that represent both homograft combinations to infer their posterior distributions for the error rates. Separate data were generated from the same process with the same error rates for each SNP and treated as heterograft data. We found that the BFs from most simulated datasets correctly favour hypothesis 1, i.e. that the observed data arose from a process with consistent error rates between homo- and heterograft data. However, as expected, there are several exceptions due to the stochastic nature of the simulations, electronic supplementary material, figure S5. The variability in BFs depends on how well the read depth captures the underlying error rate; for instance, an error rate of q = 0.01 will not be well represented by a read depth of 100 or less. After confirming that the BFs perform satisfactorily on negative controls, we next validated the approach on positive controls. We used simulated data for which additional reads from genotype 2 were added (graft-mobile transcripts), thus making them less consistent with the inferred homograft error rates. As anticipated, the closer the number of reads from genotype 2 is to the expected number of errors, the more challenging it is to distinguish graft-mobile transcripts from errors, electronic supplementary material, figure S7. If the error rate for a SNP is q, then we would expect to have on average qN reads that are errors. The standard deviation of this value is √(q(1 − q)N). If we infer the error rate from the homograft data, we obtain a distribution over the inferred error rate, θ, the expectation of which, 〈θ〉, is our best estimate for q. The detection of reads from graft-mobile transcripts is therefore limited by the available data through the variation (precision) in the inferred error rate.

Figure 3. The read depth of the homograft data influences the BFs and the numbers of inferred graft-mobile transcripts. (a) It is shown how the BF changes as a function of the number of reads, n, that map to the other genotype for different homograft read depths. For an assumed error rate, q, of 0.01 and a total number of reads, N, of 1000, the expected number of mismatches would be 10. As shown in (a), the BF is negative for n < qN = 10 and moves closer to 0 as n approaches the expected number of mismatches. The BF remains negative somewhat beyond the expected number of reads as these low numbers are still consistent with the inferred error distribution. As n increases further, the BF becomes positive, favouring the hypothesis that reads may have originated from genotype 2. This change in the BF is shown for different read depths from the homograft data. (b) The behaviour of the inferred number of reads from genotype 2, N_2, as a function of n and of the read depth from the homograft data, N_hom. With higher read depths from the homograft data, and therefore more accurate inferences of the error probability distribution, the estimated number of reads from genotype 2, 〈N_2〉, approaches the correct value (black dashed line).
We conclude that the BFs perform well at the individual SNP level, but that false assignments can be expected, in particular for low read depths that fail to represent the underlying error rates.

Combining the evidence across SNPs increases the accuracy of classification

A major advantage of the proposed framework is its ability to combine the evidence across multiple SNPs within a transcript, equation (2.1). If the data from several SNPs of a transcript are incompatible with expected errors, then this enhances the evidence of the transcript being graft-mobile. Conversely, if the data from only one out of several SNPs within a transcript are found to deviate from expected errors, and data from the other SNPs are probably errors, then this could result in evidence against the transcript being graft-mobile. We can demonstrate this effect by showing how the distribution of BFs for mobile and non-mobile populations changes as we sum across all of the SNPs in a transcript, electronic supplementary material, figure S8. Interestingly, for anything other than borderline cases of low read depth, this improvement saturates in the simulated data and little is to be gained beyond a SNP number of approximately 3, figure 4.

Comparison with other methods

One key advantage of using BFs is that we are not classifying transcripts per se but instead evaluating the evidence for them being graft-mobile, or not. The BFs are thus used to rank our confidence in a transcript being graft-mobile given the data. To compare the presented Bayesian approach with alternative criteria for defining mobile mRNA, we defined and implemented two approaches, Methods A and B, inspired by previous publications [13,18,36]. Method A determines a transcript as graft-mobile based on the number of reads mapping to genotype 2 being above a predefined threshold of 3, in two of three replicates. Method B filters out SNPs with reads mapping to genotype 2 in data from genotype 1 homografts and then assigns SNPs as being graft-mobile when reads in the heterograft data are above 3 in two replicates [36]. So, Method B corresponds to Method A using only pre-filtered SNPs. Figure 5 shows a comparison of the different methods. We find that as the read depth increases, more and more false positives occur using Method A, which leads to a drop in accuracy. This is because if every nucleotide has an error rate, then higher read depths will result in higher absolute numbers of errors. The conservative nature of Method B means that it has a low rate of false positives, but it is unable to detect graft-mobile transcripts when sequencing errors are present, as they always are, in the homograft data. Method B is thus unable to detect graft-mobile transcripts unless the error rates are very low, figure 5. Confidently inferring a low error rate per SNP from RNA-Seq data requires a high read depth. Consequently, SNPs with both low read depths and low error rates were not detected in real RNA-Seq data [13], electronic supplementary material, figures S10 and S11. The classification found in the simulated data is also observed using artificial data (see §3) generated from published RNA-Seq data from Thieme et al. [13], figure 6. Using a log10 BF_21 ≥ 1 criterion (see §3) to classify transcripts as graft-mobile, analysing the same dataset with the Bayesian method delivers both high TPRs and low FPRs, reflected in high accuracy (figures 5-7).
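For concreteness, a minimal sketch of the two comparison rules as described above. Whether 'above 3' means strictly greater is not fully specified in the text, so the strict inequality here is an assumption, and both function names are hypothetical.

```python
def method_a(het_reps, thresh=3, min_reps=2):
    """Call a SNP graft-mobile if reads mapping to genotype 2 exceed
    `thresh` in at least `min_reps` heterograft replicates."""
    return sum(n > thresh for n in het_reps) >= min_reps

def method_b(het_reps, hom_reps, thresh=3, min_reps=2):
    """As Method A, but only on pre-filtered SNPs: any genotype-2
    reads in the homograft data exclude the SNP entirely."""
    if any(n > 0 for n in hom_reps):
        return False
    return method_a(het_reps, thresh, min_reps)

method_a([5, 4, 1])             # True: two of three replicates above the threshold
method_b([5, 4, 1], [0, 2, 0])  # False: the SNP shows homograft errors and is filtered out
```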
The approximately equal read depths between the homograft data and the blended heterograft data reduce the difference in performance between the Bayesian approach and Methods A and B (Methods A and B do not take read depths into account, which leads to a higher mis-classification rate when differences are present). Consistent with the observations made earlier, the Bayesian approach shows an excellent TPR (and accuracy) for simulations with reads from genotype 2 that exceed the expected number of errors in genotype 1, figure 7. The analysis of existing data in terms of sequencing depth and error rate per SNP, electronic supplementary material, figures S10 and S11, shows that experimental data fall into a parameter regime where mis-classifications of Methods A and B can be expected. We conclude that the Bayes factors perform well and significantly better than our implementations of Method A and Method B.

Dataset generation

We used two approaches for generating test data, electronic supplementary material, figure S6. The first uses purely simulated data based on the underlying statistical model described later. The second mixes RNA-Seq data from existing homograft experiments to generate an artificial heterograft dataset. Neither approach is likely to capture the full variation and noise inherent in real datasets, but both have the advantage of having known labels that allow us to evaluate our method in a controlled manner.

Simulation of RNA-Seq data

Simulated datasets are generated based on a binomial distribution with an error rate q for each SNP. A random number generator is used to provide stochasticity in line with the expected variance of a binomial distribution. For the homograft datasets, each SNP is assigned a number of reads (N) and a value for q. A range of values for N and q were used and are given in the individual figures. For each read, from 1 to N, a uniform random number is drawn, and if this number is greater than q, then the read is assigned to genotype 1, otherwise to genotype 2, electronic supplementary material, figure S6. The generated reads per SNP thus represent a discrete realization of a stochastic process with a defined error rate, q, with N − n reads assigned to genotype 1 and n reads assigned to genotype 2. The heterograft data are generated using the same process, but with the addition of further reads, N_2, from genotype 2 that represent mobile transcripts. Different numbers of added mobile transcripts, N_2, are used to evaluate the sensitivity of the method. For measuring the classification accuracy (see below), we use balanced datasets throughout so as not to distort any of the performance metrics.

Blending real RNA-Seq data

While the simulated data are inherently noisy, they follow an underlying statistical model which may not capture the variability inherent in experimental data. We therefore generated labelled data based on existing RNA-Seq datasets. We took RNA-Seq homograft data from the Arabidopsis Col-0 and Ped-0 accessions [13]. We then 'titrated' one dataset into the other to create a labelled heterograft dataset, following (1 − p) × Col-0 + p × Ped-0, where p is the blending proportion. For instance, p = 0.1 would include 10% of the RNA-Seq reads for each SNP from Ped-0 and add them to the reduced (1 − p) reads in Col-0. If a SNP in the Ped-0 dataset had 100 reads assigned to the Ped-0 genotype, then for p = 0.1, we would add 10 of these reads to create a graft-mobile transcript.
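A minimal sketch of the simulation just described; the per-read uniform draws of the text are summarized here by an equivalent binomial draw, and the seed and counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_snp(N, q, n_mobile=0):
    """One SNP: N local reads, each mis-assigned with probability q,
    plus n_mobile reads genuinely from the distal genotype."""
    n_err = rng.binomial(N, q)
    return N + n_mobile, n_err + n_mobile   # (total reads, reads mapping to genotype 2)

N_hom, n_hom = simulate_snp(1000, 0.01)               # homograft reference SNP
N_het, n_het = simulate_snp(1000, 0.01, n_mobile=15)  # heterograft SNP with 15 mobile reads
```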
We selected SNPs that had a comparable read depth in Col-0 and Ped-0, electronic supplementary material, figure S6. Note that this approach creates homograft and heterograft data of similar read depths, providing favourable conditions for Methods A and B. Given the spread and limited data compared with the simulations, we used adaptive binning to provide approximately equal-sized sets to evaluate the performance of the method as a function of read depth N.

Bayesian classification criterion

The Bayes factor denotes the evidence provided by the data that supports one hypothesis over another [21,22]. We use these Bayes factors to rank our confidence in a transcript being graft-mobile; however, to evaluate our approach against existing methods that classify into mobile and non-mobile, we need a binary output. Following standard practice [21,35], figure 2, we use a value log10 BF21 ≥ 1 to select hypothesis 2 over hypothesis 1. It is worth noting that we could also determine transcripts for which there is strong evidence for them not being graft-mobile (e.g. those with log10 BF21 ≤ −1). This latter point is important for curating high-confidence datasets for training using positive and negative examples.

True and false positive rates

The classification performance is evaluated using accuracy, and true and false positive rates. Each mRNA is given one of four different labels: an mRNA that is correctly classified as graft-mobile is a true positive (TP); an mRNA incorrectly classified as graft-mobile is a false positive (FP); an mRNA correctly classified as non-graft-mobile is a true negative (TN); and a graft-mobile mRNA incorrectly classified as non-graft-mobile is a false negative (FN).

Discussion

Long-distance transport of mRNAs has been shown to be important for plant development and recently also in mammalian systems [37]. A popular method for detecting transported mRNAs in plants involves grafting plants with different genetic backgrounds and using RNA-sequencing techniques to determine whether transcripts have traversed the graft junction. To do so robustly requires consideration of the associated sequencing and alignment errors that are known to occur. Here, we have presented an approach for distinguishing between expected variation (e.g. biological variation, RNA-Seq errors, processing errors) in RNA-Seq pipelines and putative graft-mobile transcripts. We set up the problem as a hypothesis-testing framework founded in Bayesian inference for which we derived an exact analytical solution. A key, yet unverified, assumption inherent to the current method and other approaches that compare with homograft data is that the biological and technical variance of the RNA-Seq analysis in homografts and heterografts will be similar. For equal prior probabilities of hypotheses 1 and 2, the posterior probability ratio is equivalent to the Bayes factor. Bayes factors are computed for each SNP position based on the associated read counts and then combined into a Bayes factor per transcript. Replicates can be handled analogously. Bayes factors allow transcripts to be ranked based on data supporting their graft-mobility. Higher Bayes factors are indicative of more translocated individual transcripts (the expectation value of which can be computed separately). We show that RNA-Seq error rates can be accurately estimated using the presented methodology (electronic supplementary material, figure S5) and how the read depth influences the precision (electronic supplementary material, figures S1, S3 and S5) and the inferred number of graft-mobile transcripts (figure 3).
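To make the per-SNP computation concrete, here is one plausible construction in Python. The paper's exact analytical form is not reproduced in this excerpt, so this should be read as an illustration of the idea, not as the published solution: hypothesis 1 treats genotype-2 reads as errors, with the error rate learned from homograft counts through a conjugate Beta prior; hypothesis 2 leaves the genotype-2 fraction unconstrained; each marginal likelihood is then a beta-binomial, and per-SNP log BFs sum across a transcript in the spirit of equation (2.1).

    import numpy as np
    from scipy.stats import betabinom

    def log10_bf21(n, N, hom_n, hom_N):
        # Hypothesis 1: n genotype-2 reads out of N arise from the error rate q,
        # with q ~ Beta(1 + hom_n, 1 + hom_N - hom_n) learned from the homograft.
        log_m1 = betabinom.logpmf(n, N, 1 + hom_n, 1 + hom_N - hom_n)
        # Hypothesis 2: the genotype-2 fraction is unconstrained (uniform prior),
        # as expected when mobile transcripts add reads beyond the error model.
        log_m2 = betabinom.logpmf(n, N, 1, 1)
        return (log_m2 - log_m1) / np.log(10)

    def transcript_log10_bf(snps):
        # Combine evidence across SNPs (and, analogously, replicates) by
        # summing the per-SNP log Bayes factors.
        return sum(log10_bf21(*s) for s in snps)

    # Classify as graft-mobile when log10 BF21 >= 1 (strong evidence for H2).
    snps = [(8, 100, 1, 120), (6, 90, 0, 110)]  # (n, N, hom_n, hom_N) per SNP
    print(transcript_log10_bf(snps) >= 1)

Replacing the uniform Beta(1, 1) priors with informative ones, as suggested in the discussion below, only changes the shape parameters of the two beta-binomials.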
Multiple SNPs and replicates can be readily accounted for by summing their contributions (equation (2.1), figure 4 and electronic supplementary material, figures S4 and S8). We validated our approach extensively with simulations (figure 4, electronic supplementary material, figures S7-S9). We used simulated RNA-Seq data to allow for accurate quantification of the performance of the method, avoiding the uncertainties inherent in previously published graft-mobile mRNA labels. An additional advantage is that the use of simulated RNA-Seq data allows us to evaluate the performance over a range of parameters. Further validation was carried out using datasets derived from published homograft data [13], figure 6. The evaluation of different mobile mRNA detection methods showed that approaches using absolute number thresholds of reads per SNP on selected positions lead to high rates of misclassified transcripts, figures 5-7. Classifying graft-mobile transcripts based on absolute read numbers without consideration of error rates and read depths (Method A) leads to the assignment of mobility to many transcripts for which statistical support is not present (high false positive rate). On the other hand, filtering SNPs with errors in the homograft data (Method B) results in a reduction in the number of detected mobile transcripts (low true positive rate), figures 5-7. The Bayesian method relies on reference data for the estimation of error rates, and the quality and quantity of these data influence its performance. Here, we used a non-informative uniform prior to evaluate the method, but a more suitable choice of prior for the error rates based on existing experimental data and RNA-Seq error analyses [20] would help overcome the limitations arising from low read depths (electronic supplementary material, figure S3).

Figure 6. A comparison of methods on artificially generated heterograft data from real RNA-Seq data shows a similar trend in performance to the simulated datasets. The Bayesian approach (a), Method A (b) and Method B (c) are evaluated in terms of their true (TPR) and false positive rates (FPR) for blended real data using a blend factor of p = 0.1 (see §3). Only SNPs of comparable read depths between datasets were used and the data were binned for N (see §3). TPR and FPR are shown per SNP. Note that for similar read depths at each SNP location between homografts and heterografts, Methods A and B make fewer misclassifications than they would for different read depths. Without prior knowledge of the error rates, the Bayesian approach requires sufficient sequencing read depth to build an error model.

The approach could be extended in several ways. We have simplified the outcome at each SNP position to map to the two genetic backgrounds used for grafting, resulting in a binomial likelihood. Including errors for all four nucleotides could provide a better description of the overall nucleotide variability, leading to a replacement of the binomial likelihood by a multinomial distribution. The conjugate prior of the multinomial distribution is the Dirichlet distribution; however, we have not explored how tractable this approach would be to solve analytically in full. A key assumption of the presented approach is that the error rates per SNP are comparable between experiments. This assumption will need to be checked for real data.
If necessary, the inferred error-rate distributions may need to be adjusted to take large deviations into account (e.g. by increasing the spread through a reduction of the Beta distribution parameters). Another addition would be to include sequencing quality scores within the framework. These extensions will be the subject of future developments after the careful analysis of existing datasets to assess shortcomings in the presented developments. To summarize, this contribution presents a Bayesian framework that takes account of read depths, error rates, replicates and multiple SNPs per transcript, providing a powerful means for distinguishing graft-mobile mRNA from RNA-Seq errors. As graft-mobile mRNA are often rare compared with endogenous mRNA [18], RNA-Seq read depths need to be sufficiently high [36] and chosen with care, and every effort should be made to reduce the risk of contamination [16]. Detecting rare events can be statistically challenging. Error rates from either ungrafted plants or homografts provide useful reference values for the number of sequencing errors to expect for different genomic locations. The higher the read depth of the reference dataset, the better the error rates can be inferred. The data are provided in electronic supplementary material [38].

Figure 7. The classification performance of graft-mobile mRNAs depends on the error rates (q) and the read depths (N) of the RNA-Seq data. The simulation conditions are the same as shown in figure 5. N is the same for the homograft and heterograft data. The number of transcripts correctly labelled (TP and TN) based on BFs increases with read depth (a). Higher read depths capture the error rates from the homograft data, resulting in clearer separation based on BFs between hypothesis 2 (graft-mobile) and hypothesis 1 (errors). However, as sequencing errors in heterograft data are more likely to arise with higher read depths, we see a rise in false positives (transcripts with errors being incorrectly labelled as being graft-mobile, FP) for Method A (b). Conversely, the increase in errors in the homograft data with read depth leads to an increase in false negatives (graft-mobile transcripts being incorrectly labelled as non-graft-mobile, FN) in Method B (c). Therefore, both Methods A and B display decreasing classification performance with increasing read depth. Graft-mobile transcripts have been shown to be present in low numbers [18], therefore requiring high sequencing depths to detect them and necessitating methods able to distinguish between errors and graft-mobile transcripts.
Salivary Human β-Defensin 1-3 and Human α-Defensin-1 Levels in Relation to the Extent of Periodontal Disease and Tooth Loss in the Elderly

The oral innate immune response may diminish with aging. In the present study, the aim was to examine human β-defensin (hBD) 1-3 and human neutrophil peptide (HNP)-1 levels in the saliva of an elderly population in relation to the extent of periodontal disease and tooth loss. A total of 175 individuals aged ≥ 65 years were divided into five groups based on the number of teeth with a pocket depth ≥ 4 mm as follows: 17 pocket-free individuals (Control), 55 individuals having 1–6 pocket teeth (PerioA), 33 individuals having 7–13 pocket teeth (PerioB), 29 individuals having at least 14 pocket teeth (PerioC), and 41 edentulous individuals. Their salivary defensin levels were measured with ELISA kits. The salivary HNP-1 levels were significantly higher in the Perio groups (PerioB: p < 0.001 and PerioC: p < 0.001) in comparison to the Control. The associations between salivary HNP-1 levels and the number of pocket teeth remained significant after adjustments for age, gender, level of education, and number of teeth. The salivary HNP and hBD levels differed in terms of their correlation to the extent of periodontal disease and tooth loss in the elderly.

Introduction

Oral cavities harbor both commensal and pathogenic bacteria at all times, in constant interaction with host cells. The homeostasis between the oral microbiome and the host is maintained mainly by the components of the oral innate and adaptive immune responses [1]. Defensins, which are small cationic peptides with broad antimicrobial and chemotactic activities, play an essential role in controlling the oral environment. Epithelial (human β-defensins, hBD) and neutrophilic (α-defensins, human neutrophilic peptide, HNP) defensins are located in oral mucosae [2,3] and can be detected in saliva and gingival crevicular fluid [4,5]. hBD-1 is constitutively expressed in the oral cavity, while the expressions of hBD-2 and hBD-3 are dependent on the extent of infection and inflammation. Indeed, the localizations of oral mucosal defensins during health also differ; while HNP-1 is localized in the junctional epithelium, hBD-2 is localized in the superficial layers and hBD-3 is localized in the basal layers of the oral and sulcular epithelium [2,6]. Finally, whereas the oral commensal bacteria stimulate the expressions of these antimicrobial peptides, the periodontitis-associated bacteria are able to suppress their expression or to degrade defensins with bacterial proteases [6,7]. Various changes and disorders in the immune response, collectively termed immunosenescence, occur in the human body with aging [8,9]. For example, it has been shown that receptor expression and signal transduction pathways are disturbed in phagocytic cells [10] and the HNP-1 production of neutrophils is decreased [11] with age. In the elderly, an age-related decrease in neutrophil extracellular trap formation has been observed in periodontitis patients [12]. Moreover, aged people with gingivitis show increased expressions of interleukin (IL)-1β and IL-6 [13]. There are few studies exploring the oral hBD- or HNP-expression profiles in the elderly. Reduced salivary hBD-2 levels found in the elderly compared to young individuals [14] differ from the conserved hBD-2 levels found in serum with aging [15].
However, an interesting observation of an age-related difference in the localization of hBD-2 in human gingiva [16] indicates that the oral and systemic hBD expressions are regulated differently. Nevertheless, the intraoral expression profiles of hBD 1-3 and HNP-1 in relation to the extent of periodontal destruction in the elderly are still unknown. In aged people, tooth loss is a common event that reflects compromised health conditions of teeth and/or tooth-supporting tissues. Indeed, the extent of tooth loss is considered a surrogate measure of past, but also present, oral disease(s). Periodontal disease is one typical reason for missing teeth in adult-aged populations [1]. It is notable that the destruction of tooth-supporting tissues proceeds without noticeable signs until attachment is considerably loosened, meaning that periodontitis-affected teeth may stay for years without treatment. Consequently, the infectious and inflammatory burden in the mouth can be high. Recently, our group demonstrated that salivary hBD-2 and hBD-3 levels do not relate to the extent of gingival inflammation in children and adolescents [17] nor in adults with periodontitis [18]. To our knowledge, however, there is no study in the literature to compare the expression profiles of oral hBDs and HNPs in aged people in this context. In this study, we hypothesized that the response of immune cells is reduced with age, and thus, intraoral epithelial hBD and neutrophilic HNP responses differ from each other. Therefore, the aim of the present study was to examine whether salivary profiles of hBD 1-3 and HNP-1 differ from each other in the elderly, in terms of their correlation to the extent of periodontal destruction.

The Study Population and Ethical Permission

The elderly population in the present study was part of a national health survey of the working-aged and elderly Finnish population (the Health 2000 Health Examination Survey), conducted by the Finnish Institute for Health and Welfare (the former National Public Health Institute) during the years 2000-2001. Data on each individual were gathered by questionnaires, an interview at home or in an institute, and a health examination, including laboratory tests, in the local health care center or comparable premises. In the southern district premises, an oral specimen (saliva) was collected as part of the laboratory procedures [19]. Written informed consent was collected from all participants in the survey. The study protocols were approved by the Ethical Committee for Epidemiology and Public Health of the Hospital District of Helsinki and Uusimaa, Finland (Trial protocol 407/E3/2000). The present study includes data on all individuals aged ≥65 years who participated in the questionnaires, interviews, and clinical general health and oral health examinations, and from whom stimulated saliva samples were collected (n = 175). The demographic information, including age, gender, level of education (basic, secondary, and higher), and smoking history (a daily/occasional smoker or a former (had quit ≥ 1 year ago)/never smoker), originated from the questionnaires and interviews. Measurements for the body mass index (BMI) were registered in the health examination. The information on oral health status included the number of teeth and the number of pocket teeth, i.e., those having probing pocket depths (PPDs) ≥ 4 mm, measured by a specialist dentist with the assistance of a dental nurse in a standard dental unit.
More detailed information on the oral and periodontal examination can be found elsewhere [19,20].

Determination of Salivary hBD 1-3 and HNP-1

Paraffin-stimulated whole saliva samples collected by expectoration were immediately frozen on dry ice for 1 to 3 days before and during transportation. Thereafter, samples were frozen at −70 °C until they were processed.

Statistical Analyses

All data analyses were carried out with the SPSS statistical program (version 26.0; IBM Corp., Armonk, NY, USA). For normally distributed parameters (age, number of teeth, and BMI), a two-way analysis of variance (ANOVA) test followed by a post hoc t-test was applied. Non-parametric Kruskal-Wallis (for multiple comparisons) and Dunn-Bonferroni post hoc methods were used when comparing non-normally distributed parameters (biochemical markers of saliva). The significance values were adjusted by the Bonferroni correction for multiple tests. The chi-square test was used to compare the percentages of males and smokers between the groups. A linear regression analysis was used to analyze the associations between the salivary HNP-1 concentrations and an increased number of teeth with PPD ≥ 4 mm (the edentulous group was omitted from the regression analysis), in the presence or absence of confounders. Statistical significance was defined as p < 0.05.

Results

The characteristics of the study population, which was divided into five groups, are presented in Table 1. The PerioA (p = 0.015), PerioB (p = 0.032), and PerioC (p = 0.001) groups were significantly younger than the edentulous participants (Table 1). Being edentulous was especially common in the elderly with a basic educational level only. The PerioB and PerioC groups had more teeth compared to both the Control group (p = 0.002 and p = 0.001, respectively) and the PerioA group (p = 0.001 and p < 0.001, respectively) (Table 1). Salivary defensin levels in the five study groups are presented in Figures 1-4. Elevated HNP-1 levels were observed in the PerioB (p = 0.001) and PerioC (p < 0.001) groups compared to the Controls. There were no differences in hBD 1-3 or HNP-1 levels when edentulous participants and Controls were compared.
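The regression step described above is simple to reproduce. Below is a minimal Python/statsmodels sketch of the unadjusted and adjusted models; the data frame and its column names are hypothetical stand-ins, not the authors' variable names.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per dentate participant (edentulous omitted),
    # with the salivary HNP-1 level and the covariates of the adjusted model.
    df = pd.read_csv("saliva_defensins.csv")

    # Unadjusted association: HNP-1 vs. number of teeth with PPD >= 4 mm.
    unadjusted = smf.ols("hnp1 ~ pocket_teeth", data=df).fit()

    # Adjusted for age, gender, level of education and number of teeth.
    adjusted = smf.ols(
        "hnp1 ~ pocket_teeth + age + C(gender) + C(education) + n_teeth",
        data=df,
    ).fit()
    print(adjusted.summary())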
According to the present results, salivary HNP-1 levels elevate gradually with the Linear regression analyses revealed an association between elevated salivary HNP-1 levels and having an increased number of teeth with PPD ≥ 4 mm ( Table 2). Table 2. Unadjusted and adjusted associations between salivary HNP-1 levels and an increased number of teeth with PPD ≥ 4 mm. Discussion To our knowledge, our study is the first to compare the salivary levels of epithelial (hBD) and neutrophilic (HNP) defensins in relation to the periodontal status of an elderly population. Here, we demonstrated that unlike the salivary levels of hBD-1, hBD-2, and hBD-3, those of HNP-1 relate to the extent of periodontal disease in individuals aged 65 years or more. A second novel finding was that the salivary HNP-1 and hBD 1-3 levels of edentulous individuals were close to those of periodontal pocket-free Controls. This finding indicates that in a healthy oral cavity, the salivary defensin levels stay steady even after the extraction of all teeth. According to the present results, salivary HNP-1 levels elevate gradually with the increased number of pocket teeth. This finding of elevated HNP-1 levels in the periodontally diseased elderly is in line with our recent study on defensin levels in an adult population with an age range of 40-60 years [21]. A further comparison of defensin levels between the latter and present studies revealed that the levels of salivary HNP-1 in the Control group elderly were lower (median 33.4 pg/mL, range 0-329 pg/mL) than those in the adult population (median 81.9 pg/mL, range 17.4-184 pg/mL). On the other hand, a comparison of individuals having at least 14 teeth with PPD ≥ 4 mm, between these two study populations (i.e., corresponding with the PerioC group of the present study and a generalized periodontitis group of the latter study), revealed higher HNP-1 levels in the elderly than in working-aged adults (median 163 pg/mL, range 40.2-397 pg/mL vs. median 103 pg/mL, range 17.4-305 pg/mL, respectively). Thus, the earlier observation that HNP-1 levels are suppressed with aging [11] is supported by our findings only for the individuals without periodontitis. Age does not seem to have an effect on the quantity of circulating neutrophils or on the ability of neutrophils to migrate to the infected site, although antimicrobial mechanisms of neutrophils seem to diminish by age [22]. Despite these observations, the current findings indicate that HNP-1 secretion is activated in connection with periodontal infection in the elderly. Based on the present study's results, hBD-2 levels are elevated in saliva where there is an increased extent of periodontitis; however, significant differences disappeared when p values were adjusted by the Bonferroni correction for multiple tests. In our recent study on salivary defensins in working-aged adults [21], the highest hBD-2 levels were measured in the localized periodontitis group (PPD ≥ 4 mm at 2-7 teeth), whereas the hBD-2 levels did not differ between the Control and the generalized periodontitis groups (no teeth vs. ≥14 teeth with PPD ≥ 4 mm). It is possible that the decreased hBD-2 levels with the expanded burden of the infection-induced inflammation are related to the enzymatic degradation of hBDs by host-derived and bacterial proteases. 
In a microbiological study based on the southern Finnish population of the Health 2000 Health Examination Survey [19], it was observed that the salivary detection rates of Porphyromonas gingivalis, which is a highly proteolytic periodontal pathogen, increased considerably with age. After adjusting for several variables, the carriage of P. gingivalis was found to be approximately 20% in the youngest age group (30-34 years), but increased to over 50% in the two oldest age groups (65-74 years and ≥75 years). Although aging, per se, was significantly associated with increased P. gingivalis rates, the factors influencing this age-dependent carriage pattern remained unexplained [19]. Interestingly, we and others have shown that hBDs are prone to proteolytic degradation [23] and that an increase in proteolytic activity negatively correlates with hBD-2 levels in vivo [18]. In the present study, no significant associations were found between the increased number of pocket teeth and hBD-1 or hBD-3 levels in saliva. The latter defensin is closely involved in cellular migration and proliferation, thereby also promoting wound healing [24], whereas hBD-1 is constitutively expressed in the oral and sulcular epithelium [6,25]. Considering that the density and regeneration ability of the oral epithelium decrease with aging [26], it is possible that hBD-1 and hBD-3 responses in the gingiva are suppressed with age. The present study included all ≥65-year-old participants of the southern Finnish population of the Health 2000 Health Examination Survey [19] from whom saliva samples were available; therefore, no sample size determination was performed. Our observation that pocket-free individuals (Controls) had a significantly lower number of teeth than individuals in the PerioB and PerioC groups may be explained by a history of periodontitis treated with extractions in the Control group. As the number of teeth, smoking, and level of education may have direct or indirect effects on salivary hBD levels [6], these parameters were included in the linear regression analysis as covariates. It was observed that individuals with a basic level of education were more often edentulous than the more educated individuals, which is potentially due to differences in self-care attitudes and the cumulative effect of oral diseases [27]. The main limitation of the present study was its cross-sectional design. The lack of defensin data for younger age groups was also a limitation; however, a comparison of our current results with those of working-aged (40-60 years) adults [21] was feasible. The methods for the saliva collection and defensin measurements were similar in both studies, which allowed a reliable comparison. Finally, considering that HNP-1 is the major HNP present in the oral cavity, the salivary HNP-2 and HNP-3 levels were not studied here. Indeed, it has been suggested that HNP-2 is produced by the post-translational proteolytic cleavage of HNP-3 [28]. Future longitudinal studies are necessary to understand the shifts in salivary defensin levels in relation to age, systemic conditions, lifestyle, and oral inflammatory status.

Conclusions

In the elderly, the salivary defensins HNP-1 and hBDs differ in terms of their correlation to the extent of periodontal destruction, as defined by the number of pocket teeth. Further studies are needed to explain the differences in the regulation of these two groups of oral antimicrobial defensins with aging.
In the future, it would be beneficial to measure the activators and suppressors of both α- and β-defensins to explain the underlying mechanisms of antimicrobial peptide stimulation in the elderly. The elevated HNP-1 levels observed in the elderly having advanced periodontal destruction may indicate a diagnostic, predictive value for further progression of the disease.

Institutional Review Board Statement: The present study was performed in accordance with the ethical standards of the Ethical Committee for Epidemiology and Public Health of the Hospital District of Helsinki and Uusimaa, Finland and with the 1964 Helsinki Declaration and its later amendments.

Informed Consent Statement: Informed consent was obtained from all participants involved in the study.

Data Availability Statement: The data of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest: The authors declare no conflict of interest.
Confusion in Thermodynamics

For a long time now, confusion has existed in the minds of many over the meaning of various concepts in thermodynamics. Recently, this point has been brought to people's attention by two articles appearing on the well-known archive (arXiv) web site. The content of these two pieces serves to illustrate many of the problems and has occasioned the construction of this answer to at least some of them. The axiom proposed by Carathéodory is central in this matter and here its position is clarified and secured within the framework of thermodynamics. In particular, its relation to the First Law is examined and justified.

The Carathéodory and Kelvin Forms of the Second Law.

The argument employed in a recent posting [2] on the well-known Cornell University administered archive site by Radhakrishnamurty is a perfect example of where this confusion has helped towards a chaotic misunderstanding of some of the basics of thermodynamics. The claims concerned the validity of the proofs of the equivalence of the Carathéodory and Kelvin forms of the Second Law of thermodynamics. He based his claim on the version of the Second Law which appears in Fermi's book on thermodynamics [3]. As printed, this states that "A transformation whose only final result is to transform into work heat extracted from a source which is at the same temperature throughout is impossible." As with all forms of the Second Law, whether old or more modern, it is important to read all the words and realise that they all form crucial parts of the statement. In the version quoted here, one highly important part is 'whose only final result'. This means quite simply that, at the end of the procedure, the only end result is the transformation of an amount of heat into work. This means that, at the end of the procedure, the entire system must be back where it started or, in other words, the whole process must have been cyclic. Nowadays, statements of the law usually refer specifically to everything occurring in a cycle. However, in Kelvin's day, as is seen quite clearly from the writings of Tait [4], Kelvin was constantly discussing cycles and possibly saw little need to include the word explicitly in his statement of the Second Law. The fact that cycles were always under consideration follows from the fact that thermodynamics arose out of the study of heat engines, which were so important for raising and lowering cages in mines and for pumping water out of many of those mines. Such engines work in cycles. What Landsberg showed in his article [5] was that, if Carathéodory's accessibility criterion were untrue, then an amount of heat could be converted completely into work in a cycle, contrary to the Kelvin form of the Second Law. In Landsberg's cycle, although not stated explicitly since such a statement was not necessary, heat is given to the system from a heat reservoir and, therefore, that heat is transferred from a source at constant temperature. The cycle is completed by returning to the starting point via an adiabatic process, which is permitted if Carathéodory's axiom is not valid, since the failure of the axiom would imply that, in the neighbourhood of a state, all other states would be adiabatically accessible from it. Hence, the proof that Kelvin's form of the Second Law implies the validity of Carathéodory's axiom is valid.
Of course, some may point out that one of the processes here is isothermal and so occurs at a constant temperature, whereas the other is adiabatic and so is accompanied by a temperature change. Hence, the above-mentioned cycle is impossible and so the proof invalid. This is simply not true. The argument advanced by Landsberg simply makes no assumptions about isothermal and adiabatic processes except that, in the first, heat is given to the system at a fixed temperature while, in the second, work is done but no heat is transferred. The Second Law in its more traditional form due to Kelvin then rules out the possibility of such a cycle and the validity of the Carathéodory axiom is verified. There is no contradiction here. A similar argument is used to see in what sense the reverse is true [6] and it is found that Carathéodory's axiom is more general than the usual form of Kelvin's statement of the Second Law but equivalent to the modified form applicable when negative temperatures are considered. Hence, the claims of Radhakrishnamurty are unfounded and, if you consider the form of Kelvin's statement usually in use today [7], which was the one used by both Landsberg and Dunning-Davies in their discussions: "It is impossible to transform an amount of heat completely into work in a cyclic process in the absence of other effects", you can see immediately that there are no real differences between it and the form Fermi attributes to Kelvin; the main difference is that this later version specifically mentions cycles, while the concept is implied in the quoted Fermi version. This, of course, is yet another point that Radhakrishnamurty fails to appreciate; he constantly criticises Landsberg for using an 'incorrect' form of Kelvin's statement of the Second Law. This situation illustrates, once again, how confusion and lack of understanding can creep into people's knowledge of thermodynamics. However, in this particular instance, it is due to a lack of care in reading the separate statements of - in this case - the Second Law. It is always vitally important to read and digest each and every word of such statements. As pointed out earlier, in the form attributed by Fermi to Kelvin, the complete meaning of the words 'whose only final result' must be absorbed and followed. The more modern wording, given above and used by Landsberg, merely singles out for special mention the fact that everything must occur in a cycle so that, after everything, the entire system ends up in exactly the same position from which it started. Not realising that both forms are actually identical indicates a lack of basic understanding and an inability to read and absorb information in totality.

Carathéodory and the First Law.

Admittedly, as acknowledged earlier, thermodynamics is a subject which causes problems for people at all levels of academia. This problem has begun to be addressed [8] in a variety of ways but, from the outset, it must be acknowledged that one way in which problems are created is through unclear and muddled thinking and writing. This is a problem which has plagued thermodynamics almost from the very birth of the subject. The writings of Clausius are a typical case in point and it might be argued that the foundations of much of the confusion existing today rest in his early articles, although here attention is focused on problems associated with the more modern approach associated with the name of Carathéodory. The note to which reference is being made here [9] is an excellent example of the problem.
Firstly, while Carathéodory was invited to make a contribution to the subject by Max Born and while Born himself made contributions, notably via his book Natural Philosophy of Cause and Chance (Dover, New York; 1964), the new more abstract approach was clarified and modified by several people, mainly P.T. Landsberg in England, L. A. Turner, F. W. Sears and M. W. Zemansky in the U.S.A., and H. A. Buchdahl in Australia. In fact, the end result of their work has seen the virtual abandonment of Carathéodory's axiom from use but the retention of methods originally associated with it, namely the technique for deriving the existence of both an absolute temperature and entropy from the Second Law, as well as the equation representing the Second Law:

d´Q = TdS, (1)

where the ´ indicates an inexact differential. Hence, the first problem with the cited article is that it doesn't even represent the historical facts too accurately. However, in this second article by Radhakrishnamurty [9], it is noted that, if a system undergoes an adiabatic change between two equilibrium states A and B, the First Law reduces to

dU = d´W. (2)

That is, during the change, work done produces a change in internal energy. The author then claims that 'state B is not reachable by an adiabatic process from state A only if dU ≠ d´W.' He goes on to claim that 'in other words state B is not reachable from state A by an adiabatic process only if the First Law is violated.' It is the second sentence here which is totally misleading. However, not only is the conclusion misleading, but one must also question the methodology involved. Adiabatic inaccessibility means that there is no work process that leads to the difference in energy between two states, and one way to demonstrate the validity or otherwise of this assertion is to show by logical argument that the assumption of such a process leads to a contradiction. For example, suppose that Carathéodory is invalid and that all states in the neighbourhood of every other state are accessible by an adiabatic process. Thus, state A is accessible from B and vice versa. This would mean that all adiabatic processes are reversible, that in consequence there is no such thing as an irreversible adiabatic process, and that much of the development of thermodynamics is rendered invalid. If we consider by way of example the free expansion from, say, A to B, we know that state A cannot be attained from B by an adiabatic process as the volume must be decreased and work done on the gas. It is necessary, therefore, to extract heat from the system in order to return to state A. Clausius' theorem that, around a closed, irreversible cycle, the integral of d´Q/T is negative, leads to the same result. Clausius' theorem, therefore, implies Carathéodory but, more importantly, we have shown that the initial assumption leads to a contradiction, namely that there are such things as irreversible adiabatic processes, the free expansion being but one example. Radhakrishnamurty seems to believe he has shown, by a similar process, that the assumption that Carathéodory is valid leads to a violation of the First Law. On the contrary, he has assumed an adiabatic process between two states and then simply considered the case when

dU ≠ d´W. (3)

This is nothing more than a contrary assumption to the original, namely that the process linking the two states is not adiabatic.
By the First Law, which can be written generally as

dU = d´Q + d´W, (4)

it is seen that, if a non-adiabatic process is being considered, the term d´Q makes a contribution or, in other words, the difference between the terms dU and d´W is made up by a quantity of heat d´Q. Hence, the conclusion supposedly deduced from the first quoted sentence of the cited article [9] simply does not follow, and Carathéodory's axiom is perfectly compatible with the First Law of Thermodynamics, which is, in fact, what Radhakrishnamurty has actually demonstrated.
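The closing argument can be condensed into a few lines of LaTeX; the equation numbers follow the reconstruction above, and the chain is a restatement of the text rather than an addition to it.

    \begin{align*}
      \mathrm{d}U &= \mathrm{d}'Q + \mathrm{d}'W
        && \text{(First Law, general form (4))}\\
      \mathrm{d}'Q = 0 \;\Rightarrow\; \mathrm{d}U &= \mathrm{d}'W
        && \text{(adiabatic case (2))}\\
      \mathrm{d}U \neq \mathrm{d}'W \;\Rightarrow\; \mathrm{d}'Q &= \mathrm{d}U - \mathrm{d}'W \neq 0
        && \text{(the process is not adiabatic)}
    \end{align*}

Assumption (3) therefore contradicts adiabaticity, not the First Law, which is precisely the point of the rebuttal.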
Voxel-Based Quantitative MRI Reveals Spatial Patterns of Grey Matter Alteration in Multiple Sclerosis

Despite robust postmortem evidence and the potential clinical importance of gray matter (GM) pathology in multiple sclerosis (MS), assessing GM damage by conventional magnetic resonance imaging (MRI) remains challenging. This prospective cross-sectional study aimed at characterizing the topography of GM microstructural and volumetric alteration in MS using, in addition to brain atrophy measures, three quantitative MRI (qMRI) parameters - magnetization transfer (MT) saturation, longitudinal (R1), and effective transverse (R2*) relaxation rates - derived from data acquired during a single scanning session. Our study involved 35 MS patients (14 relapsing-remitting MS; 21 primary or secondary progressive MS) and 36 age-matched healthy controls (HC). The qMRI maps were computed and segmented into different tissue classes. Voxel-based quantification (VBQ) and voxel-based morphometry (VBM) statistical analyses were carried out using multiple linear regression models. In MS patients compared with HC, three configurations of GM microstructural/volumetric alterations were identified. (a) Co-localization of GM atrophy with significant reduction of MT, R1, and/or R2*, usually observed in primary cortices. (b) Microstructural modifications without significant GM loss: hippocampus and paralimbic cortices, showing reduced MT and/or R1 values without significant atrophy. (c) Atrophy without significant change in microstructure, identified in deep GM nuclei. In conclusion, this quantitative multiparametric voxel-based approach reveals three different spatially-segregated combinations of GM microstructural/volumetric alterations in MS that might be associated with different neuropathology.

Importantly, most of these studies did not characterize the spatial distribution of GM microstructural alterations when considering several quantitative parameters. In this paper, we provide a whole-brain voxel-based quantification (VBQ) of three qMRI parameters - MT saturation, R1 and R2*, derived from data acquired in a single MR session - and assess the spatial distribution of their changes in a cross-sectional study which contrasted MS patients with HC. Potential GM atrophy was also investigated by a concurrent voxel-based morphometry (VBM) analysis. Our study aimed at characterizing the spatial distribution of microstructural and volumetric GM alterations induced by MS at the regional level (i.e., voxel-wise). Results are presented as different spatial combinations of atrophy and microstructural damage in cortical and deep GM, in MS compared with healthy controls.

| Participants

Seventy-two participants were initially included in the study (Lommers et al., 2019). This study was approved by the local ethics committee (B707201213806) and written informed consent was obtained from each participant.

| MR image acquisition and spatial processing

MRI data were acquired on either of the following 3 T MRI scanners: Magnetom Allegra and Magnetom Prisma, Siemens Medical Solutions, Erlangen, Germany. The whole-brain MRI acquisitions included a multiparameter mapping (MPM) protocol that has been gradually optimized and validated for multi-centric acquisitions (Leutritz et al., 2020; Tabelow et al., 2019; Weiskopf et al., 2013).
It consists of three co-localized series of 3D multi-echo fast low angle shot (FLASH) acquisitions at 1 × 1 × 1 mm3 resolution and two additional calibration sequences to correct for inhomogeneities in the RF transmit field (Lutti et al., 2012). The FLASH data sets were acquired with predominantly proton density (PD), T1, and MT weighting, referred to in the following as PDw, T1w and MTw echoes. Volumes were acquired in 176 sagittal slices using a 256 × 224 voxel matrix. Details of the MPM protocol used for this study are available as supplementary data. An additional FLAIR sequence was recorded with spatial resolution 1 × 1 × 1 mm3 and TR/TE/TI = 5,000 ms/516 ms/1800 ms. All data analyses and processing were performed in Matlab (The MathWorks Inc., Natick, MA) using SPM12 (http://www.fil.ion.ucl.ac.uk/spm) and its extensions. MT saturation, R1 and R2* quantitative maps were estimated using the hMRI toolbox (http://hmri.info/) as previously described (Tabelow et al., 2019). Briefly, echoes for T1w, PDw, and MTw were extrapolated to TE = 0 to increase the signal-to-noise ratio and get rid of the otherwise remaining R2* bias (Tabelow et al., 2019). The resulting MTw and T1w (TE = 0) images were used to calculate MT saturation and R1 quantitative maps. To maximize the accuracy of the R1 and MT saturation maps, inhomogeneity in the flip angle was corrected by mapping the B1 transmit field according to the procedure detailed in Lutti et al. (2012). In addition, intrinsically imperfect spoiling characteristics were accounted for and corrected in the R1 map, using the approach described previously (Preibisch et al., 2009). The MT saturation map differs from the commonly used MT ratio (MTR, percent reduction in steady state signal) by explicitly accounting for spatially varying T1 relaxation times and flip angles. MT saturation shows a higher brain contrast-to-noise ratio than the MTR, leading to improved and more robust segmentation in healthy subjects (Helms, Dathe, & Dechent, 2010). The R2* map was estimated from all three multi-echo series using the ESTATICS model (Tabelow et al., 2019). Example whole-brain maps are shown in Figure 1. Note that these MR sequences at 3 T are not sensitive enough to detect focal cortical lesions, as previously described (Hulst & Geurts, 2011). Quantification of cortical parameters is thus possibly confounded by voxels located within cortical plaques. Multi-channel segmentation and normalization of the MR images were performed with the standard "unified segmentation" (US) approach for the HC and its US-with-Lesion extension, accounting for WM lesions, for the MS patients. This part of the processing is largely detailed in a previous publication (Lommers et al., 2019). Briefly, for each MS patient, a preliminary lesion mask was derived from the FLAIR images and used to update the "tissue probability maps" (TPMs) with an extra lesion tissue class limited to the WM. This patient-specific extended TPM was then used in the US tool, therefore accounting for the usual brain and head tissues plus the lesions (Phillips, Lommers, & Pernet, 2016). The individual lesion fraction (LF, ratio of WM lesion load to total intracranial volume) was computed afterwards from the segmented tissue classes. For VBM analyses, GM probability maps (including cortical and deep GM) were spatially warped to standard space, modulated by the Jacobian determinants of the deformations, and smoothed with an isotropic Gaussian kernel (6 mm full width at half maximum, FWHM).
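As a concrete illustration of the lesion-fraction step described a few sentences above, the following Python/nibabel sketch computes LF from segmented tissue probability maps; the filenames and the four-class decomposition of total intracranial volume are our assumptions, not the authors' exact pipeline.

    import nibabel as nib
    import numpy as np

    # Hypothetical filenames for one patient's tissue probability maps
    img = nib.load("c1_patient.nii")               # grey matter
    gm  = img.get_fdata()
    wm  = nib.load("c2_patient.nii").get_fdata()   # white matter
    csf = nib.load("c3_patient.nii").get_fdata()   # cerebrospinal fluid
    les = nib.load("c4_patient.nii").get_fdata()   # WM lesion class

    voxel_vol = np.prod(img.header.get_zooms())    # voxel volume in mm^3

    # Lesion fraction = WM lesion load / total intracranial volume
    tiv = (gm + wm + csf + les).sum() * voxel_vol
    lesion_fraction = les.sum() * voxel_vol / tiv
    print(f"LF = {lesion_fraction:.4f}")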
For VBQ analyses, the three quantitative maps were normalized using the subject-specific deformation field but without modulation. Tissue-weighted smoothing (3 mm FWHM isotropic) yielded smoothed tissue-specific multiparameter maps which optimally preserved quantitative parameter values within each tissue class (Draganski et al., 2011). A detailed analysis of the influence of spatial deformations on quantitative parametric values proved this method to be largely insensitive to volumetric changes (i.e., atrophy) (Salvoni et al., 2019). Finally, a GM mask was generated: the smooth modulated warped individual GM, WM and CSF maps were averaged across all subjects and the GM mask included voxels for which mean GM probability was larger than that of WM or CSF and exceeded 20% (Callaghan et al., 2014).

| Statistical analyses

Whole-GM voxel-wise VBM and VBQ statistical analyses, explicitly using the GM mask, were carried out using a multiple linear regression model embedded in the general linear model framework of SPM12. MRI data were analyzed in a factorial design, with the two different scanners as one factor and the group (MS vs. HC) as the second factor (Stonnington et al., 2008). Age, gender and total intracranial volume were entered as covariates of no interest. Differences between MS patients and HC as well as interactions between groups and scanners were tested by separate F-tests for each quantitative parameter (MT, R1, R2*) and volume. Post hoc t-tests explored significant effects. Cluster-level inferences were conducted at p < .05 after family-wise error rate (FWER) correction for multiple comparisons across the whole GM (p < .0001 uncorrected cluster-defining threshold). These two-sample t-tests identified significant group effects, over and above the normal spatially heterogeneous distribution of quantitative parameters (Deistung et al., 2013), while accounting for potential unequal variance across groups. In the patient population, three F-tests looked for significant voxel-wise regressions between each qMRI parameter and the clinical scores (EDSS, motor and cognitive composite scores) as well as the lesion fraction. The significance threshold was set at p < .05 FWER-corrected at cluster level (p < .0001 uncorrected cluster-defining threshold).

| RESULTS

Compared with HC, we identified significant loco-regional GM alterations, which fell into the three spatial patterns summarized in the abstract: combined atrophy and microstructural change in primary cortices, microstructural change without atrophy in the hippocampus and paralimbic cortices, and atrophy without microstructural change in deep GM nuclei.

| DISCUSSION

| Pattern 1: Primary cortices

Primary cortices may be preferentially affected, supposedly because of their intrinsically heavy metabolic load (Calabrese et al., 2015) and their frequent involvement in focal WM inflammation. Finally, because of their numerous folds, these regions are more exposed to cerebrospinal fluid (CSF) stasis, supporting the hypothesis that soluble factors produced in the CSF by lymphocytes influence subpial demyelination, particularly in patients with progressive MS (Magliozzi et al., 2018).

| Pattern 2: Hippocampus

The evidence of substantial demyelination of hippocampi beyond atrophic areas constitutes a key contribution of this study and usefully complements previous characterization of hippocampal damage in MS (Rocca et al., 2018). Indeed, demyelination is detected postmortem in 53 to 79% of MS hippocampi (Dutta et al., 2011; Dutta et al., 2013; Geurts et al., 2007). Neuronal loss is inconsistently observed in demyelinated hippocampi while synaptic density is systematically decreased (Dutta et al., 2011; Geurts et al., 2007; Papadopoulos et al., 2009). By the same token, chronic inflammation potentially enhances neurogenesis within the dentate gyrus (Rocca et al., 2015).
Although the functional significance of these cellular changes is still under debate (Pluchino et al., 2008; Zhao, Deng, & Gage, 2008), they may balance neuronal loss at the structural level (Rocca et al., 2015).

| Pattern 3: Deep gray matter nuclei

Our results show significant atrophy of the DGM in MS and agree with previous reports (Hulst & Geurts, 2011). Thalamic and putaminal atrophy relates to significant neuronal and axonal loss. It occurs very early in the disease course and exceeds cortical atrophy (Eshaghi et al., 2018). Due to their extensive reciprocal connections with cortical and subcortical structures, the thalami are particularly vulnerable to anterograde and retrograde degeneration. This interpretation is supported by the significant inverse relationship between each qMRI parameter value (MT, R1, and R2*) within the thalami and the lesion load, which suggests that lesions in connecting WM tracts also alter thalamic microstructure. These neurodegenerative processes likely dominate local inflammatory activity and oxidative injury, which were also reported (Haider et al., 2016) but were not sensitively assessed in this study.

| Limitations

This cross-sectional study was run on a relatively small sample. To preserve statistical power, the two MS phenotypes were pooled together. We cannot rule out that results are partly driven by the larger proportion of PMS over RRMS patients, although exploratory tests did not show any significant difference between the two patient groups when corrected for multiple comparisons. When considering results significant at p < .001 uncorrected for multiple comparisons, PMS patients showed reductions in MT, R1, and R2* in a number of regions that were already detected in the contrast involving healthy controls and the whole MS population. This might indicate that demyelination becomes even more severe as the disease progresses. Because differences in microstructure between RRMS and PMS patients are of paramount importance, they will be assessed in future work, based on larger and independent population samples. The factorial design used in this study offers the possibility to test separately the effect of disease and the effect of scanner, as well as their interaction. The absence of a significant group-by-scanner interaction allows us to disentangle the potential confound introduced by the two different scanners and still reliably discuss the effect of disease on GM microstructure. Furthermore, the acquisition protocol has been optimized, pointing out the opportunity for multi-centric studies (Leutritz et al., 2020). Finally, our results do not confirm previous reports linking thalamic and hippocampal damage to motor performance and cognitive dysfunction in MS patients (Eshaghi et al., 2018; Rocca et al., 2018). Inferences were conservatively made after correction for multiple comparisons over the whole GM, increasing the risk of Type II error. In this preliminary study, we indeed considered that conservative inferences had to be preferred to spurious results. Alternatively, it might be the case that microstructural alterations precede the occurrence of clinical symptoms: longitudinal studies are needed to answer this question. Moreover, spinal cord lesions were not taken into account although they impact motor performance.

| CONCLUSION

This multiparametric voxel-based approach identifies three different spatially-segregated patterns of GM microstructural/volumetric alterations in MS patients, which might be associated with different neuropathology.
The results highlight the usefulness of qMRI parameters and their complementarity with volumetric techniques in assessing GM status in MS.

ACKNOWLEDGMENTS

The authors are particularly thankful to the patients and healthy participants who eagerly took part in this study.

CONFLICT OF INTEREST

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
Reach Kinematics During Binocular Viewing in 7- to 12-Year-Old Children With Strabismus

Purpose: Eye–hand coordination is essential for normal development and learning. Discordant binocular experience from childhood strabismus results in sensory and ocular motor impairments that can affect eye–hand coordination. We assessed reach kinematics during visually guided reaching in children treated for strabismus compared with controls.

Methods: Thirty-six children aged 7 to 12 years diagnosed with esotropia, a form of strabismus, and a group of 35 age-similar control children were enrolled. Reach movements during visually guided reaching were recorded using the LEAP Motion Controller. While viewing binocularly, children reached out and touched a small dot that appeared randomly in one of four locations (±5° or ±10°). Kinematic measures were reach reaction time, total reach duration, peak velocity, acceleration duration, and deceleration duration. Touch accuracy and factors associated with impaired reach kinematics were evaluated.

Results: Strabismic children had longer total reach duration (545 ± 60 ms vs. 504 ± 43 ms; P = 0.002), had longer deceleration duration (343 ± 54 ms vs. 312 ± 45 ms; P = 0.010), and were less accurate (93% ± 6% vs. 96% ± 5%, P = 0.007) than controls. No differences were found for reach reaction time, peak velocity, or acceleration duration (all Ps ≥ 0.197). Binocular dysfunction was more related to slow reaching than amblyopic eye visual acuity.

Conclusions: Strabismus affects visually guided reaching in children, with slower reaching in the final approach and reduced endpoint accuracy. Binocular dysfunction was predictive of slow reaching. Unlike strabismic adults, who show longer acceleration duration, longer deceleration in the final approach in strabismic children indicates a difference in control that could be due to a reduced ability to use visual feedback.

Strabismus is a common pediatric eye condition that affects 2% to 4% of children and results in discordant binocular experience. 1,2 Esotropia is a form of strabismus with a nasalward eye turn that can result in a constellation of vision deficits, including amblyopia, binocular dysfunction, and ocular motor deficits that persist even after the eyes have been aligned with glasses or surgery. [3][4][5][6][7][8] Because esotropia emerges during a critical period of brain development and the effects persist throughout childhood, it has the potential to interfere with other developing systems that rely on vision, such as the motor system. Coordination between eye and hand movements is essential for efficient object manipulation. Interacting with objects in three-dimensional space requires depth perception cues in order to localize the object, plan the movements, and guide the arm toward the object of interest. 9,10 Normal binocular vision during childhood provides important sensory input for optimal development of eye-hand coordination. [11][12][13] Therefore, discordant binocular experience early in life can significantly affect the maturation of eye-hand coordination. Strabismic and amblyopic children have impaired fine motor skills that require eye-hand coordination, such as placing coins into a box, threading beads on a string, and transferring test answers to a multiple-choice form. [14][15][16][17][18] We recently reported fine motor deficits in esotropic and anisometropic children on a standardized test of motor ability, the Movement Assessment Battery for Children 2. 16
16 Poor performance was associated with binocular dysfunction (reduced or nil stereoacuity and interocular suppression), regardless of whether amblyopia was present. 16 Therefore, the extent of visuomotor deficits appears to be more closely associated with binocular dysfunction than with the severity of visual acuity deficit, indicating that normal stereoacuity and fusion are essential to optimal task performance during childhood. 14,16,17,19 Reaching is completed in two stages: an acceleration stage that reflects feedforward control (i.e., motor planning) and a deceleration phase that reflects online feedback control. 20 Assessing reach kinematics during fine motor tasks can provide information on these two stages and on developmental changes that occur over time. Typically developing children 5 years of age use visual information for planning reaching movements but do not rely on visual feedback to make online corrections, while children 7 years of age begin to use visual feedback to adjust limb trajectory during the movement. 21,22 Young amblyopic children aged 4 to 8 years with strabismus or anisometropia have prolonged reach in the final approach when reaching to grasp during binocular viewing, related to binocular dysfunction. 11,23 However, it is unknown whether the simpler task of reaching to touch is also affected. Here, we evaluated reach kinematics in older children aged 7 to 12 years with a history of esotropic strabismus as they performed a simple reach-to-touch task that required children to touch a dot on the screen with both eyes open. Our goal was to determine the extent to which strabismus affects the maturation of visuomotor control in older children. Further, we aimed to explore factors associated with any reach kinematic deficits, such as the amblyopia and binocular dysfunction typical of strabismus. Because children are just learning to use visual feedback for online corrections at 7 years of age and may not yet have adapted or formed a compensatory strategy, we hypothesized that strabismic children would be slower than controls in the deceleration phase. Further, we predicted that slow reaching would be associated with binocular dysfunction. These data will not only aid in the understanding of how motor skills develop and how they are disrupted by abnormal visual experience but may also help guide interventions to ameliorate or prevent eye-hand coordination impairments in children with esotropia.

METHODS

Participants
Strabismic children aged 7 to 12 years diagnosed with esotropia (herein called strabismic) were diagnosed and referred to the Retina Foundation by pediatric ophthalmologists in the Dallas-Fort Worth area. Strabismic children were initially diagnosed with esotropia but aligned with surgery or spectacle correction to within 12 prism diopters of orthotropia near the time of the test visit. Children with combined mechanism (i.e., strabismus + anisometropia) were included in the strabismus group. Age-similar control children with age-normal visual acuity and stereoacuity and no history of vision disorders were also enrolled. All children were tested with their habitual spectacle correction, which was confirmed by medical record review. No child enrolled in the study was born preterm (<37 weeks gestational age) or had coexisting ocular or systemic disease, congenital infections/malformations, or (neuro)developmental delays.
Piloting showed that children with arm lengths (shoulder to fingertip) less than 50 cm could not comfortably reach the dot on the screen and thus were not enrolled. Medical records were obtained from referring ophthalmologists to extract diagnosis, current alignment, and prior treatment plan. English was the primary language for all children.

Ethics
The research protocol observed the tenets of the Declaration of Helsinki, was approved by the Institutional Review Board of the University of Texas Southwestern Medical Center, and conformed to the requirements of the US Health Insurance Portability and Privacy Act. Informed consent was obtained from a parent or legal guardian, and assent was obtained from children ≥10 years of age prior to testing and after explanation of the study.

Procedure
Vision Assessment. Vision was assessed prior to the visually guided reaching task.
Visually Guided Reaching. Testing took place in a well-lit room and children wore their habitual optical correction during testing if required. Testing was completed with both eyes open and with the child's self-reported dominant hand. Each child was seated at a table with their head stabilized using a forehead/chin rest. We used a previously established visually guided reaching protocol 30,31 to examine kinematic strategies used by strabismic children to plan and execute reaching. Reach kinematic measures were recorded with the Leap Motion Controller system (LMC, software version 4.0; Leap Motion, Inc., San Francisco, CA, USA), a three-dimensional (3D) motion capture system that records upper limb movements using two cameras and three infrared LEDs. The LMC was placed 10 cm in front of the initial hand position. The initial position of the hand was standardized by having the child use their index finger and thumb to hold a stick affixed to the table at body midline, 5 cm away from the eyes (Fig. 1). Viewing distance of the display monitor was 35 cm.
(Figure 1 caption: Once the cross disappeared, a small white dot appeared on the left or right, displaced 5 or 10 degrees from fixation. The child was instructed to reach out and touch the dot as quickly and accurately as possible and then return to the stick. The LMC recorded hand movements and was placed 10 cm from the hand's initial starting position.)
Prior to testing, hand calibration was completed by having the child first hold onto the stick and then reach out with their index finger to touch a 0.3° white dot that appeared sequentially from left to right on the black screen in five different horizontal positions (−10°, −5°, 0°, +5°, +10°). For visually guided reaching, the child was instructed to fixate a white cross (1.4°) with a red dot in the middle that appeared in the center of the screen. Once the cross disappeared, a 0.3° white dot appeared randomly at one of four locations horizontally displaced (±5° or ±10°) from fixation. The child was instructed to let go of the stick and reach out and touch the dot with the tip of their index finger as quickly and accurately as possible. As a measure of touch accuracy for each trial, an experimenter observed and recorded whether the child's finger covered the dot (touched) or whether the dot was visible when the child was touching the screen (missed). A total of 40 trials were completed per child, with the first 4 trials counting as practice trials (36 experimental trials). Test time was approximately 15 minutes.
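To make the task geometry concrete, the angular offsets can be converted into on-screen distances at the stated 35 cm viewing distance. This small calculation is not from the paper, but it reproduces the 0.3° ≈ 1.8 mm dot size quoted in the limitations section:

```python
import math

VIEWING_DISTANCE_MM = 350  # display at 35 cm, per the procedure

def visual_angle_to_mm(angle_deg: float, distance_mm: float = VIEWING_DISTANCE_MM) -> float:
    """On-screen extent subtending `angle_deg` at `distance_mm`."""
    return 2 * distance_mm * math.tan(math.radians(angle_deg) / 2)

# Dot size: 0.3 deg -> ~1.8 mm, matching the value quoted in the Discussion.
print(f"0.3 deg dot: {visual_angle_to_mm(0.3):.2f} mm")

# Horizontal target offsets (dot center measured from fixation):
for angle in (5, 10):
    offset = VIEWING_DISTANCE_MM * math.tan(math.radians(angle))
    print(f"+/-{angle} deg target: {offset:.1f} mm from screen center")
```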
Saccades during visually guided reaching were simultaneously recorded with a 500-Hz high-speed video binocular eye tracker (EyeLink 1000; SR Research, Ontario, Canada), but saccade data are not reported in this article.

Data Processing
Reach kinematic data were collected for each trial with the LMC and recorded with a custom Java application using the LMC Software Development Kit (Core Assets 4.1.1). Because the task involved reaching and touching with the index finger, LMC position data from the index finger were extracted and analyzed using a custom MATLAB script (MathWorks, Inc., Natick, MA, USA) that followed previously established signal-processing techniques. 12,32 Briefly, data were first fitted using a cubic spline function and resampled at 50 Hz using the MATLAB function pchip. Next, a Hampel filter was used to remove outliers, and a low-pass second-order Butterworth filter with a cutoff frequency of 10 Hz was then applied. Instantaneous velocity was obtained using a two-point differentiation method, and the velocity data were filtered using a low-pass second-order Butterworth filter with a cutoff frequency of 6 Hz.
(Figure 2 caption: Light blue line is raw LMC data, and dark blue line is resampled, filtered LMC data. Blue circle, reach initiation; red circle, peak velocity; green circle, reach termination.)
All position and velocity trajectories were visually inspected and screened for missing frames or artifacts based on previously established criteria. 12,32 Children with fewer than 14 useable trials (at least 7 useable trials per side, left/right) were excluded from further analysis (4 control, 10 strabismic). The custom MATLAB script was used to identify two kinematic events: reach initiation (defined as velocity exceeding 20 mm/s) and reach termination (defined as velocity falling below 100 mm/s). These criteria are consistent with previous literature measuring reach kinematics. 32,33 These events were used to calculate the following kinematic outcome measures (Fig. 2):
1. Reach reaction time (ms): the interval between onset of the dot and reach initiation
2. Total reach duration (ms): the interval between reach initiation and reach termination
3. Peak velocity (m/s): the maximum (i.e., peak) velocity attained during the reach
4. Acceleration duration (ms): the interval between reach initiation and peak velocity
5. Deceleration duration (ms): the interval between peak velocity and reach termination
(Table 1 fragment, strabismic vs. control: Age, mean ± SD (range), y: 9.6 ± 1.7 (7.1 to 12.7) vs. 9.7 ± 1.9 (7.0 to 12.9); Arm length, mean ± SD (range), cm: 57 ± 5 (50 to 70) vs. 58 ± 5 (51 to 68); Prior eye alignment surgery: yes, n (%): 17 (47).)
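The filtering and event-detection pipeline above is straightforward to reproduce. Below is a minimal sketch in Python/SciPy (the authors used a custom MATLAB script); the thresholds and filter settings come from the text, while the Hampel window length is an assumption:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.signal import butter, filtfilt

FS = 50  # resampling rate (Hz), per the paper

def hampel(x, window=5, n_sigmas=3):
    """Simple Hampel outlier filter: replace points far from the local median.
    The window length is an assumption; the paper does not state it."""
    y = x.copy()
    k = 1.4826  # scale factor relating MAD to std for Gaussian data
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            y[i] = med
    return y

def process_trial(t_raw, pos_raw):
    """Resample, filter, and differentiate one trial of finger position (mm).
    t_raw must be strictly increasing (seconds)."""
    t = np.arange(t_raw[0], t_raw[-1], 1.0 / FS)
    pos = PchipInterpolator(t_raw, pos_raw)(t)   # shape-preserving cubic, like MATLAB pchip
    pos = hampel(pos)                            # remove outliers
    b, a = butter(2, 10, btype="low", fs=FS)     # 2nd-order low-pass, 10 Hz cutoff
    pos = filtfilt(b, a, pos)
    vel = np.gradient(pos, 1.0 / FS)             # two-point differentiation -> mm/s
    b, a = butter(2, 6, btype="low", fs=FS)      # velocity filtered at 6 Hz
    vel = filtfilt(b, a, vel)
    return t, pos, vel

def reach_events(t, vel):
    """Reach initiation (|velocity| > 20 mm/s) and termination (< 100 mm/s after peak)."""
    onset = np.flatnonzero(np.abs(vel) > 20)[0]
    peak = onset + int(np.argmax(np.abs(vel[onset:])))
    end = peak + np.flatnonzero(np.abs(vel[peak:]) < 100)[0]
    return t[onset], t[peak], t[end]
```

From the three returned event times, the five outcome measures above follow directly (e.g., deceleration duration is the termination time minus the peak-velocity time).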
Statistical Analyses
Primary analyses. Our primary goal was to determine the impact of strabismus on reach kinematics during visually guided reaching. We used independent t-tests to compare strabismic children to control children on each of the reach kinematic measures (reach reaction time, total reach duration, peak velocity, acceleration duration, deceleration duration) and touch accuracy.
Secondary analyses. To determine factors related to reach kinematics, we compared clinical and sensory factors among the strabismic group to controls using independent t-tests for prior surgery (yes, no), amblyopia present (yes, no), stereoacuity measurable (present, nil), extent of suppression (Worth four-dot; bifoveal/macular, −0.15 to 0.45 log deg; peripheral/none, 0.60 to 1.2 log deg), and depth of suppression (CBI; no suppression, ≤2; suppression, >2). For data that were not normally distributed according to the Shapiro-Wilk test of normality, Mann-Whitney U tests were performed. All tests were corrected for multiple comparisons, and P values were adjusted using Holm's sequential Bonferroni procedure, which corrects for type I error as effectively as the traditional Bonferroni method while retaining more statistical power. 34 Effect size was also calculated using Cohen's d. Multiple regression analyses using stepwise selection were conducted to determine the contribution of sensory factors assessed by common clinical tests (amblyopic eye BCVA, stereoacuity, extent of suppression) to reach kinematics.

RESULTS
Data from 36 strabismic children (female = 22; age, mean ± SD = 9.6 ± 1.7 years) and 35 control children (female = 19; 9.7 ± 1.9 years) were included in the analysis. Children with strabismus did not differ from controls in age (P = 0.90) or arm length (P = 0.23). Descriptive statistics for clinical and sensory information are provided in Table 1.

Factors Associated With Reaching Kinematics
We further probed why children with strabismus had longer total reach duration and longer deceleration duration by evaluating clinical and sensory factors. For each factor, we also examined the percentage of the total reach duration that was spent in the deceleration phase ((deceleration duration / total reach duration) × 100) and touch accuracy. In general, prior surgery, the presence of amblyopia, nil stereoacuity, and marked suppression (by extent and depth) were all associated with impaired reach kinematics compared to controls, whereas no prior surgery, no amblyopia, measurable stereoacuity, and minimal suppression were not (see Table 2). Multiple regression analyses were also used to test if sensory factors (amblyopic eye BCVA, stereoacuity, extent of suppression scotoma [Worth four-dot]) significantly predicted total reach duration and deceleration duration. For each model, the variance inflation factor (VIF) was < 1.2, indicating that the risk of multicollinearity was low.
Total reach duration. Stereoacuity was the only significant predictor of total reach duration, accounting for 16.3% of the variance (R² = 0.163, F(1, 34) = 6.61, β = 0.40, P = 0.015). The regression estimate was positive, indicating that those with worse stereoacuity had longer total reach duration. Amblyopic eye BCVA and extent of suppression were not significant predictors of total reach duration.
Deceleration duration. Extent of suppression was the only significant predictor of deceleration duration, accounting for 17.2% of the variance (R² = 0.172, F(1, 34) = 7.06, β = 0.42, P = 0.012). The regression estimate was positive, indicating that those with a larger suppression scotoma had longer deceleration duration. Amblyopic eye BCVA and stereoacuity were not significant predictors of deceleration duration.
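As a side note on the statistics above, Holm's sequential Bonferroni procedure is available off the shelf. A minimal sketch using statsmodels, with illustrative p-values loosely echoing the abstract (this is not the authors' code):

```python
from statsmodels.stats.multitest import multipletests

# Illustrative family of p-values for the primary kinematic comparisons
# (total reach duration, deceleration duration, accuracy, reaction time,
# peak velocity, acceleration duration) -- placeholder numbers only.
pvals = [0.002, 0.010, 0.007, 0.197, 0.450, 0.620]

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for p, pa, r in zip(pvals, p_adj, reject):
    print(f"raw p={p:.3f}  holm-adjusted p={pa:.3f}  significant={r}")
```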
DISCUSSION
Slower reaching in strabismic children diagnosed with esotropia, especially in the deceleration phase, is consistent with other studies of amblyopic children and adults who have prolonged reach in the final approach of the more complex task of grasping. Children in our study did not have to shape their hands in preparation for grasping, and thus less planning may be involved, which may have an effect on the duration of the reach. Further, because there is less planning involved in reaching to touch than reaching to grasp, errors in our task come at less of a cost (i.e., colliding with/dropping an object). Yet, even in this simple reach-to-touch task, deficits in reaching time and accuracy were still present. Longer deceleration in the final approach may indicate impaired quality or use of visual feedback for motor control, supported by our finding of lower touch accuracy. 35 Spatial distortions and positional uncertainty are present in strabismus [36][37][38] and could affect the sensorimotor transformation during visually guided reaching. Further adding to the reduced efficiency of the use of visual feedback could be the ocular motor deficits typical of strabismus, including fixation instability and abnormal saccade initiation and execution. 4,6,8 Temporal eye-hand coordination during visually guided reaching in amblyopic adults with strabismus and anisometropia is associated with increased corrective saccades, a compensatory strategy to maintain reach precision and accuracy, 39,40 particularly among adults with nil stereoacuity. Our preliminary saccade data show that strabismic children (n = 10) have longer saccade onset latency than controls (n = 10) during visually guided reaching (unpublished data). However, saccades did not differ from controls once the eyes started moving; that is, saccade amplitude, peak velocity, temporal eye-hand coordination (time between saccade initiation and reach initiation), and frequency of corrective saccades were similar to controls. This is unlike strabismic adults, who show normal saccade latency but more reach-related corrective saccades during binocular viewing, suggesting a compensatory change in strategy that develops with age. Alternatively, children may be making corrective secondary movements during the reach to be more accurate, indicating a problem with motor planning rather than use of visual feedback. Unfortunately, we were unable to determine if corrective movements to the dot were made, due to the spatial resolution limitations of the LMC. However, our strabismic children were less accurate in touching the dot compared with controls, suggesting that inefficient use of visual feedback during online control is a more likely cause of slow reaching in the final approach. Adults with childhood-onset strabismus exhibit reduced peak acceleration and prolonged acceleration during binocular viewing on a visually guided reaching task while maintaining normal endpoint accuracy and precision. 31 Lower peak velocity and longer acceleration in the initial approach would indicate that their ability to use vision to plan movements is reduced or that the control system adopted a sensorimotor compensatory strategy in order to maintain endpoint accuracy and precision. 31,35 In typically developing children, developmental changes in reaching strategy occur with age: children 5 years of age use visual information for planning during reaching and grasping but do not rely on visual feedback to make online corrections. In other words, they rely more on their motor plan and do not adjust errors during reaching based on visual feedback. Children start relying on visual feedback to adjust for errors around 7 years of age, when their reach control starts to become more adultlike. 21,22 During grasping, amblyopic children 5 to 7 years of age take longer in the final approach than controls and rely more on visual feedback to guide movement, whereas children 7 to 9 years take longer to manipulate the object, relying more on tactile feedback.
11,23 Strabismic children in our study and children with binocular dysfunctions in a previous study 11 exhibit endpoint inaccuracies during reaching and grasping while viewing binocularly. Further, the reach unfolds in a different manner than in controls, with a larger proportion of the total reach spent in the deceleration phase. Thus, the switch between longer deceleration in strabismic children and longer acceleration in strabismic adults points to a compensatory strategy that develops over time with experience so that they can be more accurate in their movements. Slower reaching in strabismic children who had eye alignment surgery does not necessarily point to a motor deficit caused by surgery. Instead, we suggest that the poorer binocular outcomes associated with the type and severity of strabismus that requires surgery may be at fault. Further, there may have been a longer duration of misalignment in strabismus that requires surgery rather than strabismus that requires only glasses to align the eyes. In our study, children who had surgery in fact had worse stereoacuity than those who did not have surgery, with almost all children who had a history of surgery (18/19) having nil stereoacuity but only half of those without surgery (8/17) having nil stereoacuity. It is possible that children who had surgery may be performing better than they did prior to surgery. Specifically, eye alignment following surgery may influence visuomotor learning such that children become more efficient in using visual feedback as they reach out and manipulate objects. Indeed, there is evidence that motor skills improve following eye alignment surgery for strabismus in children. 41,42 Several studies report fine motor deficits in children with amblyopia, such as placing coins into a box, threading beads on a string, and transferring test answers to a multiple-choice form during binocular viewing. [14][15][16]18 Other studies show that amblyopia affects reach kinematics during a grasping task; children and adults with amblyopia are slow at planning and executing reaching movements and have inaccurate grasp during binocular viewing. 11,23,43 During visually guided reaching, adults with amblyopia exhibit reduced peak acceleration and prolonged acceleration during binocular viewing, without affecting accuracy and precision. 30,31 Thus, amblyopia affects fine motor skills during binocular viewing, even though one eye has normal visual acuity. The issue may be suppression, with the amblyopic eye causing interference by adding noise to the system. Or, children with amblyopia and interocular suppression may not be using their amblyopic eye to perform the task and instead are using their fellow eye. This may point to a fellow eye deficit in performance. Indeed, studies have also shown slower reaching during fellow eye viewing in amblyopic individuals. Further, fellow eye deficits have been reported for other visual functions such as ocular motor function, motion perception, and reading (for a review, see Birch et al. 44 ). However, we did not test children monocularly, either with their amblyopic eye or their fellow eye, and thus we cannot be sure which eye is contributing to the reaching deficit found in our study. Although the presence of amblyopia was associated with motor deficits, this is likely due to the correlation with reduced or absent binocular function.
In support of this is our finding that binocular dysfunction (reduced stereoacuity, suppression) was predictive of slow reaching, whereas amblyopic eye visual acuity was not. This is consistent with previous studies showing the importance of good binocularity in fine motor performance. 11,14,17,19,[45][46][47] Binocular cues provide vital information for judging distance, location, and 3D properties of objects during motor tasks. 9,48 The use of binocular cues emerges in infancy and continues to mature during childhood. [49][50][51] Binocularly discordant input from strabismus early in infancy and childhood may thus disrupt the ability to use binocular cues during the development of motor ability. 23,52 This is further supported by the similar performance to controls of strabismic children with better binocularity in our study. Previous research shows better motor performance in those with recovered binocularity 11,43 and suggests that binocularity contributes to optimum planning and execution of visually guided reaching. Fine motor impairments may adversely affect a child's life and may cause difficulties when learning in the classroom, especially in earlier grades when children learn counting and vocabulary by manipulating objects. In later grades, children with strabismus and amblyopia take longer to transfer answers to a multiple-choice form, 18 which could affect performance on timed, standardized tests. Evidence shows that motor impairments are associated with low self-esteem and self-perception in amblyopic children. [53][54][55] Treating the visual acuity deficits and binocular dysfunction that accompany amblyopia and strabismus may help improve fine motor ability. 56 Our study had limitations. The LMC system has robust temporal resolution, but some problems with spatial resolution remain. Spatial accuracy error ranges between 2 and 5 cm, 32 and thus measures of endpoint accuracy and precision cannot be reliably obtained with the LMC for the small 0.3° (1.8 mm) dot in our study. Nonetheless, we obtained accuracy data by observing whether the dot was touched. Further, the spatial inaccuracy of the LMC makes it difficult to determine if corrective movements to the dot were made; however, we assessed the primary movement, providing an indication of pointing efficiency. This is the endpoint that has been studied previously with strabismic adults and thus allows us to compare to the adult data. 31 Teasing apart the individual contributions of clinical and sensory factors is challenging, as they often coexist with one another (e.g., strabismus, reduced stereoacuity, suppression). 4 Yet, it is evident from our data that the binocular dysfunction typical of strabismus affects eye-hand coordination during visually guided reaching. Last, we were unable to control for experience with motor skills; however, our task was a simple reaching task with which all children will have had experience, regardless of whether they are enrolled in any physical recreational activities. Nonetheless, many of the children in this study participated in physical recreational activities.

CONCLUSIONS
Strabismus affects visually guided reaching in children. Longer total reach duration was due to more time spent in the deceleration phase. Binocular dysfunction was more predictive of slow reaching than severity of amblyopic eye visual acuity.
Unlike adults with strabismus who show longer acceleration duration, longer deceleration in the final approach in strabismic children indicates a difference in control that could be due to reduced ability to use visual feedback. Understanding factors associated with eye-hand coordination deficits in strabismus may help guide development of more effective screening and interventions to prevent or ameliorate motor impairments in strabismic children.
The influence of high pressure on the crystalline and magnetic structure of Ba2FeMoO6

The behavior of the crystalline and magnetic structure of the Ba2FeMoO6 compound was studied in a wide pressure range, from 0 to 4.7 GPa. The crystal structure of the ceramic sample was described in the framework of SG I4/mmm (No. 139) and contains less than 10% anti-site defects. No change of the tetragonal structure (I4/mmm) was observed over the whole measured pressure range. A multidirectional influence of ambient pressure on the average metal-ligand interionic distances in the FeO6 and MoO6 oxygen octahedra was demonstrated. Coefficients of linear and all-round (volume) compressibility were determined for the tetragonal structure of Ba2FeMoO6. The influence of ambient pressure on the magnetic moment of the iron sublattice was shown.

Introduction
Oxides of double perovskites A2M'M''O6 (where A = Ba, Sr, Ca, ...; M' = Fe, Cr, ...; M'' = Mo, W, Re, ...) and their solid solutions attract the attention of researchers due to the variety of physical properties associated with their chemical composition and with the ordering of cations in the crystal structure. In general, the cations M' and M'' can be distributed randomly over their sublattices; however, if there is a large difference in charge or ionic radius between them, they can become ordered. In this case, the crystal structure of these materials contains M'O6 and M''O6 oxygen octahedra arranged in a staggered order, where M' and M'' are cations of different chemical elements [1]. However, the appearance of anti-site defects (ASD) in the crystal structure of these materials can significantly influence their functional properties. This sort of structural disorder occurs when M' ions are located at M'' sites and vice versa. Several earlier theoretical and experimental studies have shown a clear connection between the magnetic properties of SFMO (Sr2FeMoO6) and the anti-site disorder. The saturation magnetization [2,3,4,5] and the Curie temperature [3,6] have been seen to decrease with an increasing amount of ASD. In addition to lowering TC and Ms, ASD reduces the band gap in the majority spin channel, and this effect is large enough that the half-metallic feature of SFMO is lost [7,8,9]. Double perovskites have higher values of the Curie temperature [10,11,12], and their tunnel-type magnetoresistive properties appear in a weak range of magnetic fields near room temperature [12,13], as opposed to ordinary lanthanum manganite perovskites. Another feature of these materials is their ferromagnetic half-metallic properties, which have potential for application in spintronic electronic devices [12,14]. The spin polarization of carriers, controlled by the magnetic field, makes the double perovskite a conductor for electrons with spin oriented up (or down) and an insulator for the other charge carriers with the oppositely directed spin orientation. It is possible to influence the spin polarization of carriers in double perovskites by controlling the concentration of anti-site defects or by creating "chemical pressure"; for example, the incommensurate substitution of ions in the A-sublattice influences the bond lengths and angles among the metal and oxygen ions and, consequently, the orbital overlap of the ions located at the M' and M'' sites. Another way to control the spin polarization of the charge carriers of these materials is external action, for example ambient temperature and pressure.
The main aim of this work is to ascertain the regularities of the influence of high pressure (up to 4.7 GPa) on the features of the crystalline and magnetic structure of the double perovskite Ba2FeMoO6. A main feature of this work is the application of the neutron diffraction method, which allowed us to collect information about both the crystalline and the magnetic structure of the sample during one experiment and over a wide pressure range (from 0 to 4.7 GPa).

Methods and methodology
The ceramic sample of Ba2FeMoO6 was fabricated by the solid-state method from a mixture of Fe2O3 and MoO3 oxides of analytical-grade purity and BaCO3 carbonate taken in the appropriate ratios. The initial mixture was synthesized at 900 °C (4 h) in air and then annealed at 1200 °C (10 h) under an H2/Ar stream. After annealing, the sample was slowly cooled (~100 °C/h). The phase composition was determined by X-ray diffraction using an Empyrean diffractometer (PANalytical) with Cu-Kα radiation at room temperature. Structural studies at high pressures (up to 4.7 GPa) were performed using a high-pressure chamber with sapphire anvils on the DN-6 neutron diffractometer [15] (Dubna, Russia). A 1 mm³ sample was placed in a hole drilled in the center of the anvil, which allowed a quasi-hydrostatic pressure distribution on the sample. The unevenness of the pressure distribution across the surface of the test sample in anvils with such holes usually does not exceed 15%. The magnitude of the applied pressure was measured from the shift of the ruby fluorescence line [16] (doublet at 6942 and 6927 Å) to within 0.05 GPa. The neutron diffraction spectra were measured at scattering angles 2Θ = 90° and 45°. For these scattering angles, the resolution of the diffractometer at wavelength λ = 2 Å was Δd/d = 0.02 and 0.025, respectively. A typical measurement time for one spectrum was 12 hours. Analysis of the neutron diffraction patterns by the Rietveld method was carried out with the software package FullProf [17], using the built-in tables for coherent scattering lengths and magnetic form factors. Measurements of the magnetization of the Ba2FeMoO6 sample in the temperature range 77-900 K and in a magnetic field of 0.86 T were carried out using a universal automated setup.

Magnetic properties
The behavior of the specific magnetization shown in Fig. 1 characterizes the response of the magnetic properties of Ba2FeMoO6 to a change of ambient temperature. The perovskite sample is in a saturated state at low temperatures. A gradual increase of the ambient temperature leads to a violation of the long-range magnetic ordering in the sample because of the increasing thermal motion of ions, and the magnetization decreases. When the temperature reaches TC ≈ 317 K, the energy of thermal motion becomes comparable with the energy of the exchange interaction; as a consequence, the magnetic order is destroyed and the Ba2FeMoO6 sample passes from the magnetically ordered to the paramagnetic state.

Crystal structure
According to the X-ray diffraction data, the Ba2FeMoO6 sample is homogeneous. An analysis of the crystal structure of Ba2FeMoO6 was carried out by the Rietveld method. The results of the analysis are shown in Fig. 2. At room temperature, Ba2FeMoO6 has a tetragonal structure with the sp. gr. I4/mmm (No. 139). The scheme of the unit cell of the tetragonal structure is shown in Fig. 3a.
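For context on the pressure calibration described above, the measured ruby R1-line shift is commonly converted to pressure with a Mao-type hydrostatic ruby scale. Whether the paper's reference [16] uses exactly this parameterization is an assumption; the sketch below is purely illustrative:

```python
# Ruby fluorescence pressure gauge, Mao-type hydrostatic calibration:
# P = (A/B) * [ (lambda/lambda0)^B - 1 ],  A = 1904 GPa, B = 7.665.
A_GPA = 1904.0
B_EXP = 7.665
LAMBDA0_A = 6942.0  # unstressed R1 line (angstroms), per the doublet quoted in the text

def ruby_pressure_gpa(lambda_a: float) -> float:
    """Pressure (GPa) from the measured R1 wavelength (angstroms)."""
    return (A_GPA / B_EXP) * ((lambda_a / LAMBDA0_A) ** B_EXP - 1.0)

# Near ambient the line shifts by roughly 3.65 A per GPa, so a +8.7 A shift
# corresponds to roughly the middle of the paper's 0-4.7 GPa range:
print(f"{ruby_pressure_gpa(6950.7):.2f} GPa")  # ~2.4 GPa
```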
An important characteristic that determines the functional properties of double perovskites is the concentration of ASD, which can be determined from the ratio of integrated intensities of the diffraction peaks, I(101)/(I(200) + I(112)) [18]. It should be noted that the (101) diffraction peak in the double perovskite structure is superstructural and is associated with the alternating Fe/Mo ordering. In a completely disordered structure, when half of the iron ions occupy positions in the molybdenum sublattice and vice versa, the (101) diffraction peak is absent. Examples of X-ray patterns corresponding to fully ordered and disordered structures were simulated in the FullProf program; they are shown in the inset of Fig. 2. The degree of structural disorder was determined similarly to the procedure described in [19]. According to Eq. (1), the concentration of anti-site defects (ASD) in Ba2FeMoO6 is less than 10%.

Crystal structure under high pressure
The investigation of the atomic structure under the influence of high pressures allows ascertaining the relationship among changes of the structural parameters, interatomic distances, magnetic structure, and macroscopic properties, which is necessary for understanding the nature and mechanisms of the physical phenomena of double perovskites. Examples of neutron diffraction patterns of Ba2FeMoO6 obtained at a temperature of 290 K in the range of applied pressures from 1 to 4.7 GPa are shown in Fig. 4. No qualitative change of the neutron patterns of Ba2FeMoO6 was observed in the entire range of applied external pressures, which indicates the absence of a change in the type of crystal and magnetic structure. An increase of the external pressure leads to a monotonic decrease of the unit-cell parameters (see Fig. 5). The results of Rietveld refinement of the neutron diffraction patterns of Ba2FeMoO6 at different external pressures are shown in Table 1. In order to describe the equation of state of Ba2FeMoO6, a linear dependence corresponding to the Birch-Murnaghan equation for solids with low compressibility was used, where x = (V/V0) is the relative volume change, V0 is the unit-cell volume at P = 0, and B0 and B' are empirical parameters that have the meaning of the bulk modulus in the equilibrium state (B0 = -V·(dP/dV)|V=V0) and its first-order pressure derivative (B' = dB0/dP), respectively. The linear parameters of the compressibility of the unit cell of the tetragonal structure were calculated from the corresponding formula. The effect of external pressure on the metal-oxygen bond lengths in Ba2FeMoO6 in the tetragonal symmetry structure I4/mmm (No. 139) is shown in Fig. 7 and in Table 2. This crystal structure consists of two types of asymmetrically distorted oxygen octahedra, FeO6 and MoO6, arranged in a staggered order (see Table 2). An increase of the external pressure leads to a reduction of the Fe-O interionic distances and, conversely, to a slight increase of the Mo-O interionic distances (Fig. 7, a and b).

Magnetic structure
Electroneutrality in double perovskites can be realized by two possible valence states of the iron and molybdenum ions: (1) Fe2+ (3d6) and Mo6+ (4d0) or (2) Fe3+ (3d5, S = 5/2) and Mo5+ (4d1, S = 1/2), respectively. In the first case, the magnetic moment of the molybdenum ions is μ(Mo6+) = 0 μB; therefore, the total magnetic moment of this material will be determined by the nominal value of the ordered magnetic moments of the ions in the iron sublattice, μ(Fe2+) = 4 μB.
In the second case, the magnetic structure of the double perovskite should be formed by the ferromagnetic ordering of the magnetic moments of the iron ions in the FeO6 octahedra and by the antiferromagnetic ordering of the magnetic moments of the molybdenum ions in the MoO6 octahedra, with nominal values μ(Fe3+) = 5 μB and μ(Mo5+) = 1 μB, respectively. Therefore, in the fully magnetically ordered state (in the low-temperature region), the total magnetic moment of Ba2FeMoO6 should be 4 μB/f.u. The observation of the diffraction spectra of the sample over a wide range of pressures (from 0 to 4.7 GPa) allows one to analyze the change of its magnetic structure. No additional superstructural magnetic reflections are observed in the neutron diffraction patterns (Fig. 4). This indicates that, in the case of a ferrimagnetic structure, its wave vector is k = [0, 0, 0]. The dependences of the magnetic moments of the sublattices of the Ba2FeMoO6 compound on ambient pressure are shown in Fig. 8: (a) for the ferromagnetic and (b) for the antiferromagnetic model. For the ferromagnetic model, the magnetic moment of the Mo sublattice is equal to 0, whereas the magnetic moment of the Fe sublattice monotonically increases from 0.15 μB/f.u. to 1.17 μB/f.u. with an increase of ambient pressure from 0 to 4.7 GPa. The magnetic moments of the iron sublattice are ordered along the easy magnetization axis, which coincides with the tetragonal axis. The scheme of the ferromagnetic structure of Ba2FeMoO6 is shown in the inset of Fig. 8a. In the case of the antiferromagnetic structure, the magnetic moments of the molybdenum sublattice are ordered in the direction opposite to the iron sublattice. The scheme of the antiferromagnetic structure of Ba2FeMoO6 is shown in the inset of Fig. 8b.

Conclusions
The behavior of the crystalline and magnetic structures of the double perovskite Ba2FeMoO6 has been investigated by the neutron diffraction method in a wide range of ambient pressures (0-4.7 GPa). An increase of ambient pressure does not lead to a change of the type of the crystalline and magnetic structures. It has been determined that a ceramic sample of Ba2FeMoO6 prepared by the solid-state method under an H2/Ar stream has an ordered tetragonal structure with sp. gr. I4/mmm (No. 139) and contains less than 10% anti-site defects. The influence of ambient pressure on the average metal-ligand interionic distances in the FeO6 and MoO6 oxygen octahedra was shown.
Pain Forecasting using Self-supervised Learning and Patient Phenotyping: An attempt to prevent Opioid Addiction

Sickle Cell Disease (SCD) is a chronic genetic disorder characterized by recurrent acute painful episodes. Opioids are often used to manage these painful episodes; the extent of their use in managing pain in this disorder is an issue of debate. The risk of addiction and the side effects of these opioid treatments can often lead to more pain episodes in the future. Hence, it is crucial to forecast future patient pain trajectories to help patients manage their SCD and improve their quality of life without compromising their treatment. It is challenging to obtain many pain records for designing forecasting models, since pain is mainly recorded by patients' self-report. Therefore, it is expensive and painful (due to the need for patient compliance) to solve pain forecasting problems in a purely supervised manner. In light of this challenge, we propose to solve the pain forecasting problem using self-supervised learning methods. Also, clustering such time-series data is crucial for patient phenotyping, anticipating patients' prognoses by identifying "similar" patients, and designing treatment guidelines tailored to homogeneous patient subgroups. Hence, we propose a self-supervised learning approach for clustering time-series data, where each cluster comprises patients who share similar future pain profiles. Experiments on five years of real-world data show that our models achieve superior performance over state-of-the-art benchmarks and identify meaningful clusters that can be translated into actionable information for clinical decision-making.

I. INTRODUCTION
Approximately half of US adults (117 million people) are chronically ill, and one in four has multiple chronic conditions (Ward et al. 2014). Patients with chronic diseases have to follow complex treatment regimens, making their care much more costly than that of a healthy individual. Not surprisingly, 86% of all US healthcare spending is used to treat chronically ill patients [1]. In this paper, we study Electronic Health Records (EHR) data from a cohort of patients with Sickle Cell Disease (SCD). SCD is a chronic lifelong illness and a disease multiplier, as it often goes hand-in-hand with other chronic conditions. Although severe acute pain episodes that correspond with vaso-occlusive events (VOEs) are the hallmark of SCD, a majority of patients with SCD report experiencing pain on most days [2]. Opioids are often used in the management of these painful episodes; the extent of their use in the management of pain in this disorder is an issue of debate. Some physicians advocate minimal use of opioid drugs for fear of addiction [3], [4], while others believe that the underuse of these medications in the control of pain may result in pseudoaddiction [5], [6]. There have also been reports in the literature of substance abuse by SCD patients [7]-[9], though it is believed that the rate of drug abuse by SCD patients is not different from that of the general public [10]. In this paper, we address this issue by forecasting future pain trajectories for patients so that clinicians can plan a suitable and balanced pain management strategy, preventing risk of addiction. The current clinical standard involves patient self-report for an estimate of pain, which can be highly subjective. There can be significant inter-individual variation in pain scores reported in response to the same stimulus.
It can also be influenced by a confluence of factors such as mood and energy level at the time of the pain report. Recent studies have demonstrated that machine learning models can be used to estimate these subjective pain levels from objective physiological signals [11]-[16], facial expressions [17], [18], and activity and motion tracking [19], [20], with promising results. In this paper, we address the problem of estimating future subjective pain scores in patients with SCD using six objective physiological measures collected by nurse coordinators in a hospital. We propose a predictive-clustering-based approach [21] to combine predictions of future outcomes with clustering, as illustrated in Figure 1.
Illustration of the proposed approach for long-term pain management: We divide the entire dataset into bins corresponding to years. In this case, we divide the dataset into five bins, as we had data from five consecutive years. Next, we apply a clustering algorithm to each of the bins to generate ground-truth cluster labels for each patient in each year. As shown in Figure 1, two patients (black circle and red square) belong to two different clusters in year 1. However, they might behave in a similar pattern in year 2 and get clustered together. Again, in year 3, their disease progression (vitals, pain) might be different, and they are clustered separately. For long-term chronic pain management, we propose to forecast these future cluster alignments of patients. A clinician can utilize the information from such temporal patient phenotyping to design an appropriate pain medication prescription for a patient, so as to treat the patient and avoid any risks of addiction to pain medications.
(Figure 1: An illustration of our predictive clustering for yearly patient phenotype forecasting. Input: the vital signs data and cluster labels for a year; output: the cluster assignment forecast for a future year.)
We utilize 51718 time-series Electronic Health Records (EHR) data points from 498 participants at Duke University Hospital over five consecutive years. As the dataset had one or more missing values for each time stamp, we first design a clinically motivated data interpolation method. Then we evaluate supervised and self-supervised machine learning algorithms for short-term pain forecasting and our proposed long-term patient phenotype forecasting. Our results show that a variational autoencoder based learning method outperforms the other methods in both cases. In summary, we make the following contributions:
• We propose a clinically motivated data interpolation approach that accounts for the irregularity of clinical visits and inconsistencies in data records.
• We show that self-supervised learning-based methods perform best when forecasting future pain at a short-term (hourly) horizon.
• We propose a patient phenotyping approach based on dynamic time warping distance and self-supervised learning for long-term (yearly) patient subgroup/profile forecasting.
• We show in a case study that our self-supervised patient phenotyping approach is able to capture the interplay between multiple vital signs influencing the evolution of future patient pain profiles.
While the proposed approach is applied to patients with SCD, it can be readily applied to other chronic diseases, such as diabetes or cystic fibrosis, due to their similar data structure.

II. RELATED WORK
Recently, studies on pain forecasting have been gaining attention.
There have been attempts to forecast pain, specifically postoperative pain, using data other than physiology and activity measurements. Tighe et al. [22] explored various classification algorithms to forecast whether a patient was at risk for moderate to severe postoperative pain on postoperative day 1 and day 3, using 796 clinical variables from Electronic Medical Records (EMRs) in a retrospective cohort of 8,071 surgical patients. In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy, with an area under the receiver-operating curve (ROC) of 0.704. Next, the gradient-boosted decision tree had a ROC of 0.665, and the k-nearest neighbors (k-NN) algorithm had a ROC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with a ROC of 0.727. Logistic regression had a lower ROC of 0.5 for predicting pain outcomes on POD 1 and 3. The same group also developed a model based on RNNs to forecast pain levels after administering specific pain medication, trained on pain score patterns [23]. Pain forecasting is a critical issue in pain management in SCD. Based on recent advancements in the field and previous works from our group [24], we believe designing data-driven self-supervised learning models based on physiological data is a promising approach for pain forecasting. Temporal clustering, also known as time-series clustering, is a process of unsupervised partitioning of time-series data into clusters in such a way that homogeneous time series are grouped together based on a certain similarity measure. Temporal clustering is challenging because (i) the data are often high-dimensional, consisting of sequences not only with high-dimensional features but also with many time points, and (ii) defining a proper similarity measure for time series is not straightforward, since such measures are often highly sensitive to distortions [25]. To address these challenges, there have been various attempts to find a good representation with reduced dimensionality or to define a proper similarity measure for time series [26]. Recently, [27] and [28] proposed temporal clustering methods that utilize low-dimensional representations learned by recurrent neural networks (RNNs). These works are motivated by the success of applying deep neural networks to find "clustering friendly" latent representations for clustering static data [29], [30]. In particular, Baytas et al. [27] utilized a modified LSTM auto-encoder to find latent representations that effectively summarize the input time series and conducted k-means on top of the learned representations as an ad hoc process. Similarly, Madiraju et al. [28] proposed a bidirectional-LSTM auto-encoder that jointly optimizes the reconstruction loss for dimensionality reduction and the clustering objective. However, these methods do not associate a target property with the clusters and thus provide little prognostic value about the underlying disease progression. Recently, Lee et al. [21] addressed this issue by proposing a temporal predictive clustering approach. Along a similar direction, we design our long-term patient phenotype forecasting.

III. DATA PREPARATION
For chronic diseases, knowledge discovery in clinical time-series data is expected to play a potent role.
The reason is that it is humanly impossible, even for experienced medical doctors, to directly extract the tendencies of a great deal of multivariate time series. In order to obtain knowledge on changes in symptoms, time-series analysis should be applied to clinical time-series data. However, the unfavorable characteristics of clinical time-series data make it impossible to apply conventional time-series analysis methods that assume regular sampling. Patients' clinical examination records basically have irregular intervals, caused by irregular visits of the patient for clinical examination and by the intensive execution of clinical examinations when the patient's illness becomes worse. The intervals of clinical time-series data need to be made equal by interpolation before time-series analysis methods can be applied. The simplest interpolation methods are averaging or linear regression over a constant period [31]. These methods slide a time window of constant width and calculate the average or the estimate by linear regression on the points in each window. They regard it as an interpolated point, namely a set of an equally spaced time point and the estimated value corresponding to that time point. Consequently, the time intervals are made equal via this process. However, since symptoms differ among patients and are non-stationary for each patient, these methods raise some problems: how the width and overlap of a time window should be determined, and whether it is proper to fix the width and overlap at all. We therefore propose a clinically motivated interpolation method for this purpose, as a three-step procedure (a code sketch follows at the end of this section): (1) first, we extracted the visit information of each record in our dataset following the definitions by Padhee et al. [12]; (2) second, we modified the time grid, rounding up to the nearest hour; and (3) third, within each visit, we applied linear interpolation within a 2-hour window, based on clinical-relevance advice from our co-author clinician. In this study, we utilized 51718 records from 498 participants at Duke University Hospital over a maximum of five consecutive years. Each record contained measures for six vital signs: (i) peripheral capillary oxygen saturation (SpO2), (ii) systolic blood pressure (SystolicBP), (iii) diastolic blood pressure (DiastolicBP), (iv) heart rate (Pulse), (v) respiratory rate (Resp), and (vi) temperature (Temp). Along with the vital signs, each record also included the patient's self-reported pain score, with an ordinal range from 0 (no pain) to 10 (severe and unbearable pain). The data were de-identified using study labels to label the patients without identification. The timestamp for each data entry was also de-identified, preserving temporality. The dataset had missing values for one or more of the vital signs and the pain score. Table I shows that we significantly reduced the proportion of missing values in our dataset while preserving the clinical semantics.
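A minimal sketch of the three-step interpolation described above, assuming a pandas DataFrame with a timestamp column and a precomputed visit identifier (the visit-extraction step itself follows Padhee et al. [12] and is not reproduced); column and file names are hypothetical:

```python
import pandas as pd

def interpolate_visit(df: pd.DataFrame) -> pd.DataFrame:
    """Round one visit's records up to an hourly grid, then linearly interpolate
    vitals/pain across gaps of at most 2 hours (per the paper's clinical advice)."""
    df = df.copy()
    # Step 2: snap timestamps up to the nearest hour.
    df["time"] = df["time"].dt.ceil("H")
    hourly = (
        df.set_index("time")
          .sort_index()
          .groupby(level=0).mean(numeric_only=True)  # collapse duplicate hours
          .resample("1H").asfreq()                   # insert the missing hours as NaN
    )
    # Step 3: linear interpolation, limited to gaps of <= 2 consecutive hours.
    return hourly.interpolate(method="linear", limit=2, limit_area="inside")

# Step 1 (visit extraction) is assumed to have produced a "visit_id" column:
# records = pd.read_csv("scd_ehr.csv", parse_dates=["time"])  # hypothetical file
# interpolated = records.groupby("visit_id", group_keys=False).apply(interpolate_visit)
```

Interpolating within visits only, rather than across them, avoids inventing values for the long unobserved stretches between hospital encounters.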
IV. EXPERIMENTS AND RESULTS
We design our experiments in a two-stage framework consisting of (1) short-term pain forecasting and (2) long-term patient phenotype forecasting. In this section, we discuss both modules in detail.

A. Short-term pain forecasting
We design two scenarios to evaluate multiple supervised and self-supervised machine learning algorithms for short-term pain forecasting: (1) an individualized patient scenario and (2) a mixed patient scenario. In the individualized patient scenario, we included past and future sequences from the same patient as input and output. In the mixed patient scenario, we combined all the patient data: a past sequence from a patient is the input, and the output is the forecast on a future sequence from a random patient, including patients in the input (training) set. We implemented six models to predict future pain scores: a random forest (RF) regression baseline, ARIMA, MLP, LSTM, and two self-supervised models (CPC- and VAE-based), all of which are compared in the results. An ARIMA model has three parameters: p, q, and d. Significant lags at 5 in the autocorrelation and partial autocorrelation function plots (Figure 2), extending beyond the dashed blue lines, indicated poor model fit. We set the grid-search range for the p and q parameters to 0 through 5. Thus, we applied a grid search by passing integer values in the range [0, 5] for both p and q and settled on a value of 1 for both parameters, with an AIC of 584.9 for the model; both the ACF and PACF plots then showed no significant lags (Figure 3). Based on the ADF test [32], we found the pain-score time series to be non-stationary, requiring one order of differencing (d = 1). The MLP and LSTM models were trained on physiology and pain scores on the interpolated time-series dataset. We adjusted the hyper-parameters (layers, neurons, batch size, and epochs) in the MLP and LSTM models to get an idea of the range (minimum and maximum) of the parameter values: hyper-parameter values above and below which we saw no improvement in the model's fit provided the maximum and minimum of the range. Next, we varied the hyper-parameters within the obtained range and evaluated model performance. We validated the model training on the test dataset. Since we did not want to give the memory-based LSTM model an advantage over the memory-less MLP model, we tested the same range of hyper-parameters for both the MLP and LSTM models. After obtaining the best set of hyper-parameters, we evaluated model performance on these best hyper-parameters 40 times, as there is run-to-run variability in the model's output on training data. We took the model with the least objective-function value among the 40 runs as the final prediction from the model. For training both models (MLP and LSTM), we used the ReLU activation function. These models were created in Keras using the TensorFlow backend. In self-supervised learning, the network is trained to predict future physiological data from extensive unlabeled past physiological data. During the training process, our self-supervised learning network learned latent representations that were used to infer future pain states using a regression model. We used an architecture similar to Yang et al. [24] to learn representations from physiological signals using a CPC network, and a VAE architecture from our previous work [33]. Specifically, we used a three-layer Convolutional Neural Network (CNN) [34] as the encoder in the CPC model. We then used a gated recurrent unit (GRU) based Recurrent Neural Network (RNN) [34] for the autoregressive part of the model, with 64-dimensional hidden states. The output c_t of the GRU-based RNN model is then used as the feature for the pain forecasting task. The pretext-task network was trained using the Adam optimizer with a batch size of 128 and a learning rate of 10^-3. The network structure and hyper-parameters were tuned based on experiments to maximize the accuracy of the pretext task.
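A minimal sketch of the pretext network shape described above (three-layer 1-D CNN encoder feeding a GRU with 64-dimensional hidden states), assuming PyTorch; the kernel sizes and channel counts are illustrative guesses rather than the authors' values, and the contrastive pretext loss is omitted:

```python
import torch
import torch.nn as nn

class CPCEncoder(nn.Module):
    """Three-layer 1-D CNN encoder + GRU autoregressor, echoing the paper's shape.
    Channel counts and kernel sizes are assumptions, not the authors' values."""
    def __init__(self, n_vitals: int = 6, z_dim: int = 64, c_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_vitals, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, z_dim, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.gru = nn.GRU(z_dim, c_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, vitals) -> Conv1d expects (batch, channels, time)
        z = self.encoder(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, z_dim)
        c, _ = self.gru(z)                                   # (batch, time, c_dim)
        return c[:, -1, :]  # c_t: context vector summarizing the past sequence

# Usage: c_t becomes the input feature of a downstream regressor (the paper
# feeds it to a random forest to predict a future raw pain score).
model = CPCEncoder()
past_vitals = torch.randn(8, 24, 6)  # 8 sequences x 24 hourly steps x 6 vitals
c_t = model(past_vitals)             # shape: (8, 64)
```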
Next, we trained a regression model to predict future pain values using the learned representations (the output of the CPC and VAE networks) as input features. To summarize, we fed a past physiological signal sequence into the trained self-supervised network to generate a latent representation, which is used as the input feature of a regression model to predict the pain score reported at a future time step. In the downstream task, we trained a regression model to predict future pain values using the latent context representation c_t (the output of the autoregressive network) as the input feature. Specifically, a past vital-signs sequence was fed into the trained CPC network to generate the context latent representation c_t. Then c_t was used as the input feature of a regression model to predict raw (not interpolated) pain scores reported at a future time step. Due to the lack of pain score labels (as we used raw pain scores instead of the interpolated pain scores for prediction), we utilized random forest [35] as the supervised regression model for pain forecasting. We chose this model because ensemble methods are more robust and have advantages in dealing with small sample sizes [36].

B. Long-term patient phenotype forecasting
Clustering is an unsupervised learning process where an algorithm brings similar data points closer without any "ground truth" labels. The similarity between data points is measured with a distance metric, commonly Euclidean distance. In general, the Euclidean distance metric (or another type of Minkowski metric) is used to find an average of all the data within the clusters. However, its one-to-one mapping nature cannot capture the average shape of two time series, in which case Dynamic Time Warping (DTW) [37] is more favorable. Clustering different time-series data is challenging, as each data point is an ordered sequence. Classically, the most common approach involves flattening the time series (sequence) into a table, with a column for each time point (or an aggregation of the entire sequence), and applying standard clustering algorithms like k-means. However, these clustering algorithms use standard measures such as Euclidean distance, which is often not the best for time series (ordered sequences). Hence, we replace the default distance measure with DTW, which can measure the similarity between two temporal sequences that do not align with each other rigidly in time, speed, or length. Unlike the Minkowski distance function, dynamic time warping breaks the one-to-one alignment limitation and supports time series of unequal length. It uses a dynamic programming technique to find all possible paths and selects the one that yields a minimum distance between the two time series, using a distance matrix in which each element is a cumulative distance built from the minimum of the three surrounding neighbors. Suppose we have two time series, a sequence Q = q_1, q_2, ..., q_i, ..., q_n and a sequence C = c_1, c_2, ..., c_j, ..., c_m. First, we create an n × m matrix, where every (i, j) element of the matrix is the cumulative distance: the distance at (i, j) plus the minimum of the three elements neighboring the (i, j) element, where 0 < i ≤ n and 0 < j ≤ m.
B. Long-term patient phenotype forecasting

Clustering is an unsupervised learning process in which an algorithm brings similar data points closer together without any "ground truth" labels. The similarity between data points is measured with a distance metric, commonly Euclidean distance. In general, the Euclidean distance metric (or another Minkowski-type metric) is used to find an average of all the data within a cluster. However, its one-to-one mapping cannot capture the average shape of two time series, in which case Dynamic Time Warping (DTW) [37] is more suitable. Clustering time-series data is challenging because each data point is an ordered sequence. Classically, the most common approach flattens the time series into a table, with a column for each time point (or an aggregation of the entire sequence), and applies standard clustering algorithms such as k-means. However, these algorithms use standard measures such as Euclidean distance, which is often not the best choice for ordered sequences. Hence, we replace the default distance measure with DTW, which can measure the similarity between two sequences that do not align rigidly with each other in time, speed, or length. Unlike a Minkowski distance function, dynamic time warping breaks the one-to-one alignment limitation and supports time series of unequal length. It uses dynamic programming to consider all possible warping paths and selects the one that yields the minimum distance between the two sequences, using a distance matrix in which each element is the cumulative distance formed from the minimum of its three surrounding neighbors.

Suppose we have two time series, a sequence $Q = q_1, q_2, \ldots, q_i, \ldots, q_n$ and a sequence $C = c_1, c_2, \ldots, c_j, \ldots, c_m$. First, we create an $n \times m$ matrix in which every $(i, j)$ element is the cumulative distance of the local distance at $(i, j)$ and the minimum of the three elements neighboring $(i, j)$, where $0 < i \le n$ and $0 < j \le m$. We can define the $(i, j)$ element as

$$e_{i,j} = d_{i,j} + \min\left(e_{i-1,\,j-1},\; e_{i-1,\,j},\; e_{i,\,j-1}\right), \qquad d_{i,j} = (q_i - c_j)^2,$$

where $e_{i,j}$ is the sum of the squared distance between $q_i$ and $c_j$ and the minimum cumulative distance of the three elements surrounding $(i, j)$. Then, to find an optimal path, we choose the path that gives the minimum cumulative distance at $(n, m)$. The distance is defined as

$$\mathrm{DTW}(Q, C) = \min_{P} \sqrt{\sum_{k=1}^{K} w_k},$$

where $P$ is the set of all possible warping paths, $w_k$ is the matrix element $(i, j)_k$ visited at the $k$-th step of a warping path, and $K$ is the length of the warping path.

We compute the cluster centroids with respect to DTW by minimizing the sum of squared DTW distances between the cluster centroid and the series in the cluster. We apply k-means clustering to each year of patient data and generate cluster labels for all patients for each year. Next, we use the cluster labels as an additional feature in our pain forecasting models to predict the future cluster label (ground truth) for the following year.
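A direct implementation of this recurrence is short; the sketch below mirrors the cumulative-distance matrix defined above and returns the DTW distance of the optimal path.

import numpy as np

def dtw_distance(q, c):
    """DTW via the cumulative recurrence e[i,j] = d[i,j] + min of three neighbors."""
    n, m = len(q), len(c)
    e = np.full((n + 1, m + 1), np.inf)
    e[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (q[i - 1] - c[j - 1]) ** 2        # squared local distance d_ij
            e[i, j] = d + min(e[i - 1, j - 1],    # diagonal neighbor
                              e[i - 1, j],        # vertical neighbor
                              e[i, j - 1])        # horizontal neighbor
    return float(np.sqrt(e[n, m]))

print(dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 3, 4]))  # 0.0: same shape after warping

For the clustering itself, off-the-shelf implementations are also available; for example, tslearn's TimeSeriesKMeans accepts metric="dtw" and computes DTW-based centroids.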
V. RESULTS

Tables II and III show the MAE and R² for forecasting pain scores with the mixed-patient and individualized-patient models, respectively. We present the results using the best predictions from 40 runs of each model. For mixed-patient forecasting, we combined all the patient data: the training data consisted of past sequences from the patients, and we generated the forecast on a future sequence from a random patient, with that patient included in the training set. In the individualized-patient experiment, we included past and future sequences from the same patient. Next, for each year, we applied k-means clustering with DTW distance to all patients' physiology and pain-score data and obtained an optimal number of seven clusters [Normalized Mutual Information (NMI) score of 0.35, purity score of 0.67, Silhouette Index of 0.12]. We treat the generated cluster labels as ground-truth labels for each patient for each year. Our goal is to understand whether we can predict the future cluster alignment of patients. Hence, we train the same models discussed above on a past sequence of physiology data, pain scores, and cluster labels to forecast the cluster label for the following year. We report the AUROC of our models with raw pain scores as test data for years 2, 3, 4, and 5 in Table IV. The training set for each year comprises the data of all previous years; that is, for the year-5 forecast, we trained models on interpolated data from years 1-4 and tested on the original data available for year 5. The best MAE (0.58) on test data was obtained by our LSTM-based VAE model, which contained 2 hidden layers with 4 neurons each and was trained with a batch size of 20 for 30 epochs. In general, all models achieved lower MAE and higher R² scores in the individualized setting. As seen in the tables, both the MLP and LSTM models outperformed the RF regression model (baseline) as well as the ARIMA model, and the self-supervised LSTM-based VAE model performed best among all models. We report the area under the receiver operating characteristic curve (AUROC) for evaluating our long-term cluster forecasting models and compare the performance of our best-performing models from short-term forecasting, as shown in Table IV. First, as expected, our LSTM-based self-supervised VAE network performed best in long-term pain forecasting. Second, each model performed best when forecasting the cluster assignment for year 5, as it had the most data to learn from (years 1-4).

VI. DISCUSSION OF RESULTS

The results presented in the previous section led to the findings below. The primary objective of this research was short-term pain forecasting and evaluating the performance of existing statistical (ARIMA), supervised neural (MLP and LSTM), and self-supervised (CPC, VAE) models in individualized and mixed patient scenarios. Another objective of this paper was to systematically evaluate a predictive-clustering-based approach for long-term pain forecasting. Overall, we expected the self-supervised models to perform better than the statistical and supervised neural models. First, as per our expectation, the best performance in terms of error was obtained by the VAE-trained network, followed by the CPC-trained network, LSTM, MLP, ARIMA, and RF regression models. A likely reason for these results is that ARIMA models cannot capture the non-linearities present in the time-series data, so they tended to perform worse than the other models. The neural network models (MLP and LSTM) performed similarly to each other and better than the persistence and ARIMA models, likely because our dataset is non-linear and neural networks, by design, can account for non-linear trends. However, another reason for this result could simply be that the self-supervised network models possess many weights (parameters), whereas the ARIMA model possesses only three.

A. Changing Patient Phenotype over Time: Long-term Forecasting

In this subsection, we demonstrate run-time examples of how our predictive clustering approach flexibly updated cluster assignments over time with respect to the pain in the following year. We present a case study of six representative patients, discussed below and shown in Figure 4.

• Patient A had a mild average pain score in the first year. In the second year, he/she had moderate average pain (pain score 4-7). Our clustering model predicted the temporal phenotype assigned to this patient to be similar to that of patient F, who also had moderate average pain (pain score 4-7). As shown in Figure 5, the systolic blood pressure of both patients follows a similar (decreasing) trend. Furthermore, our clustering algorithm phenotyped patient A and patient E together in the third year into a cluster predominantly containing low/mild pain scores; both had mild pain in the first year. So, our approach could change the phenotype of patient A from low/mild pain to moderate pain and back again to low/mild pain. We can see from Figure 5 that patients A and E followed a decreasing trend in systolic blood pressure from the first year to the third. Furthermore, our model accurately predicted patient D to be in the same cluster as patients A and F, with moderate pain and decreasing blood pressure, in the second year; patient D had mild pain in the first year and moderate pain in the second year.

• Patient B had an average moderate pain score in the first year, maintained a moderate pain score in the second year, had no pain in the third and fourth years, and had an average moderate pain score in the fifth year. Our clustering model predicted that patient B and patient C belonged to the same cluster in the second year (both had moderate pain). In the third year they were again clustered together, although patient B had no pain and patient C had moderate pain; we observed from Figure 5 that, despite belonging to different pain ranges, they followed a similar trend in systolic blood pressure. Interestingly, our algorithm then allocated them to separate clusters in the fourth year.
While patient B was allocated to a cluster with mixed pain scores, patient C was clustered with patients with moderate pain scores.

B. Limitations

Based on our observations, we would like to point out a common limitation of predictive clustering: the trade-off between clustering performance, which quantifies how homogeneous the data samples are within each cluster and how heterogeneous they are across clusters with respect to the future outcomes of interest (and thus governs interpretability), and prediction performance. The most critical parameter governing this trade-off is the number of clusters. More specifically, increasing the number of clusters gives the predictive clusters greater diversity to represent the output distribution and thus increases prediction performance while decreasing clustering performance. In the extreme, with as many clusters as data samples, the identified clusters become fully individualized; consequently, each cluster loses interpretability, as it no longer groups similar data samples.

VII. CONCLUSION

This paper proposes a self-supervised learning method for short-term and long-term pain forecasting. We evaluated the performance of supervised, self-supervised, and statistical time-series techniques for short-term (hourly) and long-term (yearly) predictions on longitudinal healthcare data. Throughout the experiments, we showed that our self-supervised model achieves superior performance over state-of-the-art methods and identifies meaningful clusters that can be translated into actionable information for clinical decision-making. This work can be extended by incorporating other biological, social, and psychological factors.
2023-10-11T18:44:01.529Z
2023-10-09T00:00:00.000
{ "year": 2023, "sha1": "70a444271ea27fa2983732db2e2646a9ce49f732", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "70a444271ea27fa2983732db2e2646a9ce49f732", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
237652340
pes2o/s2orc
v3-fos-license
Stream network of Hanyang, a city of water

ABSTRACT This study was conducted to investigate the stream network formed in Hanyang and the topographic features of the surrounding areas, for the purpose of identifying the impact of Hanyang's stream network on the formation of the city. To this end, three substudies were conducted: (i) topographic restoration of late Joseon using a digitized map including the stream network of the Joseon period and a cadastral map; (ii) identification and explanation of the discrepancies between the digitally restored stream network and the real stream network by superposing the former upon the latter; (iii) identification of the impact of Hanyang's stream network on its urban structure. The results of this study can be summarized as follows. First, the waterways derived by GIS analysis and those on the 1912 Cadastral Map were verified to generally coincide with each other, but some changed waterways were observed. Second, the watershed and administrative boundaries generally coincided.

Introduction

Hanyang, the capital city of Joseon (the last dynasty of Korea, 1392-1897), known today as Seoul, was located in a basin surrounded by the so-called inner four mountains (Naesasan), namely Baegak, Naksan, Mongmyeok, and Inwang. Many streams flowing down from Naesasan come together in the Middle Stream (today's Cheonggyecheon Stream), which flows through Igansumun (two-arch water gate) and Ogansumun (five-arch water gate) into the Jungnangcheon Stream. After passing Salgoji Bridge, the Jungnangcheon Stream enters the Han River, and the west-east orientation of the stream flow turns back westwards after joining the Han River. Tributaries flowing down from Eungbong, the eastern peak of Baegak, one of the four inner mountains, should be studied within the dynamic structural framework of the streams as natural elements and the city as an anthropogenic element. The tributary-related changes of historical space should hence be studied from a complexly intertwined perspective encompassing the geographic viewpoint of exploring the nature impacting spatial formation and evolution, the viewpoint of historical hindsight, and the viewpoint of urban development.

The purposes of this study are to identify the regional characteristics and to examine the influence of Hanyang's stream network on the administrative area boundaries by comparing it with that of the 1900s. First, a digital channel network of Joseon is derived using a geographic information system (GIS) by restoring the geomorphological features of the late Joseon period, and the cadastral map of 1912 is prepared as CAD data. Second, the digital drainage system assumed to represent Hanyang's channel network in the Joseon period and the real water-stream networks of the 1900s are compared to interpret the regional differences. Third, the impact of the urban channel network on the urban spatial structure is investigated by analyzing the effect of the digital drainage system on the establishment of Hanyang's administrative regions. The study area is the topography of Hanyang, including the 24 tributary channels (Park 2006a) flowing down from the inner four mountains surrounding the city, which appear continuously in earlier maps and records. Based on Juncheon-Sasil, Hangyeong-Jiryak, and Dongguk-Yeoji-Bigo, Park counted 24 tributary streams feeding the stream network of the city.
The study was carried out first by verifying the natural environment of Seoul's historical center, including topography and soil characteristics as well as the aquatic environment such as watersheds and waterways. The geomorphology and water gates were then analyzed using the datasets thus gained. The terrestrial and aquatic environments formed by tributaries in Seoul's historical center were restored using the "1:10,000 Joseon Map Compilation" (1915, 1921) and the "Gyeongseong Map" (1928). Specifically, the city topography was restored using the "1:10,000 Joseon Map Compilation" of 1921 and refined using the "1:10,000 Joseon Map Compilation" of 1915, thus restoring the 1915 topography, which was named the "Restored Map of Seoul 1915" and verified against the Gyeongseong Map of 1928, the oldest and most accurate map of Seoul. All work was done by CAD operations, and geographic/hydrologic analysis was performed using GIS datasets. Previous studies include Park Hyun-wook's basic research on Hanseong's waterways and bridges based on old maps and records (Park 2006b), Enomoto Tadanobu's study on the modernization of Seoul's drainage system (Enomoto 2009), and Koh Ara's study on the evolution of Seoul's waterways, covering digital drainage system restoration and the impact of the drainage system on administrative divisions (Koh 2018). This study builds on the results of these works to discuss the upgraded accuracy of digital restoration, the discrepancies in waterways between the real contours and earlier maps, and the relationship between the drainage system and administrative area boundaries.

Contour restoration and digital drainage system of the Joseon period

The four mountains surrounding Hanyang have elevations ranging from 120 to 342 meters, with Baegak in the north standing highest, followed by Inwang in the west (339 m) and Mongmyeok in the south (265 m), and Naksan in the east being lowest. Despite their different heights, their slopes are similarly sharp. Hanyang is surrounded by mountains and hills all around, opening towards the topographically lower southwestern and eastern areas. All four mountains become lower towards the city core starting from the fortresses, and the stream network naturally becomes denser towards the city core (Kil 2017) (Figure 1). To derive the stream network, the contours must first be restored; however, no contour maps of the Joseon period survive. The oldest map is the "1:10,000 Joseon Map Compilation" of 1915, and there are also the "1:10,000 Joseon Map Compilation" of 1921 and the Gyeongseong Map of 1928, of which the latter shows detailed contour lines. A CAD-based digital map was generated using the "1:10,000 Joseon Map Compilation" of 1915 and 1921, and geomorphic details were corrected using the Gyeongseong Map of 1928. The GIS work was performed as follows. First, the topography of the city was restored from the "1:10,000 Joseon Map Compilation" of 1921 using CAD and digitized after correction with the "1:10,000 Joseon Map Compilation" of 1915. Map correction was performed by restoring contours that had been lost or changed due to buildings; comparison of the "1:10,000 Joseon Map Compilation" of 1915 and 1921 with the "Gyeongseong Map" of 1928 allowed the changed contours to be estimated. The Gyeongseong Map (1928) shows a large number of lost contours, which could be restored based on the "1:10,000 Joseon Map Compilation" of 1915 and 1921 (Figure 2). Second, the restored map (hereinafter the 1915 Computerized Map) was used to construct geomorphologic data, namely a Triangulated Irregular Network (TIN) (Lee and Shim 2003a) and a Digital Elevation Model (DEM) (Lee and Shim 2003b), followed by stream network extraction for hydraulic/hydrologic analysis (Lee and Shim 2003) (Figure 3). Third, in the process of topographic reproduction, the current topography and that of the 1915 Computerized Map had to be forcibly overlapped on the basis of the digital map provided through the National Spatial Data Infrastructure portal (www.nsdi.go.kr); areas with significant differences between the two maps were given coordinates and corrected through interpolation (Lee and Sohn 2016). Fourth, map correction was performed to match the confluences between the stream and its tributaries with those indicated in historical records, making only the minimum corrections necessary for discrepancies in the passage of waterways and attributing them to topographic changes over time. The result of this restoration and correction work is the "Restored contour and streams network in Joseon Dynasty" (Figure 4).
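As an illustration of the hydrologic analysis in the second step, the sketch below derives a channel network from a DEM raster using the open-source pysheds library. The exact toolchain used in the study is not specified beyond CAD and GIS operations, so pysheds, the file name, and the 500-cell accumulation threshold are all assumptions for illustration.

from pysheds.grid import Grid

# Hypothetical raster export of the restored 1915 topography.
grid = Grid.from_raster("restored_dem_1915.tif")
dem = grid.read_raster("restored_dem_1915.tif")

filled = grid.fill_depressions(dem)    # remove pits so flow can drain
inflated = grid.resolve_flats(filled)  # resolve flat areas
fdir = grid.flowdir(inflated)          # D8 flow directions
acc = grid.accumulation(fdir)          # upstream contributing cells per cell

streams = acc > 500                    # cells above the threshold form the channel network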
Stream network according to the cadastral map (1912)

The 1912 Cadastral Map is the first map of Korea generated using modern survey techniques. It carries no topographic designations, but the stream network and road names are indicated. It was therefore possible to derive the stream network by applying CAD operations to the 1912 Cadastral Map (Figure 5) over the 1915 Restored Map. The result of this CAD application is the real water-stream network of the 1900s (Figure 6).

Comparison between the original stream network and the 1900s real stream network

Given that there were no large-scale civil engineering projects that could have changed the contour lines, the corrected contour lines fixed on the 1915 map were assumed to represent the actual topography of Hanyang in the Joseon period, together with the digital stream network derived from the 1915 map by GIS operations. In this study, the 1915 Digital Stream Network (Figure 7, blue line) derived by GIS analysis was considered the originally formed stream network of the Joseon period. Likewise, the 1900s Real Stream Network (Figure 7, red line), based on the 1912 Cadastral Map, was considered the actual topographic map of late Joseon. Superposing the waterways derived by GIS analysis upon those of the 1912 Cadastral Map revealed that most waterways, including the origin of each tributary, were similar (Figure 8). Around the Cheonggyecheon Stream, the southern flow paths had not changed greatly, with variations mostly seen in the northern flow paths. This suggests that artificial factors operated more strongly in the northern part than in the southern part, which developed organically following the topography. Only in a few locations were altered waterways observed between the original and 1900s stream networks. Two factors may explain such deviations: (i) flow path changes due to urban development and the construction of facilities, and (ii) terrain changes through human activities. For the first factor, there were three examples of flow path changes due to urban development and the construction of facilities: (i) Gyeongmo Palace and Heungdeokdong-cheon (Figure 8 ③); (ii) the establishment of Yukjo Avenue and Baegundong-cheon (Figure 8 ②, Figure 9); (iii) Jongno and Angukdong-cheon.
Comparing the real flow path of the Heungdeokdong-cheon with the GIS analysis result, the flow paths at the confluence of the eastern and western branches into the Heungdeokdong-cheon differ, and the GIS-based confluence is the more natural one. It could be verified that the flow paths of the eastern and western branches towards the confluence in the Joseon period and those on the 1912 Cadastral Map were artificially changed. The causes of this artificial change of flow paths are assumed to be ascribable to the following three factors: (i) the location of Gyeongmo Palace; (ii) the construction of a pond in front of the Gyeongmo Palace; (iii) Banchon's development and expansion.

Baegundong-cheon shows the largest variation when the digital stream network, which represents that of the Joseon period, is compared with the 1912 stream network. Its overall shape in the GIS analysis was skewed towards Gyeongbok Palace, deviating from the historical records stating that it flowed along Jahamun-ro. It can be inferred that the original topography was changed for Yukjo Avenue, along which many administrative institutions had been constructed in front of Gyeongbok Palace at that time, and that this changed the flow path (Figure 9).

As shown in Figure 7, two waterflows crossed Jongno, which had been planned as the representative avenue of the Joseon dynasty, so their flow paths may have been changed to display the dynasty's dignity. As shown in Figure 7, the Angukdong-cheon originally flowed southwards directly from Daesa-dong into the mainstream upstream of Supyo Bridge. However, in order to stop the frequent flooding caused by sudden increases in inflow, the court accepted the proposal of Jeong Jin, the Pan-Hanseong-Busa (corresponding to today's Mayor of Seoul), in 1421 (the third year of Sejong's reign) and dug a ditch as a water path along the backside of the marketplace arrayed in the south-north direction of Jongno, on the occasion of a large-scale engineering of stream branches and creeks within the city wall. At that time, the Angukdong-cheon was induced to flow along the backside of the southern marketplace, while the Hoedong Jesaengdong-cheon, which used to flow along Gahoe-dong, was induced to flow eastwards along the backside of the northern marketplace, to join the Changgyeong Palace Okryu-cheon and flow into the mainstream (Park 2006b). Conclusively, the flow path discrepancies of the Angukdong-cheon and Hoedong Jesaengdong-cheon between the digital and real waterways were verified to be the result of artificial waterway engineering for flood prevention.

A comparison between the digital and real waterways revealed that the flow paths of the Namsomundong-cheon are in general identical, with some exceptions. The waterway that used to flow into the stream after bifurcating at the crossroad of the Gwanghee Gate does not appear in the GIS analysis result. The earth and sand dug out by dredging of the stream were piled up nearby to form a few hills, called Gasan (artificial hills) (Seoul Museum of History 2017). One of these hills changed the original flow path of the Namsomundong-cheon. The bifurcated waterways generated by the creation of Gasan, rather than by natural landform, disappeared as the Gasan disappeared in the 1900s, which explains the geomorphological discrepancies between the real and estimated waterways (Figure 10).
Hanyang's stream network and the formation of administrative boundaries

(1) Hanyang's flood control and forest erosion control efforts

The keywords of Hanyang's urban management were flood control and forest erosion control. The city of Hanyang, surrounded by four mountains, was shaped by the naturally formed mountain system and stream network. According to the Annals of the Joseon Dynasty, this natural environment frequently caused floods through the discharge of large amounts of water into the upstream reaches of the mainstream. To prevent flooding, waterways were induced to flow along detour paths by digging ditches along the backside of the marketplace during the reign of King Sejong. Thus, the dynasty implemented city management policies that used the natural environment formed by mountains and waterways while adapting to them. As part of the forest erosion control and flood control efforts, the Sasan-Geumpyo-Do (map of restricted areas in the four mountains), drawn in 1765 (the 41st year of King Yeongjo), indicates areas where burial and the logging of pine trees were prohibited. This demonstrates that forest erosion control was considered important for flood control and that Hanyang was governed in a manner that maximized safety while adapting to the natural environment.

(2) Administrative boundaries and the stream network

In order to evaluate the correlation between the administrative boundaries (Bang) and the city structure, this research overlapped the administrative boundaries of Hanyang and its urban structure; Hanyang was developed using these two elements while adapting to them. Figure 11 illustrates the waterways of the 24 tributaries of the mainstream of Hanyang. The blue line represents the stream networks identified by GIS analysis, the gray line represents areas where roads and Bang boundaries coincide, and the red line represents areas in which stream networks and Bang boundaries coincide; the figure shows the average match rate between the Bang boundaries and the other factors. The total circumference of the Bang boundaries is 42.6 km. The stream network runs along the Bang boundaries for 15.9 km, a match rate of 37.4%, while roads coincide with the Bang boundaries for 19.9 km, a match rate of 46.8%. Hanyang was thus strongly influenced by its stream network, much as the administrative boundaries of many other cities generally coincide with their stream networks. The coincidence of the stream network and the settlements located within it demonstrates that Hanyang adapted to the given topography to control floods and mountain erosion.
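Such a boundary-coincidence statistic can be reproduced from the GIS layers with a simple buffer-and-intersect operation. In the sketch below, the layer file names, the coordinate system (EPSG:5179, Korea 2000 / Unified CS), and the 20 m tolerance are illustrative assumptions rather than the study's actual parameters.

import geopandas as gpd

bang = gpd.read_file("bang_boundaries.shp").to_crs(epsg=5179)
streams = gpd.read_file("restored_streams.shp").to_crs(epsg=5179)

boundary = bang.boundary.unary_union          # total Bang perimeter (42.6 km here)
corridor = streams.unary_union.buffer(20)     # 20 m corridor around the streams
coincident = boundary.intersection(corridor)  # boundary segments running along streams

print(f"match rate: {coincident.length / boundary.length:.1%}")  # ~37.4% in this study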
As representative examples, the stream networks and administrative boundaries of the Samcheongdong-cheon (①) (Seoul Museum of History 2015), Namsomundong-cheon (④), and Heungdeokdong-cheon (③) will be examined (Figure 8). The stream network of the Samcheongdong-cheon corresponds to the administrative units Jingjang-bang, Gwangwang-bang, Jingcheong-bang, and Seorin-bang. The upstream areas of the Samcheongdong-cheon, Jingjang-bang and Gwangwang-bang to the north, shared boundaries with those of the Samcheongdong-cheon and the Angukdong-cheon and were developed along the boundary of Yulgok-ro in front of Gyeongbok Palace. The boundaries of the downstream areas of the Samcheongdong-cheon, the centrally located Jingcheong-bang and Seorin-bang, were formed around Jongno and Yukjo Avenue, both urban-planning streets, demonstrating that the flow path of the tributary influenced the district division.

The Namsomundong-cheon area corresponds to the administrative unit Myeongcheol-bang of the Joseon period, and its stream network coincides with Myeongcheol-bang's boundaries. The Heungdeokdong-cheon area corresponds to Sunggyo-bang, Yeonhwa-bang, and Geondeok-bang of the Joseon period; its stream network boundary partially coincides with the Bang boundaries within these three units. After evaluating the variables that influenced the city structure of Hanyang, roads showed a match rate of 46.85% and the stream networks a match rate of 37.41% with the administrative boundaries, showing that both the roads and the stream networks greatly influenced the city structure of Hanyang. Such analysis results suggest that natural geomorphological analysis can be an important element of stream network exploration and of multifactorial analysis of urban space.

Conclusions

This study was conducted to investigate the stream network formed in Hanyang and the topographic features of the surrounding areas, for the purpose of identifying the impact of Hanyang's stream network on the formation of the city. To this end, three substudies were conducted: (i) topographic restoration of late Joseon using a digitized map including the stream network of the Joseon period and a cadastral map; (ii) identification and explanation of the discrepancies between the digitally restored stream network and the real stream network by superposing the former upon the latter; (iii) identification of the impact of Hanyang's stream network on its urban structure. The results of this study can be summarized as follows. First, tributaries' geomorphological features and watersheds were extracted using GIS-based hydraulic/hydrologic analysis. The resultant digital stream network was considered the original topography and stream network of the Joseon period, and the real stream network based on the 1912 Cadastral Map was considered the altered topographic map of late Joseon. The waterways derived by GIS analysis and those on the 1912 Cadastral Map were verified to generally coincide with each other, but some altered waterways were observed, with variations more apparent in the northern flow paths than in the southern flow paths around the Cheonggyecheon Stream. The changed waterways are presumably due to two factors: (i) flow path changes due to the construction of facilities; (ii) terrain changes through human activities. Second, the majority of stream networks and roads coincided with the administrative boundaries. The city structure of late Joseon was divided into Bang and Gye settlement units. By verifying that these settlement units were formed within the stream network of the tributaries, it could be concluded that Hanyang's city structure developed in adaptation to its mountain system and stream network and that settlement locations were decided accordingly.

Notes on contributor

Seungwoo Yang studied German urban form in geography at Otto Friedrich University Bamberg, Germany, with W. Krings in 1998, and studied urban design in civil construction at UC Davis in the United States in 2007. Currently, Professor Yang operates the Laboratory of Urban Design and Urban Form in the urban engineering department of the University of Seoul. His main concerns are urban design, urban form, and urban planning, and carrying out research and projects related to them.
2021-09-25T16:18:52.845Z
2021-08-23T00:00:00.000
{ "year": 2022, "sha1": "71f4e2725752bf258efc99905ddcb0d33ae54370", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/13467581.2021.1944164?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "85f0969ff73b2078bc751a6e92e3f9fa3d1052fc", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Environmental Science" ] }
256337085
pes2o/s2orc
v3-fos-license
Impact of investigational microbiota therapeutic RBX2660 on the gut microbiome and resistome revealed by a placebo-controlled clinical trial

Intestinal microbiota restoration can be achieved by complementing a subject's perturbed microbiota with that of a healthy donor. Recurrent Clostridioides difficile infection (rCDI) is one key application of such treatment. Another emerging application of interest is reducing antibiotic-resistant genes (ARGs) and organisms (AROs). In this study, we investigated fecal specimens from a multicenter, randomized, double-blind, placebo-controlled phase 2b study of the microbiota-based investigational drug RBX2660. Patients were administered either placebo, 1 dose of RBX2660 and 1 placebo, or 2 doses of RBX2660 via enema and were longitudinally tracked for changes in their microbiome and antibiotic resistome. All patients exhibited significant recovery of gut microbiome diversity and a decrease in ARG relative abundance during the first 7 days post-treatment. However, the shifts of the microbiome and resistome toward average configurations from unperturbed individuals were more significant and longer-lasting in RBX2660 recipients than in placebo recipients. We quantified microbiome and resistome modification by RBX2660 using a novel "transplantation index" metric. We identified taxonomic and metabolic features distinguishing the baseline microbiome of non-transplanted patients, as well as taxa specifically enriched during the process of transplantation. We elucidated the correlation between resistome and taxonomic transplantation and the post-treatment dynamics of patient-specific and RBX2660-specific ARGs. Whole genome sequencing of AROs cultured from RBX2660 product and patient samples indicates ARO eradication in patients via RBX2660 administration but also, to a lesser extent, introduction of RBX2660-derived AROs. Through shotgun metagenomic sequencing, we elucidated the effects of RBX2660 on the microbiome and resistome. Antibiotic discontinuation alone resulted in significant recovery of gut microbial diversity and reduced ARG relative abundance, but RBX2660 administration more rapidly and completely changed the composition of patients' microbiome, resistome, and ARO colonization by transplanting RBX2660 microbiota into the recipients. Although ARGs and AROs were transmitted through RBX2660, the resistome post-RBX2660 more closely resembled that of the administered product (a proxy for the donor) than an antibiotic-perturbed state. ClinicalTrials.gov, NCT02299570. Registered 19 November 2014.

Background

Intestinal microbiota restoration by microbiota-based therapy, such as fecal microbiota transplantation (FMT) from healthy donors to patients, has been applied as a treatment for disorders caused by intestinal dysbiosis [1]. As the contributions of the gut microbiota to the host immune system, energy metabolism, and central nervous system have been uncovered, the range of potential applications of intestinal microbiota restoration therapy is expanding to various disorders, such as inflammatory bowel disease [2], functional gastrointestinal disorders [3], metabolic syndrome [4,5], and neuropsychiatric disorders [6,7]. Accordingly, studies aimed at understanding and refining the action of intestinal microbiota restoration therapies are being actively conducted [8]. Clostridioides difficile infection (CDI) is one area where intestinal microbiota restoration therapy has been applied successfully.
Although oral administration of antibiotics is the standard first-line therapy for CDI, antibiotics perturb the commensal gut microbiota and decrease colonization resistance against other pathogens [9,10]. Approximately 15 to 30% of CDI patients therefore experience recurrent CDI (rCDI) resulting from either a relapse of the previous CDI or reinfection [11]. Moreover, antibiotic therapies during CDI treatment may promote the expansion of antibiotic-resistant organisms (AROs) such as vancomycin-resistant Enterococci (VRE) [12,13]. On the other hand, intestinal microbiota restoration has been shown to be effective for CDI treatment as well as for the restoration of colonization resistance against C. difficile and AROs [14,15]. Indeed, intestinal microbiota restoration has become a commonly performed investigational therapy for rCDI with decent success rates [8, 16-19]. However, due to the transmissive nature of the treatment, microbiota restoration therapy may communicate not only desirable but also undesirable factors derived from donors. For instance, the transmission of antibiotic-resistant genes (ARGs) and AROs derived from donor samples is a potential risk of fecal transplantation [20,21]. AROs are responsible for increasing numbers of infections each year, and more than 35,000 patients died as a result of ARO infections in the United States in 2017 [22]. Recently, two cases of bacteremia caused by extended-spectrum beta-lactamase (ESBL)-producing Escherichia coli in patients after FMT from the same donor sample were reported, resulting in the death of one of the patients [21]. Moreover, the dissemination of ARGs and pathogenic AROs in patients hampers effective medical care of infections and results in longer hospitalization and higher medical expenditures [23]. Still, multiple studies report efficient reduction of ARGs and decolonization of AROs through microbiota transplantation [24,25].

In the current study, we explored the effect of the microbiota-based investigational drug RBX2660, a suspension of healthy donor microbiota [26-29], on the intestinal microbiome and resistome of recipients treated for rCDI. In an international, multicenter, randomized, and blinded phase 2b study, rCDI patients received either placebo (control group), one dose, or two doses of RBX2660 (Fig. 1), with more patients being recurrence-free after either RBX2660 regimen than after placebo [26]. Through shotgun metagenomic sequencing, we demonstrate considerable shifts of taxonomic and resistome structures common to both placebo- and RBX2660-treated patients, likely from discontinuation of antibiotics, particularly during the first week after treatment. By controlling for placebo effects, we could also distinguish taxonomic and resistome changes specific to RBX2660 treatment. Furthermore, we identified discriminative features strongly correlated with microbiota transplantation and demonstrated an overall decrease in AROs as well as the introduction of a few AROs by RBX2660.

Study cohorts and sample collection

All donors of RBX2660 microbiota completed a comprehensive initial health and lifestyle questionnaire. Their blood and fecal samples were tested for immunodeficiency viruses, C. difficile toxin, and pathogens including AROs such as VRE and methicillin-resistant Staphylococcus aureus before enrollment into the donor program [27,28].
Fecal specimens from a total of 66 patients and their corresponding RBX2660 products were collected during a multicenter, randomized, blinded, and placebo-controlled phase 2b study for the treatment of rCDI (Fig. 1) [26]. Each RBX2660 dose derives from a single donor, and RBX2660 dose selection was not constrained to ensure that a single donor was represented in patients who received two RBX2660 doses (Supplementary Table 1). The first dose of study drug (RBX2660 or placebo) was administered 24-48 h following completion of antibiotic treatment for CDI, and the second treatment was administered 7 ± 3 days later (Fig. 1). Patients who experienced a new rCDI episode within 60 days after the first dose (9 placebo recipients, 5 single RBX2660 recipients, 8 double RBX2660 recipients) were moved to open-label treatment and received two additional doses of randomized RBX2660 (Fig. 1). Patient fecal specimens were collected at selected time points from baseline (day 0) through 365 days after the first dose. AROs from each fecal sample were isolated on selective media plates (the "Methods" section, Supplementary Table 2).

RBX2660 shifted taxonomic structures of patients' intestinal microbiome in a dose-dependent manner

rCDI patients had significantly lower alpha diversity (Shannon diversity) than RBX2660 products before treatment (Fig. 2a), as previously described with 16S sequencing [29]. Following study drug administration, the alpha diversity of all rCDI patients' microbiota increased to near-RBX2660 levels regardless of treatment group, with the steepest increase during the first week (Fig. 2b). The largest taxonomic structural shift also occurred during the first week in all treatment groups (Fig. S1 and S2). Bray-Curtis dissimilarities between each recipient and the corresponding RBX2660 product were calculated to assess the level of taxonomic transformation toward that of RBX2660. For placebo recipients, the dissimilarity was measured from a pseudo-donor (DS00) profile calculated as the average species-level taxonomic profile of all RBX2660 products in this study (Fig. 2c). The mean Bray-Curtis dissimilarity of DS00 from RBX2660 products was 0.4926, lower than the inter-RBX2660 Bray-Curtis distance of 0.6274. Considering the thorough inspection criteria for donors of RBX2660 products, we defined RBX2660 microbiomes as "unperturbed" gut microbiomes. Bray-Curtis dissimilarities between patients and RBX2660 demonstrate that RBX2660 administration effectively changed recipients' microbiome structure toward unperturbed configurations at a larger magnitude and for a longer duration than placebo (Kruskal-Wallis test, P = 0.043 at day 30, P = 0.028 at day 60, Fig. 2d). These microbiome shifts by RBX2660 were not sensitive to the kind of antibiotic administered prior to RBX2660 (Fig. S3). We further compared the original Bray-Curtis dissimilarities between patients and their respective RBX2660 (D_R) to dissimilarities between patients and other, random RBX2660 products (D_O). RBX2660 recipients still exhibited lower D_O values than placebo recipients in a dose-dependent manner (Fig. S4), indicating that RBX2660 shifted patients' gut microbiomes toward an unperturbed microbiome more actively than placebo. In addition, the significantly lower D_R than D_O values of double-dose recipients after RBX2660 administration demonstrated dose-dependent shifts specific to the corresponding RBX2660 (Fig. S4).
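The D_R versus D_O comparison described above is straightforward to express in code; a minimal sketch, in which the profile arrays are hypothetical species-level relative-abundance vectors:

import numpy as np
from scipy.spatial.distance import braycurtis

def dr_do(patient_profile, own_product, other_products):
    """Bray-Curtis distance to the patient's own RBX2660 product (D_R) versus
    the mean distance to random, non-corresponding products (D_O)."""
    d_r = braycurtis(patient_profile, own_product)
    d_o = np.mean([braycurtis(patient_profile, p) for p in other_products])
    return d_r, d_o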
Principal coordinates analysis (PCoA) and PERMANOVA for patients and RBX2660 also indicated that placebo recipients did exhibit taxonomic structural shifts toward RBX2660, but these were not as dramatic as the shifts of double RBX2660 dose recipients toward the first-dose RBX2660 (Fig. 2e). When comparing groups based on rCDI treatment success, treatment-failure patients (who experienced a new rCDI episode within 60 days post-treatment) and treatment-success patients did not exhibit significant differences (Fig. S5a-c). This is likely due to the limited number of treatment-failure samples after baseline, as patients were removed from the blinded study for standard-of-care treatment at failure determination. Thus, we performed general linear model-based multivariate statistical analyses of patients' baseline metagenomes using MaAsLin2 [30] to identify baseline features correlated with rCDI prevention success or failure. Klebsiella pneumoniae was the only species whose relative abundance was significantly associated with treatment failure across all patients (Fig. S5d). When patients were grouped by RBX2660 dose, the model again identified K. pneumoniae as the only potential failure-associated feature among placebo recipients (Fig. S5e) but not among RBX2660 recipients.

Fig. 1 Study design for the use of RBX2660 to prevent recurrent Clostridioides difficile infection (rCDI). A total of 66 patients with a history of rCDI were treated with RBX2660 in a randomized and blinded manner. Placebo (white triangle) and RBX2660 (brown triangle) were administered, and fecal samples (black circle) were collected at the indicated time points. Patients who were declared to have a new episode of rCDI within 60 days (white square) were moved to open-label treatment.

RBX2660 transplanted taxonomic structures to patients

To quantify and compare patients' levels of change in microbiome composition, we calculated a transplantation index quantifying the extent of microbiome convergence toward the corresponding RBX2660 product. This index was defined as the change in Bray-Curtis distances between baseline (Distance_BL) and a selected time point (Distance_T), scaled by the distance from RBX2660 at baseline: (Distance_BL − Distance_T) / Distance_BL. DS00 was used for placebo recipients, whose indices were then used to determine taxonomic transplantation success. To validate the transplantation index as a metric for quantifying microbiome shifts by RBX2660, we also calculated pseudo transplantation indices using dissimilarities between patients and random, non-corresponding RBX2660 products and compared them with the original transplantation indices. The dose-dependent increase in pseudo indices (Fig. S6) is additional evidence that RBX2660 shifted patients' intestinal microbiomes toward the unperturbed microbiome of RBX2660. Some of the pseudo indices were lower than zero, indicating that the transplantation index well reflects the individual directionality of each recipient's microbiome shift toward the respective RBX2660 (Fig. S6).
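The transplantation index defined above reduces to a few lines of code. Here is a minimal sketch using SciPy's Bray-Curtis implementation; the toy relative-abundance profiles are hypothetical. Positive values indicate convergence toward the donor (RBX2660) profile.

import numpy as np
from scipy.spatial.distance import braycurtis

def transplantation_index(baseline, timepoint, donor):
    """(Distance_BL - Distance_T) / Distance_BL with Bray-Curtis distances."""
    d_bl = braycurtis(baseline, donor)
    d_t = braycurtis(timepoint, donor)
    return (d_bl - d_t) / d_bl

patient_d0 = np.array([0.70, 0.10, 0.10, 0.05, 0.05])   # baseline profile
patient_d60 = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # day-60 profile
rbx2660 = np.array([0.20, 0.25, 0.25, 0.15, 0.15])      # donor product profile
print(transplantation_index(patient_d0, patient_d60, rbx2660))  # 0.8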
Fig. 2 RBX2660 shifted taxonomic structures of the gut microbiome of recipients toward a healthy state. a RBX2660 products exhibited significantly higher alpha diversity than patient samples before treatment (Wilcoxon signed-rank test), based on the metagenomic taxonomic profiling data. b Alpha diversity of all patients, including placebo recipients, increased similarly after treatment. Changes in alpha diversity were significant for the first week after treatment, but there was no statistically significant difference among treatment groups (Kruskal-Wallis test). c Principal coordinates analysis (PCoA) showed a species-level clustering of RBX2660 (white) and pseudo-donor sample DS00 (yellow) distinct from patient baseline samples (violet). d Bray-Curtis distance between taxonomic structures of patients and corresponding RBX2660. D1 and D2 indicate the first and second doses, respectively. DS00 was used for calculating the Bray-Curtis distance of placebo recipients. The decrease in Bray-Curtis distances was steepest during the first week after treatment (black, Wilcoxon signed-rank test). RBX2660 recipients showed a more dynamic decrease in Bray-Curtis distances than placebo recipients by day 60 (red, Kruskal-Wallis test). *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001. e Upper panels: PCoA describing the direction of changes in taxonomic structures of RBX2660 recipients; corresponding RBX2660 products and all placebo recipients are included. Lower panels: adjusted P values of PERMANOVA and relevant pairwise comparisons (Pillai-Bartlett non-parametric trace and Benjamini-Hochberg FDR correction). P values of comparisons between placebo and RBX2660 recipients (red asterisks, left y-axis), placebo recipients and RBX2660 (circle, right y-axis), single-dose recipients and RBX2660 (triangle, right y-axis), and double-dose recipients and RBX2660 (square, right y-axis) are presented in the corresponding lower panels.

Statistically significant differences between the original and pseudo transplantation indices of double-dose recipients, but not single-dose recipients (Fig. S6), indicated that double-dose administration allows a more RBX2660-specific microbiome shift than a single dose. RBX2660 recipients were categorized as transplanted or non-transplanted based on whether their transplantation index was higher (transplanted) or lower (non-transplanted) than the maximum value of the placebo group (Fig. 3a). The transplantation ratio trended higher in double-dose recipients than in single-dose recipients; this categorization showed 33.3% and 70.6% transplantation for single- and double-dose recipients, respectively, by day 7 (Chi-square test, P = 0.02752), and 29.4% and 58.3% by day 60 (Chi-square test, P = 0.1212). Non-transplanted patients at day 7 maintained non-transplanted status until day 60, regardless of dose. On the other hand, 1 single-dose recipient (R1-21) and 3 double-dose recipients (R2-01, R2-03, and R2-14) failed to maintain their day-7 transplanted state until day 60 and eventually reverted to below the transplantation threshold. Veillonella atypica was the only baseline taxonomic feature determined by linear discriminant analysis effect size (LEfSe) [31] that distinguished patients with successful microbiome transplantation by day 60 from non-transplanted patients in both single and double RBX2660 treatment arms (Fig. 3b).
Fig. 3 a We defined taxonomic transplantation as a state showing a higher transplantation index than that of all placebo recipients (green). Patients who were declared to have rCDI within 60 days are marked (x). The white square represents the patient who exhibited a lower transplantation index for the first dose but a higher transplantation index for the second dose than placebo patients (R2-21, Fig. S7a). b Higher baseline relative abundances of Veillonella atypica in patients who showed durable taxonomic transplantation by day 60 in both single and double RBX2660 treatment groups (Wilcoxon signed-rank test, P = 0.027). c Linear discriminant analysis effect size (LEfSe) determined baseline taxonomic features of the obstinately non-transplanted patients who exhibited lower transplantation indices than placebo recipients at day 60 after double RBX2660 treatment. Thirteen species among the 18 taxonomic features were intrinsically vancomycin resistant (violet square, including E. casseliflavus of low resistance). There was no taxonomic feature specific to transplanted patients determined by LEfSe. Genus (d) and species (e) enrichment associated with taxonomic transplantation (transplanted, green; non-transplanted, purple) was identified through a two-part zero-inflated Beta regression model with random effects (ZIBR). *P ≤ 0.05, **P ≤ 0.01.

Resistome regression significantly correlated with transplantation index

Prior to treatment, rCDI patients showed a resistome alpha diversity similar to that of RBX2660 (Wilcoxon signed-rank test, P = 0.18, Fig. 4a) when ARGs were grouped into ARG families based on the organizational structure in CARD [44]. However, the relative abundance of total ARGs was significantly higher in the patients than in RBX2660 (Wilcoxon signed-rank test, P < 0.0001, Fig. 4b), and it decreased over time in all treatment arms including the placebo group (Fig. 4c). Patients' resistome composition was distinct from that of RBX2660 products, but the antibiotic treatment prior to study drug administration did not lead to noticeable differences in resistome (Fig. S8a-c). Specifically, major facilitator superfamily (MFS) and resistance-nodulation-cell division (RND) efflux pumps were the major ARG families present in rCDI patients before treatment, whereas CfxA beta-lactamase, tetracycline-resistant ribosomal protection proteins, and Erm 23S rRNA methyltransferases were representative of the RBX2660 resistome (Fig. S8e). We tracked individual changes in the resistome composition of each patient for 60 days using t-distributed stochastic neighbor embedding (t-SNE) analysis [45] and resistome transplantation indices defined analogously to the microbiome transplantation index. rCDI patients showed distinctive resistome compositions compared to those of RBX2660 prior to treatment, but over time their resistome compositions converged to become similar to RBX2660 (Fig. 4d). The speed of resistome transformation toward RBX2660-like structures varied by patient. The convergence toward the RBX2660 resistome structure showed strong correlation with taxonomic transplantation irrespective of treatment arm (R² = 0.406, P < 0.0001, Fig. 4e). RBX2660 administration led to higher taxonomic and resistome transplantation indices than placebo (Fig. 4e). To identify features distinguishing patient and RBX2660 resistomes, we used a random forest classifier (Fig. S9a-b). Of the top 10 features of importance, 7 ARGs, namely MFS efflux pump, RND efflux pump, OXY β-lactamase, Pmr phosphoethanolamine transferase, undecaprenyl pyrophosphate related proteins, ATP-binding cassette (ABC) efflux pump, and small multidrug resistance (SMR) efflux pump, were specific to patients' baseline resistomes, while Class A β-lactamases (CfxA and CblA) and a tetracycline-resistance protein, which are frequently identified in healthy populations or donor stools in FMT trials [20, 46-49], were classified as RBX2660-specific ARGs (Fig. 5a).
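The classifier-based feature ranking described above can be sketched as follows; the input matrix, labels, and ARG-family names are hypothetical placeholders, and the hyperparameters are illustrative rather than those used in the study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def top_discriminative_arg_families(X, y, arg_names, k=10):
    """Rank ARG families by random-forest importance for patient-vs-RBX2660.

    X: samples x ARG-family relative abundances; y: 1 = patient baseline,
    0 = RBX2660 product. Returns the k most important (name, score) pairs.
    """
    clf = RandomForestClassifier(n_estimators=1000, random_state=0)
    clf.fit(X, y)
    order = np.argsort(clf.feature_importances_)[::-1][:k]
    return [(arg_names[i], float(clf.feature_importances_[i])) for i in order]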
Fig. 4 (panels c-e) c A significant decrease in ARG RPKM was observed over time in all treatment groups (Wilcoxon signed-rank test with Benjamini-Hochberg FDR correction, FDR < 0.05). Bars indicate the mean of individual ARG relative abundances. D1, the first dose; D2, the second dose. d Patients and RBX2660 products were clustered separately in t-distributed stochastic neighbor embedding (t-SNE) analysis of resistome structures at day 0. Patient resistomes became similar to RBX2660 over time, but the speed of change varied for each patient regardless of RBX2660 dose and taxonomic transplantation index. e RBX2660 simultaneously shifted both taxonomic and resistome structures more dynamically than placebo. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001.

Relative abundances of all selected ARGs were significantly altered in recipients one week after study drug administration (Fig. 5b-k). The regression of patient-origin ARGs occurred in all patients without statistically significant differences among placebo and RBX2660 recipients (Fig. 5b-h and S9c-i). Administration of RBX2660 increased the relative abundances of RBX2660-origin β-lactamases in a dose-dependent manner (Fig. 5i, j), while the relative abundance of the tetracycline-resistant ribosomal protection protein increased in all patients irrespective of treatment (Fig. 5k).

RBX2660 effectively cleared AROs compared to placebo but introduced new AROs

We identified both persisting and newly introduced AROs based on whole genome sequence analyses of isolates from both blinded and open-label treatment patients. ARO isolates were Escherichia coli (n = 104), vancomycin-resistant Enterococcus (VRE) (n = 25), and other species (n = 135). The majority of RBX2660-derived AROs were E. coli (Fig. 6). We selected E. coli and VRE, the plurality of screened AROs, for further analyses based on the availability of donor-recipient matched pairs and longitudinal samples. Pairwise average nucleotide identity (ANI) was above 97% for all E. coli isolates (Fig. S10), with more than 99.43% identity for all VRE (Fig. S11). Core genome phylogeny indicated the E. coli were mostly of the B2 and D phylogroups. Isolates clustered together not only based on the patient of origin, but also with their corresponding RBX2660 (Fig. S10). In general, RBX2660 recipients demonstrated faster clearance of AROs than placebo recipients (Fig. 6). Simultaneously, new AROs from RBX2660, mostly E. coli, were introduced to the corresponding patients. Calculation of single nucleotide polymorphism (SNP) distances (the "ARO tracking and SNP calling" section) revealed that many of these AROs were likely clonal, with a median of 6 SNPs for all pairwise distances indicating near-identical genomes (Supplementary Table 3). We sorted post-treatment ARO E. coli into RBX2660-origin or patient-origin strains and determined clonal persistence following RBX2660 intervention. The introduced AROs were found in patients longitudinally for up to 1 year post-treatment (Fig. 6). In some cases, we observed clonal persistence of patient AROs (e.g., patients R1-05 and R2-18), while in others we observed strain replacement by RBX2660-derived AROs (e.g., patient R2-16). Interestingly, patients receiving the same RBX2660 product did not display identical trends: patient R2-21 received the same RBX2660 product as R2-18, yet only R2-21 engrafted the RBX2660 ARO (Fig. 6). Persisting AROs derived from patients R1-05 and R2-18 showed higher phenotypic resistance than their corresponding RBX2660-derived AROs, which failed to engraft.
On the other hand, patient R2-21 lacked baseline AROs and perhaps provided a "clean slate" for ARO engraftment. Isolate ARGs did not indicate a changing resistance profile for these ARO lineages over time; for instance, E. coli isolates exhibited an average of 60 predicted ARGs, and these numbers remained stable throughout the time frame of this investigation (Supplementary Table 4). The 15 RBX2660-origin AROs that engrafted into corresponding recipients harbored beta-lactamase genes such as AmpC (12 AROs; Supplementary Table 4). Antibiotic susceptibility testing (AST) corroborated these findings on the phenotypic level, with all introduced AROs being resistant to ciprofloxacin and levofloxacin, and 60% (9/15) resistant to ampicillin (Fig. S12). Approximately half were resistant or intermediate to trimethoprim-sulfamethoxazole (7) and doxycycline (7), and a few were resistant to ampicillin-sulbactam (3) and cefazolin (4), while all were susceptible to cefotetan, ceftazidime, meropenem, imipenem, piperacillin-tazobactam, ceftazidime-avibactam, amikacin, aztreonam, tigecycline, and nitrofurantoin (Supplementary Table 2). The introduced AROs were Enterobacteriaceae and resistant to a median of 4 antibiotics, less than the patient-origin Enterobacteriaceae AROs (median resistance to 7 antibiotics, Supplementary Table 2). The most resistant isolate introduced from RBX2660 was an E. coli strain engrafted into patient R1-09; it was retrieved at 5 subsequent time points (final fecal sample collected at 12 months, all < 20 SNPs, Fig. 6, Supplementary Table 2). This isolate, DI11, was resistant to ceftriaxone and cefepime and classified as an ESBL-producing E. coli (Supplementary Table 2). We further validated ESBL production of DI11 and the corresponding patient isolates using double-disk diffusion tests (Supplementary Table 6).

Fig. 5 Recipients adopted a resistome profile similar to that of donors. a The ten most important patient-specific (violet) and RBX2660-specific (white) antibiotic-resistant gene (ARG) families were identified through a random forest classifier. b-k Relative abundance of the selected 10 ARGs in RBX2660 (D) and in patients who received placebo (gray), single RBX2660 (red), and double RBX2660 (blue). The relative abundance of patient-specific ARGs decreased over time in all patients without statistically significant differences among treatment arms (b-h). The relative abundance of the two RBX2660-specific beta-lactamases in patients increased with RBX2660 administration in a dose-dependent manner (i-j, red, Kruskal-Wallis test). The tetracycline-resistant ribosomal protection protein was an RBX2660-specific ARG, but its relative abundance in placebo recipients also increased after treatment (k). These changes were significant during the first week after treatment (black, Wilcoxon signed-rank test). *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001.

Discussion

We investigated factors underlying changes in the microbiome derived from RBX2660 in a randomized, double-blind, placebo-controlled clinical trial [26]. Consistent with a previous evaluation [29], but at higher resolution using shotgun metagenomic sequencing, we demonstrated RBX2660 dose-dependent changes in the microbiome. Still, all patients initially increased in alpha diversity and shifted taxonomic structure regardless of treatment, which could be attributed to the natural trajectory of recovery after antibiotic discontinuation [10,50].
We hypothesized that it would be possible to distinguish RBX2660-derived effects from microbiome recovery after antibiotic discontinuation by using both the extent and the direction of microbiome shifts in placebo recipients as thresholds. To test the hypothesis, we developed a simple yet novel metric, the transplantation index. The transplantation index accounts for long-term changes in the microbiome toward the corresponding RBX2660 while controlling for individual variation in baseline composition.

[Displaced caption of Fig. 6: RBX2660 effectively cleared antibiotic-resistant organisms (AROs) compared to placebo and simultaneously introduced new AROs. We specifically tracked patient-derived (blue dot) and RBX2660-derived AROs (red dot). Patients with no ARO detected from both the baseline sample and the corresponding RBX2660 were excluded. Persistence (solid line), disappearance (dashed line), and introduction (curved line) of the AROs were determined by genomic comparison of AROs (the "ARO tracking and SNP calling" section). Squares indicate sample availability (blue, patient baseline samples; red, RBX2660; gray, patient samples after RBX2660 administration). Patients with no samples after day 7 were marked with red. (1) R0-03 showed 2-3 separate lineages of E. coli prior to day 30, which were reduced to 1 lineage by day 60. (2) Patient R2-16 received the same RBX2660 product twice. (3) Although the two RBX2660 products for patient R2-05 were prepared from different donor samples, ARO E. coli strains screened from them appeared to be clonal (distance = 8 SNPs).]

With the highest transplantation index among placebo recipients as the threshold, we demonstrated that RBX2660 recipients exhibited stronger and longer-lasting microbiome changes toward the corresponding RBX2660 than placebo recipients. In an effort to predict transplantation success, we identified baseline taxonomic features that had strong correlations with taxonomic non-transplantation. Species with intrinsic vancomycin resistance were discriminative baseline features of the 4 patients who failed to acquire or maintain transplantation by day 60 despite double RBX2660 administration (R2-01, R2-02, R2-03, and R2-14). Previously reported microbiome signatures of vancomycin administration, including lower diversity, lower Firmicutes, and higher Proteobacteria levels [10,51,52], could not distinguish the 4 non-transplanted patients from transplanted patients. The specific enrichment of intrinsically vancomycin-resistant species could therefore be an indicator of more severe microbiome disturbance by vancomycin. Interestingly, the baseline relative abundance of V. atypica was significantly and positively correlated with durable taxonomic transplantation of the RBX2660 microbiome in both the single- and double-dose arms. V. atypica has long been known as an oral bacterium that communicates with lactic acid bacteria and co-develops oral plaque biofilm [53,54], but a recent study has highlighted its capacity to build metabolomic networks via a distinctive metabolic function (converting lactate to propionate) in the host gut [55]. Further studies combining both metagenomic and metabolomic analyses are required to uncover the mechanism underlying the positive role of V. atypica in durable microbiota transplantation. Relative abundances of the Barnesiella and Coprobacillus genera were significantly correlated with taxonomic transplantation status. Barnesiella, which exhibited a positive correlation with taxonomic transplantation, has also been linked to clearance of VRE colonization in mice [56].
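The transplantation index itself is characterized above only qualitatively: long-term movement of a patient's taxonomic profile toward the corresponding RBX2660 product, controlled for the patient's baseline. Its exact formula is not reproduced in this excerpt, so the sketch below is one plausible Bray-Curtis-based reading of that description rather than the study's actual computation; the function name and the normalization by baseline distance are our assumptions.

```python
# Hypothetical Bray-Curtis-based reading of the transplantation index;
# the exact formula used in the study is not given in this excerpt.
import numpy as np
from scipy.spatial.distance import braycurtis

def transplantation_index(baseline, followup, donor):
    """Fractional movement of a patient's taxonomic profile toward the
    corresponding RBX2660 (donor) profile, relative to baseline.

    All inputs are species-level relative-abundance vectors over the
    same taxa; positive values indicate movement toward the donor.
    """
    d_baseline = braycurtis(baseline, donor)  # dissimilarity pre-treatment
    d_followup = braycurtis(followup, donor)  # dissimilarity post-treatment
    if d_baseline == 0:                       # already identical to donor
        return 0.0
    return (d_baseline - d_followup) / d_baseline

# Toy example: a patient moving halfway toward the donor scores 0.5.
donor    = np.array([0.6, 0.3, 0.1])
baseline = np.array([0.1, 0.2, 0.7])
followup = (baseline + donor) / 2
print(transplantation_index(baseline, followup, donor))  # 0.5
```

Read this way, the highest index reached by any placebo recipient serves as the null threshold that an RBX2660 recipient must exceed to count as transplanted.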
Two Bacteroides species, B. ovatus and B. uniformis, were overrepresented in transplanted patients, consistent with previous reports of their association with the unperturbed gut microbiome [57,58]. We further hypothesized that patient microbiome features are associated with the prevention of CDI recurrence during the RBX2660 clinical trial. General linear model-based multivariate statistical analyses identified K. pneumoniae as a species associated with treatment failure in the models fitted to all patients and to placebo recipients alone, but not in the model fitted to RBX2660 recipients. Baseline K. pneumoniae might indeed be an rCDI-associated feature, such as a biomarker of the imbalanced microbiome [59] that underlies CDI, but not correlate with the outcomes of RBX2660 recipients, whose microbiomes were affected by RBX2660. Together with the higher efficacy of RBX2660 than placebo for rCDI prevention [26], the model outputs suggest that RBX2660 transplantation restored the disturbed intestinal microbiota to outcompete C. difficile. We reasoned that both dose levels provide enough unperturbed microbiota to exceed the minimum threshold needed for clinical efficacy, and that the second dose provides additional microbiota from which the taxonomic transplantation may arise. Despite the apparent difference between the transplantation indices of single- and double-dose recipients, the two treatment arms showed equivalent clinical efficacy [26]. Likewise, although early-stage transplantation by day 7 appeared to be an important factor determining durable transplantation by day 60, it did not always secure successful prevention of rCDI, and vice versa. The differences between rCDI patients and RBX2660 in both ARG relative abundance and resistome architecture narrowed in all three treatment arms over time. These outcomes suggest that antibiotic discontinuation could be the driver of the resistome changes during this clinical trial. Despite the natural recovery after antibiotic discontinuation, we hypothesized that transplantation of RBX2660 microbiota shaped the patient resistome. RBX2660 indeed simultaneously introduced and eradicated both ARGs and AROs in patients during the process of transplantation. Previous studies have also demonstrated the efficacy of FMT for eradicating AROs [60], but to our knowledge this is the first to comprehensively track clonality for both RBX2660- and patient-derived ARO isolates. Most introduced AROs were antibiotic-resistant E. coli that are commonly present in a healthy population [61,62]. We identified one ESBL-producing E. coli strain carrying AmpC and CTX-M-14 from an RBX2660 product that was administered to one patient, R1-09. The patient was a single-dose recipient, with recorded treatment success (i.e., no recurrence of CDI and absence of diarrhea for 8 weeks post-treatment) and no known clinical disease resulting from the trial. ESBL-producing E. coli are not inherently more virulent than other strains but can pose a therapeutic challenge if infection occurs [63]. Of note, this trial enrolled patients from December 2014 to November 2015, prior to the recognition of ESBL organisms as an important aspect of donor screening. At that time, donor stools were screened for carbapenem-resistant Enterobacteriaceae (CRE) but not ESBL, whereas Rebiotix now screens all donor stools for both CRE and ESBL. Moreover, to date, there have been no adverse infection events due to bacterial transmission from RBX2660 in any clinical trials. In light of a recent death caused by
ESBL-producing E. coli bacteremia in an immunocompromised patient after FMT [21], our findings highlight the importance of a controlled and regulated donor screening program as well as mandatory, monitored safety reporting. Likewise, our findings prompt a general consideration of risk factors for infections from intestinal microorganisms in any live biotherapeutic investigational product.

Conclusions

We thoroughly examined the impact of RBX2660 on the taxonomic structure, resistome, and ARO colonization of recipients during a randomized, placebo-controlled clinical trial. This study is based on samples from a completed placebo-controlled clinical trial of intestinal microbiota restoration, which enabled us to determine the microbiome effects of the microbiota-based drug. Using the transplantation index, the current study demonstrated that RBX2660 administration transplanted its microbiota into the recipients in a dose-dependent manner. V. atypica and intrinsically vancomycin-resistant species were discriminative features of patients showing long-lasting microbiota transplantation and of patients resisting microbiota transplantation, respectively. While antibiotic discontinuation alone significantly reduced patient-origin ARGs, RBX2660 administration led to more dynamic transformations of the resistome. RBX2660 simultaneously introduced RBX2660-origin ARGs in a dose-dependent manner. RBX2660 more efficiently decolonized AROs than placebo but simultaneously introduced new AROs. Genomic outcomes of intestinal microbiota restoration with RBX2660 in the current study show both the latent limitations and the potential benefits of microbiota transplantation, and highlight the importance of the design and quality control of microbiota-based drugs.

Study cohort, drug, and specimen

Subjects were recruited from among 17 centers in the USA and Canada from 10 December 2014 through 13 November 2015. Subjects were adults with recurrent CDI who had either (i) at least two recurrences after a primary episode (three CDI episodes in total) and had completed at least two rounds of oral antibiotic therapy, or (ii) at least two episodes of severe CDI resulting in hospitalization. They were randomly assigned to one of three treatment groups: placebo, single, or double doses of RBX2660. All treatments were blinded and delivered by enema [26]. The second dose was administered approximately 7 days after the first dose. For patients that received two RBX2660 doses, donor selection was random and not constrained to provide a single representative donor per patient. The selection and screening of donors for RBX2660 were performed as previously described [27,28]. The placebo was composed of normal saline and formulation solution, including cryoprotectant, in the same proportions used for the RBX2660 preparation. RBX2660 and placebo were stored frozen after preparation until administration. They were thawed for 24 h in a refrigerator and administered within 48 h after thawing. AROs were isolated from patient fecal samples and RBX2660 products on selective agar media plates (chromID VRE (bioMérieux, Marcy-l'Etoile, France), MacConkey with cefotaxime (Hardy Diagnostics, Santa Maria, CA), MacConkey with ciprofloxacin (Hardy Diagnostics), and HardyCHROM™ ESBL (Hardy Diagnostics)) at 35°C in air. The remaining fecal samples were stored frozen at −80°C until metagenomic DNA extraction.
Isolate colonies were sub-cultured to trypticase soy agar with 5% sheep blood (Becton Dickinson, Franklin Lakes, NJ) and identified using the VITEK MS matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) system [64,65]. Each isolate was frozen in tryptic soy broth with glycerol at −80°C.

Antibiotic susceptibility testing

Antibiotic susceptibility testing was performed through Kirby-Bauer disk diffusion, and the resulting zone sizes were interpreted according to the M100 document from the Clinical and Laboratory Standards Institute [66].

DNA extraction and sequencing

Metagenomic DNA was extracted from approximately 100 mg of fecal sample using the DNeasy PowerSoil Kit (Qiagen) following the manufacturer's protocol, except for the lysis step: fecal samples were lysed by 2 rounds of bead beating for 2 min (4 min total) at 2500 oscillations/min using a Mini-Beadbeater-24 (Biospec Products). Samples were chilled on ice for 2 min between the two bead beating rounds. Extracted DNA was quantified using a Qubit fluorometer dsDNA HS Assay (Invitrogen) and stored at −20°C until library preparation. Metagenomic DNA was diluted to 0.5 ng/μL before preparing the sequencing library. Libraries were prepared using the Nextera DNA Library Prep Kit (Illumina) as previously described [67]. The libraries were then purified with the Agencourt AMPure XP system (Beckman Coulter) and quantified by the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen) before sequencing. Approximately 70 library samples were pooled in an equimolar manner at a final concentration of 5 nM for each sequencing lane. Prepared pools were submitted for 2 × 150 bp paired-end sequencing on an Illumina NextSeq High-Output platform at the Center for Genome Sciences and Systems Biology at Washington University in St. Louis, with a target sequencing depth of approximately 5.5 million reads per sample. Isolate genomic DNA was extracted using the QIAamp BiOstic Bacteremia DNA Kit (Qiagen). Libraries for whole genome sequencing of isolates were prepared from diluted genomic DNA (0.5 ng/μL) as described above. About 180 libraries were pooled together in an equimolar manner at a final concentration of 5 nM for each sequencing lane. Prepared pools were submitted for 2 × 150 bp paired-end sequencing on an Illumina NextSeq High-Output platform at the Center for Genome Sciences and Systems Biology at Washington University in St. Louis, with a target sequencing depth of approximately 2 million reads per sample.

Microbiomic analyses

Microbiome taxonomic composition was predicted by MetaPhlAn v2.0 [77] and expressed as relative abundances. Genus-level composition plots were obtained by grouping genera present in less than 50% of samples as "Other." The DS00 pseudo-donor microbiome was obtained by averaging the species-level taxonomic profiles of all RBX2660 microbiomes. Bray-Curtis distances were calculated using the vegan package [78] and visualized as PCoA plots via the ape package [79] in R 3.5.3. LEfSe [31] identified baseline taxonomic and metabolic features distinguishing transplanted and non-transplanted patients (alpha value for the factorial Kruskal-Wallis test = 0.05, threshold on the logarithmic LDA score = 2). HUMAnN2 [80] was employed for metabolic pathway prediction. Longitudinal changes distinguishing transplanted and non-transplanted patients were identified using the ZIBR [43] package in R.
Taxa were filtered for non-zero presence in at least 40% of samples and > 0.01 relative abundance at the 90th percentile. Each taxon's relative abundance was modeled with both the logistic (X) and beta (Z) components (alpha value for Benjamini-Hochberg-adjusted P = 0.05), with transplantation outcome as a fixed effect. Baseline features distinguishing patients with and without rCDI were detected using MaAsLin2, a general linear model-based detector of microbiome associations with metadata, in this case associations with treatment outcome (success or failure). Taxa were filtered with a minimum prevalence of 0.1 and a minimum relative abundance of 0.0001. Five different models were fitted: one for all patients (total n = 63), one for each treatment arm separately (placebo, n = 21; single dose, n = 22; double dose, n = 21), as well as one for RBX2660 recipients (n = 43) (alpha value for Benjamini-Hochberg-adjusted P = 0.05).

Resistome identification and random forest classifier

ARGs in the microbiome were identified using ShortBRED [81] with CARD [44]. Isolate ARGs were identified with RGI and CARD [44,82]. The resulting genes were manually curated into more general ARG families (n = 64). A subset of 70% of the available resistomes was then used to train a random forest classifier distinguishing patient baseline and RBX2660 resistomes (training set n = 103), which was then tested on the remaining samples (test set n = 45). The random forest classifier was built with the package scikit-learn (https://scikit-learn.org/stable/index.html) on Python 3.7.3, with trees averaging 12 nodes and a maximum depth of 4.

ARO tracking and SNP calling

SNPs were called using Bowtie2 [83], SAMtools, and BCFtools [84], with the first isolate from the patient or corresponding RBX2660 product used as the reference genome. Reads from subsequent isolates of the same species were aligned against the reference with Bowtie2 (-X 2000 --no-mixed --very-sensitive --n-ceil 0,0.01). BAM files were obtained and sorted with SAMtools (view and sort), and then converted to pileup files (mpileup). BCFtools view generated VCF files, and variants were called with the following criteria: minimum coverage of 10 reads per SNP, major allele frequency above 95%, and FQ score of −85 or less. Indels were excluded. VCF files for each patient were compiled with BCFtools merge, after which SNPs were parsed and counted using custom Python and R scripts.

Additional file 1: Figure S1. Taxonomic overview of patient stool samples at the genus level. Figure S2. Taxonomic shift by treatments (related to Fig. 2). Figure S3. The effect of antibiotics prior to study drug on taxonomic shift by RBX2660 (related to Fig. 2 and 3). Figure S4. Bray-Curtis dissimilarities between patients and the respective RBX2660 (D_R) or other random RBX2660 (D_O). Figure S5. Changes in the Bray-Curtis dissimilarities between a patient and the corresponding donor. Figure S6. Transplantation
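The custom Python and R scripts used to parse and count SNPs in the "ARO tracking and SNP calling" section above are not included in this excerpt. The following is a minimal, illustrative sketch of how the stated filters (minimum coverage of 10 reads, major allele frequency above 95%, FQ of −85 or less, indels excluded) might be applied; the INFO tags assumed here (DP, DP4, FQ, INDEL) follow the legacy samtools/bcftools output format and may differ in newer bcftools versions.

```python
# Illustrative VCF filter for the SNP criteria stated above; assumes
# legacy samtools/bcftools INFO tags (DP, DP4, FQ, INDEL).

def parse_info(info_field):
    """Turn 'DP=32;DP4=0,1,16,15;FQ=-96.5' into a dict."""
    out = {}
    for item in info_field.split(";"):
        if "=" in item:
            key, value = item.split("=", 1)
            out[key] = value
        else:
            out[item] = True  # presence flags such as INDEL
    return out

def count_passing_snps(vcf_path, min_dp=10, min_maf=0.95, max_fq=-85.0):
    """Count variant records meeting the coverage, allele-frequency,
    and FQ thresholds, skipping indels."""
    n_pass = 0
    with open(vcf_path) as vcf:
        for line in vcf:
            if line.startswith("#"):
                continue                        # skip header lines
            fields = line.rstrip("\n").split("\t")
            info = parse_info(fields[7])
            if "INDEL" in info:                 # indels excluded
                continue
            if int(info.get("DP", 0)) < min_dp:
                continue                        # < 10 reads of coverage
            if float(info.get("FQ", 0.0)) > max_fq:
                continue                        # FQ must be <= -85
            # Major (alternate) allele fraction from DP4: ref-forward,
            # ref-reverse, alt-forward, alt-reverse read counts.
            ref_f, ref_r, alt_f, alt_r = map(int, info["DP4"].split(","))
            total = ref_f + ref_r + alt_f + alt_r
            if total and (alt_f + alt_r) / total > min_maf:
                n_pass += 1
    return n_pass

# Hypothetical usage (file name is ours, purely illustrative):
# print(count_passing_snps("isolates_merged.vcf"))
```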
2023-01-29T15:32:44.107Z
2020-08-31T00:00:00.000
{ "year": 2020, "sha1": "21600824fc3fdbda73b0ee99624accb7531a2c49", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s40168-020-00907-9", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "21600824fc3fdbda73b0ee99624accb7531a2c49", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
4110566
pes2o/s2orc
v3-fos-license
Nintedanib antiangiogenic inhibitor effectiveness in delaying adenocarcinoma progression in Transgenic Adenocarcinoma of the Mouse Prostate (TRAMP)

Background: In recent times, anti-cancer treatments have focused on Fibroblast Growth Factor (FGF) and Vascular-Endothelial Growth Factor (VEGF) pathway inhibitors so as to target tumor angiogenesis and cellular proliferation. One such drug is Nintedanib; the present study evaluated the effectiveness of Nintedanib treatment against in vitro proliferation of human prostate cancer (PCa) cell lines, and against the growth and progression of different grades of PCa lesions in the pre-clinical transgenic adenocarcinoma of the mouse prostate (TRAMP) model. Methods: Both androgen-dependent (LNCaP) and androgen-independent (PC3) PCa cell lines were treated with a range of Nintedanib doses for 72 h, and the effect on cell growth and expression of angiogenesis-associated VEGF receptors was analyzed. In the pre-clinical efficacy evaluation, male TRAMP mice starting at 8 and 12 weeks of age were orally fed with vehicle control (10% Tween 20) or Nintedanib (10 mg/Kg/day in vehicle control) for 4 weeks, and were either sacrificed immediately after the 4 weeks of drug treatment or sacrificed 6-10 weeks after stopping drug treatment. At the end of the treatment schedule, mice were sacrificed and the ventral lobe of the prostate was excised, along with the liver, an essential metabolic organ, and subjected to histopathological and extensive molecular evaluations. Results: The total cell number decreased by 56-80% in LNCaP and 45-93% in PC3 cells after 72 h of Nintedanib treatment at 2.5-25 μM concentrations. In the pre-clinical TRAMP studies, Nintedanib led to a delay in tumor progression in all treatment groups; the effect was more pronounced when treatment was given at the beginning of glandular lesion development and continued till study end. A decreased microvessel density and VEGF immunolocalization was observed, besides decreased expression of Androgen Receptor (AR), VEGFR-1 and FGFR-3, in some of the treated groups. No changes were observed in the histological liver analysis. Conclusions: Nintedanib treatment was able to significantly decrease the growth of PCa cell lines and also delay the growth and progression of PCa lesions to higher grades of malignancy (without inducing any hepatotoxic effects) in TRAMP mice. Furthermore, it was observed that Nintedanib intervention is more effective when administered during the early stages of neoplastic development, although the drug is capable of reducing cell proliferation even after treatment interruption.

Background

Prostate cancer (PCa) is the most common type of cancer in men and the second leading cause of cancer-associated deaths, particularly in men over 50 years of age [1]. According to Siegel [2], more than 1.6 million new cancer cases were diagnosed in 2015 in the United States alone. Genetic aberrations, capable of disrupting homeostasis between the epithelial and stromal prostate compartments, are considered one of the leading causes of this disease; the prostatic stroma is responsible for the quick response in case of tissue injury/damage to the prostatic epithelium [3,4]. Different animal models have been used in anti-PCa efficacy studies, including the transgenic adenocarcinoma of the mouse prostate (TRAMP) model, which mimics the spontaneous growth and progression of PCa as it occurs in humans. According to Wikstrom et al. [5],
lesions found in the prostate of TRAMP mice can be classified histopathologically into different grades: low- and high-grade prostatic intraepithelial neoplasia (PIN) lesions, which advance to different stages of adenocarcinoma, such as well-differentiated and poorly differentiated adenocarcinoma; in addition, there are extensive changes in the expression of molecular markers [6]. The PIN stage is characterized by a stratification pattern and epithelial cell projection into the acinar lumen, showing atypical cells and loss of cell polarity, nuclear enlargement, and chromatin condensation. Well-differentiated adenocarcinoma is characterized by the invasion of epithelial cells into the prostatic stroma and basement membrane disruption. This latter grade of lesion can develop into poorly differentiated adenocarcinoma, where the tumor is made up of a cluster of indistinct cells [7]. Though the transgene is significantly expressed in the dorsolateral prostate of TRAMP mice, it is also expressed at high levels in the prostate ventral lobe [8]; a recent study has shown changes in the expression of 36 proteins during carcinogenesis in the ventral lobe of the prostate [9]. Furthermore, a 2016 study showed a significant delay in tumor progression in the prostate ventral lobe of TRAMP mice after anti-inflammatory therapy [10]. Angiogenesis is known for its importance in the development and maintenance of the tumor and is responsible for the recruitment of new blood vessels from pre-existing vessels, occurring in response to the demand for nutrients and oxygen by tumor cells [11]. Currently, inhibition of tumor angiogenesis has been shown to be a promising therapeutic strategy in cancer treatment, and Vascular-Endothelial Growth Factor (VEGF) inhibitory drugs have been used successfully in clinical practice [12]. However, cancer cells may show a signaling exchange mechanism with the Fibroblast Growth Factor (FGF) pathway, leading to tumor growth even under VEGF inhibition. FGF signaling and receptors are responsible for regulating mechanisms such as differentiation, survival, motility and invasiveness, as well as being intimately involved in angiogenesis [13]. Currently, Nintedanib (BIBF 1120), a selective FGF and VEGF receptor inhibitor, is being evaluated in clinical trials for its safety and efficacy in PCa treatment [13]. Studies have shown that Nintedanib is associated with a significantly improved survival rate in patients [14]. Other studies have shown that Nintedanib administered at doses of 50-100 mg/Kg/day for 2 weeks could inhibit hepatocellular tumor growth in nude mice [15]. Furthermore, Nintedanib has also been shown to decrease tumor volume in mice injected with head, neck, and renal carcinoma cells [16]. Thus, the objective of the present study was to evaluate the efficacy of Nintedanib treatment against in vitro proliferation of human PCa cell lines, and against the growth and progression of different grades of PCa lesions in the TRAMP model. Besides investigating the effect on aberrant signaling pathways associated with PCa, therapy effectiveness was also analyzed in terms of the structural and hormonal responses as well as the neovascularization of the prostate ventral lobe of TRAMP mice at different stages of the disease.

Reagents and cell culture

Human prostate carcinoma LNCaP and PC3 cells were obtained from American Type Culture Collection (Manassas, VA).
Cells were cultured in RPMI 1640 with 10% fetal bovine serum (Hyclone, Logan, UT) under standard culture conditions (37°C, 95% humidified air and 5% CO₂). Cells were plated at a density of 5 × 10³ cells/cm² under standard culture conditions. After 24 h, cells were treated with either DMSO alone (control) or different doses of Nintedanib dissolved in DMSO (the DMSO concentration did not exceed 0.1% (v/v) in any treatment). At the end of the treatment time (72 h), cells were collected after brief trypsinization, washed with PBS, and then stained with Trypan blue. The total cell number (colorless and blue-stained) and dead cells (blue-stained) were counted under a light microscope using a hemocytometer.

Western blotting for cell lysates

Approximately 80 μg of protein lysate from whole-cell extracts per sample was denatured in 5× sample buffer and subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) on an 8% Tris-glycine gel. The separated proteins were transferred onto a nitrocellulose membrane, followed by blocking with 5% non-fat milk powder (w/v) in Tris-buffered saline (10 mM Tris-HCl, pH 7.5, 100 mM NaCl, 0.1% Tween 20) for 1 h at room temperature. Membranes were probed for Vascular-Endothelial Growth Factor Receptor 1 (VEGFR-1; sc-316, Santa Cruz Biotechnology, USA) and VEGFR-2 (sc-6251, Santa Cruz Biotechnology, USA), followed by the appropriate peroxidase-conjugated secondary antibody, and visualized by an ECL detection system. Membranes were also stripped and re-probed with the loading control β-actin (A2228, Sigma-Aldrich). In each case, blots were subjected to multiple exposures on film to make sure that the band density was in the linear range. The autoradiograms/bands were scanned with Adobe Photoshop 6.0 (Adobe Systems, San Jose, CA).

Animals and experimental procedure

A total of 72 transgenic male TRAMP mice (C57BL/6-Tg(TRAMP)824Ng/J × FVB/JUnib) were provided by the Multidisciplinary Center for Biological Investigation in Laboratory Animal Science at the State University of Campinas (UNICAMP). The animals received water and solid food ad libitum (Nuvilab, Colombo, PR, Brazil). Procedures were approved by the Committee of Ethics in Animal Research (protocol no. 3285-1) and carried out in agreement with the Ethical Principles for Animal Research established by the Brazilian College for Animal Experimentation (COBEA). The mice were weighed and divided into 9 experimental groups (n = 10 per group), as follows (Fig. 1a), receiving either the vehicle (10% Tween 20, 10 mL/kg/day) or Nintedanib (10 mg/Kg/day in control vehicle) orally [16]: T8 group: untreated 8-week-old TRAMP mice; TC12 and TN12 groups: TRAMP mice treated with vehicle or Nintedanib, respectively, from 8 to 12 weeks of age and sacrificed thereafter; TC16 and TN16 groups: TRAMP mice treated with vehicle or Nintedanib, respectively, from 12 to 16 weeks of age and sacrificed thereafter; TC22(8-12) and TN22(8-12) groups: TRAMP mice treated with vehicle or Nintedanib, respectively, from 8 to 12 weeks of age and sacrificed 10 weeks later (at 22 weeks of age); TC22(12-16) and TN22(12-16) groups: TRAMP mice treated with vehicle or Nintedanib, respectively, from 12 to 16 weeks of age and sacrificed 6 weeks later (at 22 weeks of age).
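To keep the nine treatment windows straight, the design above can be encoded as plain data. The sketch below is purely illustrative (the class and field names are ours); it reproduces the groups exactly as listed and derives each group's washout period, i.e., the gap between treatment end and sacrifice.

```python
# Illustrative encoding of the nine TRAMP experimental groups described
# above. Ages are in weeks; class and field names are ours.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Group:
    name: str
    treatment: Optional[str]   # "vehicle", "nintedanib", or None (untreated)
    start_week: Optional[int]  # treatment start (mouse age)
    end_week: Optional[int]    # treatment end
    sacrifice_week: int        # age at euthanasia

GROUPS = [
    Group("T8",          None,          None, None,  8),
    Group("TC12",        "vehicle",        8,   12, 12),
    Group("TN12",        "nintedanib",     8,   12, 12),
    Group("TC16",        "vehicle",       12,   16, 16),
    Group("TN16",        "nintedanib",    12,   16, 16),
    Group("TC22(8-12)",  "vehicle",        8,   12, 22),
    Group("TN22(8-12)",  "nintedanib",     8,   12, 22),
    Group("TC22(12-16)", "vehicle",       12,   16, 22),
    Group("TN22(12-16)", "nintedanib",    12,   16, 22),
]

# Derive each group's washout (weeks between treatment end and sacrifice):
for g in GROUPS:
    if g.end_week is not None and g.sacrifice_week > g.end_week:
        print(f"{g.name}: {g.sacrifice_week - g.end_week}-week washout")
```

Running this prints the 10-week washout for the two 22(8-12) groups and the 6-week washout for the two 22(12-16) groups, matching the schedule stated in the text.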
At the end of treatment, the mice were weighed on a Denver P-214 scale (Denver Instrument Company, Arvada, CO, USA), anesthetized with 2% xylazine hydrochloride (5 mg/Kg; Konig, São Paulo, Brazil) and 10% ketamine hydrochloride (60 mg/Kg; Fort Dodge, IA), and euthanized. Samples from the prostate ventral lobe were collected and submitted to histopathological, immunohistochemical and western blotting analyses. The body weight gain during the treatment (n = 3) was recorded for each experimental group.

Histopathological analysis

Five prostate ventral lobe samples per experimental group were fixed in Bouin solution, washed in 70% ethanol, dehydrated in increasing concentrations of ethanol, cleared in xylene and embedded in Paraplast (Paraplast Plus, St. Louis). Sections 5 μm thick were obtained using a microtome (Zeiss Hyrax M60); the sections were deparaffinized in xylene, hydrated in a decreasing ethanol series, rinsed in distilled water and stained with hematoxylin-eosin. Photomicrographs were captured by a digital camera coupled to a Nikon Eclipse E-400 light microscope (Nikon, Tokyo, Japan). Morphological evaluation was performed (blind assay) in 15 randomly captured areas per animal (under 400× magnification), resulting in 75 areas per group. Each area was divided into 4 quadrants and histopathologically classified as per the modified recommendations of Berman-Booty [7]: normal (without lesion), low/high-grade prostatic intraepithelial neoplasia (PIN), and well-differentiated adenocarcinoma. Liver samples from the TC12, TN12, TC22(8-12) and TN22(8-12) groups were also collected and subjected to the same histopathological protocol as above.

Immunohistochemistry

The same prostate ventral lobe tissue samples used in the histopathological analyses were also used for CD31, Androgen Receptor (AR), Estrogen Receptor α (ERα), Proliferating Cell Nuclear Antigen (PCNA) and Vascular Endothelial Growth Factor (VEGF) immunohistochemical analysis using a standard protocol. Briefly, antigens were retrieved by boiling the sections in 10 mM citrate buffer (pH 6.0). After that, the sections were incubated in H₂O₂ to block endogenous peroxidase. Nonspecific binding was blocked by incubating the sections in a blocking solution (BSA in TBS-T buffer) for 1 h at room temperature. AR, ERα, PCNA, CD31 and VEGF antigens were detected using polyclonal rabbit anti-AR (sc-816, Santa Cruz Biotechnology, USA), polyclonal rabbit anti-ERα (sc-542, Santa Cruz Biotechnology, CA), monoclonal anti-PCNA [PC10] (ab29, Abcam), polyclonal rabbit anti-CD31 (sc-1506-R, Santa Cruz Biotechnology, CA) and polyclonal mouse anti-VEGF (sc-53462, Santa Cruz Biotechnology, CA, USA), respectively, diluted in 1% BSA and applied to the sections overnight at 4°C. The sections were then washed with TBS-T and subsequently incubated in HRP-conjugated secondary antibody from the Envision Kit (DAKO) for 60 min for CD31, goat anti-rabbit IgG (W4018, Promega Corporation, Madison, WI) for AR and ERα, and goat anti-mouse IgG (W4021) for VEGF detection. After washing in TBS-T, peroxidase activity was detected using the diaminobenzidine (DAB) chromogen (Sigma-Aldrich, St. Louis, MO) for 5 min, which indicated the immunoreactivity of the antibodies. Sections were lightly counterstained with Harris Hematoxylin, dehydrated in an increasing ethanol series and xylene, mounted in Entellan (Merck, Darmstadt, Germany) and photographed using the Nikon Eclipse E-400 microscope.
The prostatic sections were analyzed using a multipoint system with 160 intersections [17]. Thus, 10 fields per mouse, totaling 50 fields per group, were randomly captured (under 400× magnification). Values were determined by counting immunoreactive points coinciding with grid intersections, divided by the total number of points. The result was expressed as the relative frequency of positive staining for the molecule in all experimental groups. Samples that were not incubated with primary antibody were used as negative controls.

[Displaced caption of Fig. 1: Experimental groups, cell viability and glandular lesion features of the prostate. a Scheme of the experimental design of the different groups. b and c PC3 and LNCaP viability after Nintedanib treatment. d Acini with healthy glandular areas/without lesion (asterisk) and low grade PIN occurrence (arrow). e High grade PIN (arrow), acini with lumen occupied by epithelial projections (cribriform type, Cr). f Epithelial cell invasion through the stroma with basal membrane discontinuity (arrow), characterizing well-differentiated adenocarcinoma. g Glandular disorganization with loss of acinar structure, characterizing undifferentiated adenocarcinoma. Hematoxylin and eosin, 400×, bar = 100 μm.]

Microvessel density

Microvessel density determination was performed by counting the number of CD31-positive blood vessels in the prostatic stroma in ten random, non-overlapping fields per animal, following the criteria proposed by Weidner [18]. Microvessel density was expressed as the mean value obtained from the ten fields in each animal and also as the maximum density in a particular field (modified from Hochberg [19]).

Western blotting for tissue lysates

Prostate ventral lobe samples from five animals per group were collected and frozen in liquid nitrogen. The samples were weighed and homogenized in a Polytron homogenizer (Kinematica Inc., Lucerne, Switzerland) in 40 mL/mg of protein extraction buffer. The tissue homogenates were centrifuged at 18,659 rcf for 20 min at 4°C, and a sample of each extract was used for protein quantification with Bradford reagent (Bio-Rad Laboratories, Hercules, CA). The supernatants were mixed (1:1) with 3× Laemmli buffer and placed in a dry bath at 100°C for 5 min. Aliquots containing 50 or 75 μg of protein were separated by electrophoresis in SDS-PAGE gels under reducing conditions. After electrophoresis, proteins were transferred to Hybond-ECL nitrocellulose membranes (Amersham, Pharmacia Biotech, Arlington Heights, IL) at 120 V for 90 min. The membranes were blocked with BSA in TBS-T for 60 min and incubated at 4°C overnight with the primary antibodies for AR, ERα and VEGF (described under the immunohistochemistry section), Fibroblast Growth Factor Receptor 3 (FGFR-3; sc-123, Santa Cruz Biotechnology, USA) and Vascular-Endothelial Growth Factor Receptor 1 (VEGFR-1; sc-9029, Santa Cruz Biotechnology, USA). Membranes were then incubated for 2 h with the same HRP-conjugated secondary antibodies used for immunostaining, diluted in a 1:5,000-1:10,000 range. After washing in TBS-T, peroxidase activity was detected through incubation of the membranes in chemiluminescent solution (Pierce Biotechnology, Rockford, IL) for 5 min, followed by fluorescence capture using the Gene Gnome equipment and the Gene Sys image acquisition software (Syngene Bio Imaging, Cambridge, UK). Mouse monoclonal anti-β-actin (sc-81178, Santa Cruz Biotechnology, CA) antibody was used as an endogenous control for comparison among groups.
The intensity of antigen bands in each experimental group was determined by densitometry using ImageJ (Image Analysis and Processing in Java) software and was expressed as the mean percentage ± standard deviation relative to β-actin band intensity.

Statistical analysis

Data are presented as average percentages or mean ± standard error of the mean (SEM). Parametric variables were compared by ANOVA followed by the Bonferroni test, or by two-tailed t-test (for morphology and western blotting analyses). Differences were considered significant when p < 0.05. The statistical analyses were performed with GraphPad Prism software (version 5.0).

Inhibitory effects of Nintedanib on growth of androgen-dependent and -independent human PCa cells

In order to evaluate the anticancer efficacy of Nintedanib against both androgen-dependent and -independent human PCa cells, we studied the in vitro effect of Nintedanib treatment on the cellular growth/viability of both cell types (under 10% serum conditions) using the Trypan blue dye exclusion method. Both androgen-dependent LNCaP and androgen-independent PC3 cells treated with Nintedanib at concentrations of 0, 2.5, 5, 10 and 25 μM for 72 h showed a concentration-dependent decrease in growth. The total cell number decreased by 56-80% (P < 0.001) in LNCaP and 45-93% (P < 0.001) in PC3 cells after 72 h of Nintedanib treatment at 2.5-25 μM concentrations, respectively (Fig. 1b and c). These in vitro observations confirmed that Nintedanib had an inhibitory effect against the growth of both androgen-dependent and -independent human PCa cells.

TRAMP mice as an appropriate model for prostate ventral lobe adenocarcinoma evaluation

Though most TRAMP studies have focused on efficacy evaluation in the dorsolateral prostate, the transgene is also expressed in the ventral prostate of TRAMP mice. In this study, we focused on neoplastic changes as observed in the ventral prostate; this lobe also showed progressive development of PCa with increasing age in the TRAMP mice. The lesions were characterized as acinar epithelium proliferative lesions with different grades of severity. In 8- and 12-week-old mice (T8 and TC12 groups) there was a predisposition to low grade PIN (43.86% and 43.16%, respectively), characterized by cellular proliferation not occupying the acinar lumen, known as the micro-papillary form (Figs. 1d, 2b, f and g), at the beginning of prostatic lesion development. In these lesions, cellular atypia, such as increased nuclear size and cytoplasm reduction, was verified in the epithelium (Fig. 2e). Also, the occurrence of inflammatory cells and hyperplasia of the smooth muscle cells were verified in the prostatic stroma. The smooth muscle cells were placed around the stroma, in addition to collagen fiber thickening, particularly around glandular regions showing epithelial cell proliferation (Figs. 2o, p and q). On the other hand, a predisposition to high grade PIN was observed at more advanced ages (Fig. 1e). Finally, the 22-week-old TC22(8-12) and TC22(12-16) mice presented prostatic adenocarcinoma at advanced grades of development, characterized by the frequent occurrence of high grade PIN, representing 55.28% of the glandular total in the TC22(8-12) group and 57.08% in the TC22(12-16) group (Figs. 2c, k, and m). Also, there was well-differentiated adenocarcinoma, characterized by epithelial cell proliferation invading the stroma, characterizing basal membrane discontinuity (Fig. 1f).
In addition, undifferentiated adenocarcinoma was seen, characterized by loss of acinar structure, in which the epithelial cells occupied the organ entirely (Fig. 1g). The prostatic stroma of these two groups also presented collagen fiber thickening, with disorganization of stromal elements (Figs. 2u and x).

Nintedanib delayed adenocarcinoma progression in the prostate ventral lobe, particularly when administered at the beginning of glandular lesion development

The mice from the TN12 and TN16 groups showed delayed prostatic adenocarcinoma progression, characterized by a significant increase in healthy acinar frequency and a decrease in high grade PIN foci in all Nintedanib-treated groups vs. their control groups (Figs. 2a, c, h and j). The reduction of prostatic epithelial cell proliferation was greatest in the TN12 group, where Nintedanib treatment was given from 8 to 12 weeks of age and mice were sacrificed immediately after treatment end (Fig. 2h). The mice from the TN12 group also showed a significant increase in no-lesion acini frequency (34%), in addition to epithelial atrophy, in comparison to the control group (TC12). Also, Nintedanib treatment reduced the well-differentiated adenocarcinoma incidence to 1.07%. There was also a reduction of low grade PIN (37.21%) and high grade PIN (27.71%) in relation to the TC12 control group (Figs. 2b and h). The prostatic stroma of the Nintedanib-treated mice showed a slight decrease in fibromuscular layer thickening; however, hypertrophied smooth muscle cells and regions with inflammatory cells were still observed (Fig. 2r). The TC12 group showed a predominance of low grade PIN (43.16%), while only 19.62% of the total glandular area showed acini without structural changes (Fig. 2b and g). High grade PIN foci (35.31%) and well-differentiated adenocarcinoma (1.68%) were verified in the TC12 group. The prostatic stroma of the TC12 group mice showed a slight increase in fibrillar elements, characterized by smooth muscle cell hypertrophy and collagen fiber layer thickening. In addition, the occurrence of inflammatory cells was identified, always seen around epithelial proliferation foci in the TC12 group prostate ventral lobe (Fig. 2q). Furthermore, the prostate ventral lobe of the TN16 group showed a decrease in high grade PIN incidence (36.80%), in addition to a significant increase in glandular areas with healthy acini (19.72%), compared to control prostates of the TC16 group, which showed a predominance of high grade PIN (46.61%) (Fig. 2a, c, i and j). Also, slight collagen fiber layer thickening in the prostatic stroma was observed in the TN16 group, in contrast to the pronounced thickening in the TC16 group (Fig. 2s and t).

Nintedanib decreased adenocarcinoma frequency in the prostate ventral lobe even after treatment had stopped

The histopathological evaluation indicated that there was also a delay in adenocarcinoma progression in the prostate ventral lobe of TRAMP mice whose Nintedanib treatments had ended 10 or 6 weeks before euthanasia (Fig. 2k, l, m and n).

[Displaced caption of Fig. 2: Frequency of the different lesion types and photomicrographs of the prostate ventral lobe. a Normal prostatic acini. b Low-grade PIN. c High-grade PIN. d Well-differentiated adenocarcinoma. Values expressed as average percentage; *p < 0.05. e Atypical secretory epithelial cells in the prostate, highlighting cellular enlargement (1000×, bar = 20 μm). f T8 group: epithelial cell proliferation; low grade PIN foci (arrow); inset: 1000×, bar = 20 μm.
g TC12 group: cellular proliferation with prostatic epithelium stratification, occupying the acinar lumen, characterizing high grade PIN with cribriform-feature acini (Cr). h TN12 group: epithelial atrophy and scarce cellular proliferation (inset: 1000×, bar = 20 μm). i TC16 group: high grade PIN foci, cribriform-feature acini (Cr; inset: 1000×, bar = 20 μm). j TN16 group: a slight decrease of cellular proliferation and epithelial atrophy (arrow). k TC22(8-12) group: cellular proliferation foci, characterizing well-differentiated adenocarcinoma (arrows); inset: 1000×, bar = 20 μm. l TN22(8-12) group and n TN22(12-16) group: occasional cellular proliferation foci. m TC22(12-16) group: cellular proliferation and infiltration in the stroma (inset), with well-differentiated adenocarcinoma foci (arrow). o Accumulation of inflammatory cells and thickening of the collagen fiber layer (asterisk) (1000×, bar = 20 μm). p T8 group: normal distribution of fibrillar elements. q TC12 group: slight increase of fibrillar elements. r TN12 group: fibrillar stromal elements distributed regularly around the glandular acini. s TC16 group: slight thickening of the collagen fiber layer (arrow). t TN16 group: stroma without changes in the fibrillar and cellular elements.]

This effect was also associated with a significant increase in healthy prostatic area frequency and a decrease in high grade PIN frequency (Fig. 2a and c).

Nintedanib significantly decreased lesion/tumor proliferation in the ventral prostate of TRAMP mice

Animals treated with Nintedanib during tumor development stages (starting at 8 and 12 weeks of age) and sacrificed immediately after treatment showed a decrease in PCNA immunolabeling (TN12 and TN16 groups). Furthermore, groups which received Nintedanib from 8 to 12 weeks of age and were sacrificed at 22 weeks of age also showed a significant reduction of epithelial cell proliferation (Fig. 3), confirming the histopathological analyses.

Nintedanib intervention inhibited the tumor angiogenesis process in prostatic tumor tissue of TRAMP mice, even after treatment had stopped

Ventral prostates of control TRAMP mice showed a gradual increase in prostatic tumor vascularization, characterized by increases in prostatic microvessel density and epithelial VEGF immunoreactivity. However, in Nintedanib-treated TRAMP prostates there was a significant decrease in average and maximum (in a given field) microvessel density (Fig. 4). The reduction was stronger in the Nintedanib-treated group which started treatment at 8 weeks of age. Also, the Nintedanib treatment groups showed a significant decrease in prostatic epithelial VEGF immunoreactivity in the TN12 and TN16 groups (Figs. 5 and 6a) and in VEGFR-1 levels in the TN12 group (Fig. 6b). Maximum FGFR-3 expression in the ventral prostate was observed at 16 weeks of age in control mice, after which expression started to decrease in advanced cancer stages. Nintedanib treatment, on the other hand, caused a significant decrease in FGFR-3 levels in the TN16 and TN22(8-12) groups (Fig. 6c). To confirm whether Nintedanib treatment (5 and 10 μM) could also affect the expression of VEGF receptors in human PCa cell lines, we analyzed the expression of both VEGFR-1 and VEGFR-2 in cell lysates. We observed that 10 μM Nintedanib decreased the expression of VEGFR-2 (although not significantly) in PC3 cells at both the 48 and 72 h time points, while having no effect on VEGFR-1 (Fig. 7a and b); a moderate decrease in the expression of both VEGFR-1 and VEGFR-2 was also observed in LNCaP cells (Fig.
7c and d) after 48 and 72 h of treatment.

Nintedanib treatment led to a decrease in AR expression but showed no effect on ERα levels

AR immunoreactivity in the TRAMP ventral prostate showed a slight increase from 8 to 22 weeks of age. Notably, Nintedanib treatment reduced not only AR immunoreactivity but also AR protein expression in tissue lysates in the prostate of mice that began treatment at 8 or 12 weeks of age and were sacrificed immediately afterward (Figs. 6d and 8). While the TRAMP prostate also showed a progressive increase in ERα levels in both prostatic epithelial cell cytoplasm and nucleus with increasing mouse age, Nintedanib administration had no effect on ERα expression (Figs. 6e and 9).

Nintedanib treatment neither affected body weight gain nor caused any pathological changes in livers of TRAMP mice

The animals from the Nintedanib-treated groups did not show significant alterations in body weight gain during the treatment when compared to control groups (Fig. 10a). Furthermore, no gross or histopathological changes were observed in the liver, an essential metabolic organ. Briefly, mice from the TC12, TN12, TC22(8-12) and TN22(8-12) groups showed typical healthy hepatic morphology; no changes in ductular structures or stromal alterations were noted in the hepatic tissue analysis. Some hepatocytes were bi-nucleated or showed polyploidy in the hepatic tissue of the TC12 and TN12 group mice (Fig. 10b and c). Bi-nucleated or polyploid hepatocytes were more frequently observed in the livers from the TC22(8-12) and TN22(8-12) groups, but the typical hepatic arrangements were preserved in both groups and no histological differences were detected between them (Fig. 10d and e). No metastatic lesions were found in the livers of the control or Nintedanib-treated groups.

Discussion

It is known that the oncogenic process may cause an imbalance between pro-apoptotic and proliferative signals in the tissues, resulting in uncontrolled cell growth. Cells in a proliferative process demand an increase in nutrients and oxygen, triggering angiogenesis [20]. Angiogenesis is characterized by the formation of new blood vessels from pre-existing capillaries, participating in the critical processes of tumor growth, progression and metastasis [21]. Also, angiogenesis modulates the expression of several growth factors such as Vascular-Endothelial Growth Factor (VEGF), Platelet-Derived Growth Factor (PDGF) and Fibroblast Growth Factor (FGF), essential for tumor cell growth and survival [22]. These molecules are directly involved in tumor growth and progression, in addition to deregulated cell proliferation, differentiation and survival [23,24]. Nowadays, studies have shown different prostate cancer treatments with anti-angiogenic drugs involving inhibition of VEGF, the primary regulator of the proliferation and migration of endothelial cells [25]. Recently, two studies from our research group showed the effects of antiangiogenic therapy, with a decrease in microvessel density in TRAMP animals; in addition, there was also a reduction of tissue inflammation [26,27]. However, a compensatory mechanism of signaling exchange between the VEGF, PDGF and FGF pathways is known to occur, as angiogenesis is regulated by multiple factors, resulting in tumor resistance to therapies that inhibit only VEGF [25,28].
Currently, a new class of triple VEGF, PDGF and FGF signaling inhibitors, represented by Nintedanib, is being evaluated. Nintedanib, or BIBF 1120, has shown good antitumor activity and inhibition of cell proliferation in growing tumors due to its combined actions on tumor cells, endothelial cells and pericytes. An in vitro study showed that Nintedanib treatment induced cell growth interruption and reduced the survival of smooth muscle cells, endothelial cells and pericytes [16]. Furthermore, phase I clinical safety studies showed that 76.2% of patients with different types of solid tumors, following daily treatment with Nintedanib, achieved stability in disease progression and increased average survival time [29]. Moreover, 68.4% of Nintedanib-treated patients showed decreased serum levels of prostate-specific antigen (PSA) [30]. Recent studies have shown the efficiency of Nintedanib against castration-resistant prostate cancer, the most challenging type of prostate cancer [31,32]. According to Hilberg et al. [16], Nintedanib effectiveness in delaying tumor progression is accompanied by little or no adverse effects when compared to other anti-angiogenic agents. Thus, based on our results, we concluded that Nintedanib treatment was able to delay prostate cancer progression in mice, especially when treatment occurred in the initial grades of disease development, at 8 or 12 weeks of age, and continued till study end.

[Displaced figure caption fragment: Values relative to β-actin (endogenous control), compared between each treated group and its respective control group and expressed as mean ± S.E.M.; *p < 0.05; **p < 0.01.]

In addition, the finding herein is important due to the molecular characterization of the prostatic microenvironment, establishing a correlation with clinical Nintedanib trials, which are performed in relapsed or refractory cancer [33,34]. Thus, taking into consideration the results herein, we point to the fact that Nintedanib efficiency could be improved if treatment started in early lesion grades, although these cancer grades are difficult to identify. Importantly, even 10 weeks after stopping the drug treatment, the drug's inhibitory effects on prostatic epithelial cell proliferation could be observed. The animals receiving Nintedanib did not show alterations in body weight gain, reinforcing the absence of toxicity signs after treatment in the present study, considering that body weight can provide important information about animal health [35]. Also, the present results showed that liver samples from control and Nintedanib-treated groups at both time points analyzed (12 and 22 weeks of age) presented similar histological characteristics. Thus, there were no treatment-induced changes in liver morphology. The numerous binucleated hepatocytes are a common characteristic of the species, though their significance is not yet known [36]. However, we found polyploid cells more frequently in the hepatocyte population of 22-week-old animals in both the control and the treated groups. These results suggest that the increase in polyploid hepatocytes was not the result of hepatotoxic processes, as was seen in the study by Gentric [37], where the induction of cellular stress resulted in this feature. On the other hand, it is known that the occurrence of polyploidy increases with aging in mice [38] and may also vary with the strain [39]. Therefore, this seems to be a likely reason for the polyploid hepatocyte increase in 22-week-old TRAMP mice when compared to those at 12 weeks of age.
However, so far there is no characterization of the TRAMP mouse liver across the life cycle, under healthy or pathological conditions, that would allow a comparison with the histological results described in this study. Finally, the results of the liver analysis reported here suggest that the concentrations tested in this model are safe. Nevertheless, more extended studies are needed to bring greater clarity to aspects of liver physiology in the TRAMP model, in control conditions or under specific treatment. The angiogenesis trigger, known as the "angiogenic switch," consists of a series of molecular events that can be activated by injuries, inflammatory and immune responses, hypoxia and genetic mutations. In prostate cancer, this trigger seems to occur in the early grades of tumor development in both TRAMP mice and human beings [40]. VEGF is a key molecule for angiogenesis initiation, as its active release signals lead to new vessel formation. Increased VEGF expression is related to tumor formation and progression [41]. A recent study from our research group showed that VEGF, as well as microvessel density (assessed by CD31 immunoreactivity in endothelial cells), increased in the prostatic tissue of 8- to 18-week-old TRAMP mice [26]. Similar findings were seen in the present study, which showed a progressive increase in VEGF and CD31 in the prostate of 8- to 22-week-old TRAMP mice. Microvessel density analysis by CD31 immunostaining is considered a marker for tumor progression evaluation in several types of cancer [42]. Singh and colleagues [43] showed an increase in microvessel density, demonstrated by an increase in CD31-positive vessels, during prostate cancer progression in TRAMP mice. Another study verified increased vascularization and subsequent tumor growth in normal (C57BL/6J) mice inoculated with TRAMP-C1 prostate cells [44], showing the relationship of angiogenesis to cancer progression. Also, according to Pan and colleagues [45], when angiogenesis is inhibited, there is also a decrease in tumor growth. Notably, in all Nintedanib treatment groups there was a decrease in tissue neovascularization, as observed by decreased microvessel density and decreased immunoreactivity of VEGF, a crucial molecule for tumor angiogenesis. Among the different pro-angiogenic tumor markers, the most important and well-known are the VEGFs and their receptors (VEGFRs). VEGFR-1 is present in vascular-endothelial cells and prostatic tumor cells, in addition to hematopoietic cells and monocytes; however, some of the biological effects of VEGF on endothelial cells are also exerted through VEGFR-2, expressed by both prostate tumor cells and stroma [46,47]. Also, studies have shown that two events are responsible for the triggering of angiogenesis: the initiation event, in which there is a predominance of VEGFR-1 expression, and the progression event, showing increased VEGFR-2 expression [48,49].

[Displaced caption of Fig. 9: ERα immunoreactivity. a, b and c Quantification of epithelial (cytoplasmic and nuclear) and stromal ERα, respectively. d Negative control. e-m Photomicrographs of the T8, TC12, TN12, TC16, TN16, TC22(8-12), TN22(8-12), TC22(12-16) and TN22(12-16) groups (inset in e: negative control). ERα-positive staining is indicated by an asterisk (cytoplasmic), arrow (nuclear) and arrowhead (stromal). Values expressed as mean or maximum mean values and compared between each treated group and its respective control group. Ep = epithelium; St = stroma; L = lumen.
Counter-stain: Harris Hematoxylin. 400×, bar = 100 μm.]

[Displaced caption of Fig. 10: Body weight gain and photomicrographs of the liver in TRAMP mice from control and treated groups. a Body weight gain of control and treated TRAMP animals. b TC12. c TN12. d TC22(8-12). e TN22(8-12). Normal liver arrangement is noted. The typical hepatic histology, with some hepatocytes with large nuclei indicating polyploidy, can be seen in the liver of 12-week-old mice (b and c). Polyploid hepatocytes are merely more frequent in control (d) and treated (e) mice at 22 weeks of age. Hematoxylin and eosin, 200×, bar = 200 μm.]

Indeed, the results of the current study showed the antiangiogenic activity of Nintedanib in reducing levels of VEGFR-1, the main receptor involved in cell migration and proliferation, in the TRAMP mouse prostate [50]. The FGF receptors (FGFRs) also participate in several normal cellular processes of prostatic tissue, such as cell growth and differentiation, in addition to angiogenesis. The various receptors found in this tissue differ in their expression, specificity, pathways of activation and effects exerted [51]. Mutations, amplifications and translocations occurring in the genes for FGFRs have also been identified as responsible for abnormal cell proliferation and migration, as well as increased anti-apoptotic and pro-angiogenic signaling. It is the presence of these alterations that confers tumor sensitivity to the antiangiogenic agent [52]. Changes in FGFR-3 expression are found in the prostate more frequently in low-grade tumors [53,54]. This may explain the peak expression of this molecule observed here in the prostate of 16-week-old TRAMP mice and its subsequent decrease with advancing age, as the cancer progresses to a more advanced stage. Indeed, Nintedanib was effective precisely at the time when FGFR-3 expression was increased, since the treatment significantly decreased FGFR-3 levels in 16-week-old TRAMP animals. On the other hand, some studies have shown that FGFR-3 is not overexpressed in benign prostatic hyperplasia or in prostate cancer itself [55-57], corroborating our results, which showed that the levels of this receptor decrease with the advancement of this malignant disease. Thus, we can attribute the decrease in tumor cell proliferation and the subsequent delay in tumor progression to the antiangiogenic effect of Nintedanib, considering that different angiogenic parameters decreased in the prostate of TRAMP mice. This suggests that angiogenesis pathway inhibition may be a clinically useful strategy in the prevention and intervention of PCa progression. Normal prostate growth and development is androgen-dependent, relying on the binding of testosterone and dihydrotestosterone to, and activation of, the androgen receptor (AR), leading to cell survival and functional activity [58]. However, overstimulation of the prostate by these hormones can result in PCa development, showing the central role of the AR in tumor cell growth and survival. Studies in castration-resistant cancers, in which there is AR overproduction, show hypersensitivity to small amounts of circulating androgens, indicating that overproduction of the receptor contributes to tumor development and progression [58,59]. Recently, researchers have shown that a compound derived from Magnolia officinalis, called Honokiol, was able to decrease the viability of androgen-dependent (LNCaP) and androgen-independent (C4-2) tumor cells.
Also, this could be related to inhibition of AR expression, both in these cells and in cells derived from TRAMP mice (TRAMP-C1) [60]. The same authors referred to the regulation of PSA by AR, demonstrating that the action of Honokiol in decreasing AR expression was accompanied by inhibition of AR activity; accordingly, there was a reduction in PSA secretion by both LNCaP and C4-2 cells [60]. Another study, using stromal and epithelial cells in AR-knockout TRAMP mice, showed that the lack of expression of this receptor leads to a decrease in tumor size, severity and metastatic invasiveness, and to a reduction in testosterone levels, showing that the AR has a dominant role in promoting the proliferation of cancer cells [61]. Thus, the present results are in line with literature reports, showing that AR levels increase slightly during cancer progression in TRAMP mice from 8 to 22 weeks of age. Importantly, we observed that Nintedanib significantly reduced AR levels in the prostate of TRAMP mice, mainly in the groups receiving the drug early in tumor development (8-12 and 12-16 weeks of age) and sacrificed immediately, indicating the effectiveness of the drug in inhibiting cell proliferation via androgenic pathways.

It is known that ERα has proliferative effects in PCa and participates in the development of more aggressive lesions, considering that its blockade reduces tumorigenesis [62,63]. Previous studies have shown that ER levels increase in advanced cancer grades in the ventral prostate of rats and in senile rats when compared to young animals [64], and show a positive correlation with Gleason score in human beings [60]. In the study herein, we identified a progressive increase in ERα immunoreactivity with age in both the nucleus and the cytoplasm of epithelial cells. On the other hand, Nintedanib treatment showed no direct action on ERα, suggesting that its protective effect does not occur via the estrogen pathway, at least not directly. Furthermore, considering both malignant and non-malignant growth, uncontrolled epithelial cell proliferation is responsible for the increase in glandular volume that occurs in the prostate adenocarcinoma of TRAMP mice. Recent studies have shown that the reduced proliferation of tumor cells after Nintedanib treatment depends on the angiogenic pathway [65-67], corroborating our results, which showed reduced cell numbers after treatment of both androgen-dependent (LNCaP) and androgen-independent (PC3) human PCa cell lines with Nintedanib.

Conclusion

Briefly, the results of the study indicate that PCa growth and progression in the TRAMP ventral prostate occur gradually as age advances, partly owing to an imbalance of pro- and anti-angiogenic factors in the prostatic microenvironment. Therefore, antiangiogenic therapy could be an effective strategy for targeting PCa. In our investigative pre-clinical studies in the TRAMP model of PCa, Nintedanib, an angiogenesis inhibitor, was able to delay neoplastic transformation in the prostatic microenvironment, reducing the tissue vascularization needed for tumorigenesis, which in turn delayed the progression of neoplastic lesions to more advanced stages of the disease. Furthermore, the anti-PCa effects of Nintedanib were confirmed against both androgen-dependent and androgen-independent human PCa cell lines in vitro.
Therefore, we can conclude that antiangiogenic therapy with Nintedanib is a promising strategy for both prevention of and intervention against PCa, since it is capable of decreasing neovascularization and AR immunoreactivity and of delaying tumor progression.
A CA+ Pair Adjacent to a Sheared GA or AA Pair Stabilizes Size-Symmetric RNA Internal Loops†

RNA internal loops are often important sites for folding and function. Residues in internal loops can have pKa values shifted close to neutral pH because of the local structural environment. A series of RNA internal loops were studied at different pH by UV absorbance versus temperature melting experiments and imino proton nuclear magnetic resonance (NMR). A stabilizing CA pair forms at pH 7 in the 5′CG/3′AA and 5′CA/3′AA nearest neighbors when the CA pair is the first noncanonical pair (loop-terminal pair) in 3 × 3 nucleotide and larger size-symmetric internal loops. These 5′CG/3′AA and 5′CA/3′AA nearest neighbors, with CA adjacent to a closing Watson-Crick pair, are further stabilized when the pH is lowered from 7 to 5.5. The results are consistent with a significantly larger fraction (from ∼20% at pH 7 to ∼90% at pH 5.5) of adenines being protonated at the N1 position to form stabilizing wobble CA+ pairs adjacent to a sheared GA or AA pair. The noncanonical pair adjacent to the GA pair in 5′CG/3′AA can either stabilize or destabilize the loop, consistent with the sequence-dependent thermodynamics of GA pairs. No significant pH-dependent stabilization is found for most of the other nearest neighbor combinations involving CA pairs (e.g., 5′CA/3′AG and 5′AG/3′CA), which is consistent with the formation of various nonwobble pairs observed in different local sequence contexts in crystal and NMR structures. A revised free-energy model, including stabilization by wobble CA+ pairs, is derived for predicting stabilities of medium-size RNA internal loops.

The N1 nitrogen of adenine and the N3 nitrogen of cytosine normally have pKa values of 3.5 and 4.2, respectively, but the pKa values (1-3) and thermodynamic contributions (4-7) of noncanonical pairs involving A and C in folded DNA and RNA are sequence- and context-dependent. General acid-base catalysis, involving protonation and deprotonation of nucleobases at physiological pH, has been found for ribozyme catalysis of cleavage and ligation of specific phosphodiester bonds (2,6). The formation of wobble CA+ (cis Watson-Crick/Watson-Crick) pairs (Figure 1b) causes local and global conformational changes in RNA (8-13). Understanding the sequence-dependent driving force of a pKa shift of nucleobases within noncanonical pairs is needed to provide insight into RNA folding and catalytic mechanisms (6,7).
It may also facilitate better understanding of the pH-dependent assembly of RNA viruses (14). The thermodynamics of CA pairs is also important for bioinformatic approaches that reveal structure-function relationships for RNA. For example, an approach for identifying which strand of complementary RNAs is most likely to rely on structure for function depends upon the different thermodynamic stabilities of CA and GU pairs (15).

Here, thermodynamic stabilities of a variety of RNA internal loops were measured in 1 M NaCl at pH 7 and 5.5. At pH 7, a nearest neighbor of 5′CG/3′AA or 5′CA/3′AA, with the CA adjacent to a closing canonical pair, can stabilize 3 × 3 nucleotide and larger size-symmetric (n1 = n2) internal loops on average by about 1 kcal/mol at 37°C. Such nearest neighbors with the CA adjacent to a closing Watson-Crick pair are further stabilized on average by 1 kcal/mol at 37°C when the pH is lowered from 7 to 5.5. Dependent upon the sequence, the noncanonical pair adjacent to the GA pair in 5′CG/3′AA can either stabilize or destabilize the loop.

[Footnotes: † This work was supported by National Institutes of Health (NIH) Grant GM22939. *To whom correspondence should be addressed. Telephone: (585) 275-3207. Fax: (585) 276-0205. E-mail: turner@chem.rochester.edu. Abbreviations: C_T, total concentration of all strands of oligonucleotides in solution; eu, entropy units in cal mol−1 K−1; n1 × n2, an internal loop with n1 nucleotides on one side and n2 nucleotides on the opposite side; P, purine riboside; RY, canonical pair of GC, AU, or GU, with R on the 5′ side and Y on the 3′ side of the internal loop; size-symmetric internal loops, n1 × n2 nucleotide internal loops with n1 = n2; T_M, melting temperature in kelvin; T_m, melting temperature in degrees Celsius; YR, canonical pair of CG, UA, or UG, with Y on the 5′ side and R on the 3′ side of the internal loop; ΔG°5′CR/3′AA bonus, a free-energy bonus derived to account for stabilization in the 5′CG/3′AA and/or 5′CA/3′AA nearest neighbors when the CA pair is the first noncanonical pair (loop-terminal pair) in 3 × 3 nucleotide and larger size-symmetric internal loops at pH 7, 1 M NaCl, and 37°C; ΔG°5′CR/3′AA,pH bonus, the free-energy bonus derived to account for stabilization from pH 7 to 5.5 in the 5′CG/3′AA and/or 5′CA/3′AA nearest neighbors when the CA pair is adjacent to a closing Watson-Crick pair in 3 × 3 nucleotide and larger size-symmetric internal loops (ΔG°5′CR/3′AA,pH bonus is also applied for loops with tandem CA pairs); ΔG°37,pH7,loop, the measured loop free energy at 37°C and pH 7; ΔΔG°37,pH, the measured loop free-energy difference between pH 5.5 and 7, unless otherwise noted (see the footnotes of the tables).]

MATERIALS AND METHODS

Oligonucleotide Synthesis and Purification. Oligonucleotides were synthesized on an Applied Biosystems 392 DNA/RNA synthesizer using the phosphoramidite method (17,18), then deprotected and purified as described previously (19,20). Controlled pore glass (CPG) supports and phosphoramidites were purchased from Proligo, AZCO, Glen Research, or ChemGenes. The mass of all oligonucleotides was verified by electrospray ionization mass spectrometry (ESI-MS). Purities were checked by reverse-phase high-performance liquid chromatography (HPLC) or analytical thin-layer chromatography (TLC), and all were greater than 95% pure.

UV Absorbance Versus Temperature Melting Experiments and Thermodynamics.
Concentrations of single-stranded oligonucleotides were approximated from the absorbance at 280 nm and 80°C, and extinction coefficients were predicted from those of dinucleotide monophosphates and nucleosides (21,22) with RNAcalc (http://www.meltwin.com) (23). The extinction coefficients were estimated by replacing purine riboside with adenosine. Although extinction coefficients differ upon functional group substitutions, individual nucleotides contribute only a small portion of the oligomer extinction and, thus, do not significantly affect thermodynamic measurements. UV melting buffer conditions were 1 M NaCl, 20 mM sodium cacodylate, and 0.5 mM sodium ethylenediaminetetraacetic acid (EDTA) at pH 7 and 5.5, or 1 M NaCl, 20 mM 4-(2-hydroxyethyl)-1-piperazinepropanesulfonic acid (HEPPS), and 0.5 mM sodium EDTA at pH 8. Cacodylate and HEPPS were used because their pKa values are essentially temperature-independent. Curves of absorbance at 280 nm versus temperature were acquired using a heating rate of 1°C/min with a Beckman Coulter DU640C spectrophotometer having a Peltier temperature controller cooled with a water bath.

Melting curves were first fit to a two-state model with MeltWin (http://www.meltwin.com) (23), assuming linear sloping baselines and temperature-independent ΔH° and ΔS° (23-25). Presumably, the pKa values do not change until the RNA duplex melts; i.e., pKa values exhibit two-state behavior (with zero-sloping baselines) coupled with the melting of the RNA structure (7). This is a reasonable assumption because nucleobase protonation/deprotonation is linked with the two-state folding/unfolding of the RNA duplex. The temperature at which half the strands are in duplex, T_M, at total strand concentration, C_T, was used to calculate thermodynamic parameters for duplex formation according to (26):

T_M^(-1) = (R/ΔH°) ln(C_T/a) + ΔS°/ΔH°   (1)

Here, R is the gas constant, 1.987 cal mol−1 K−1, and a is 1 for self-complementary duplexes and 4 for non-self-complementary duplexes. All of the ΔH° values from T_M^(-1) versus ln(C_T/a) plots (eq 1) and from the average of the fits of melting curves to two-state transitions agree within 15%, suggesting that the two-state model is a reasonable approximation for these transitions. The equation ΔG°37 = ΔH° − (310.15)ΔS° was used to calculate the free-energy change at 37°C (310.15 K).

Exchangeable Proton NMR Spectroscopy. All exchangeable proton spectra (27) were acquired on a Varian Inova 500 MHz (1H) spectrometer. One-dimensional imino proton spectra were acquired with an S pulse sequence (28) at temperatures ranging from −5 to 40°C in 80 mM NaCl, 10 mM sodium phosphate, and 0.5 mM sodium EDTA. SNOESY spectra (28) were recorded with a 150 ms mixing time from −5 to 10°C. The Felix (2000) software package (Molecular Simulations, Inc.) was used to process 2D spectra. Proton spectra were referenced to H2O or HDO at a known temperature-dependent chemical shift relative to 3-(trimethylsilyl)tetradeutero sodium propionate (TSP).

RESULTS

Thermodynamics at Different pH. An RNA secondary-structure prediction and analysis program, RNAstructure 4.2 (http://rna.urmc.rochester.edu/rnastructure.html) (29), was used to design sequences that form heteroduplexes without competing homoduplexes. Thermal melting studies of the individual single strands (16,19) (see Table S1 in the Supporting Information) confirmed the absence of competing homoduplexes.
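A minimal sketch of the eq 1 analysis: regressing 1/T_M on ln(C_T/a) yields ΔH° from the slope and ΔS° from the intercept, and ΔG°37 then follows from ΔG°37 = ΔH° − (310.15)ΔS°. The melting data below are invented for illustration; only the form of the analysis comes from the text.

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1

# Hypothetical melting data: total strand concentrations (M) and T_M values (K)
c_t = np.array([1e-5, 3e-5, 1e-4, 3e-4, 1e-3])
t_m = np.array([310.2, 312.9, 315.8, 318.5, 321.4])
a = 4  # non-self-complementary duplex (a = 1 if self-complementary)

# eq 1: 1/T_M = (R/dH) * ln(C_T/a) + dS/dH
slope, intercept = np.polyfit(np.log(c_t / a), 1.0 / t_m, 1)
dH = R / slope            # kcal/mol; negative for duplex formation
dS = dH * intercept       # kcal mol^-1 K^-1
dG37 = dH - 310.15 * dS   # kcal/mol at 37 degrees C (310.15 K)
print(f"dH = {dH:.1f} kcal/mol, dS = {1000 * dS:.0f} eu, dG37 = {dG37:.2f} kcal/mol")
```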
Measured thermodynamic parameters at 1 M NaCl for duplexes and internal loops (calculated by eq 3a shown below) are listed in Tables 1 and 2, respectively. For a given duplex or internal loop, the values from bottom to top are for pH values 5.5, 7, and 8, respectively. In Tables 1 and 2, most sequences are ordered from the most negative to the most positive values of

ΔΔG°37,pH = ΔG°37,pH5.5 − ΔG°37,pH7   (2)

For several duplexes with two loop-terminal CA pairs, ΔΔG°37,pH is half the value given by eq 2 (see the footnotes of Tables 1 and 2). Measured thermodynamic parameters for the formation of the internal loops (Table 2 and Table S2 in the Supporting Information) are calculated according to the following equation (30):

ΔG°37,loop = ΔG°37(duplex with loop) − ΔG°37(canonical stems)   (3a)

All of the thermodynamic parameters used in this calculation are derived from T_M^(-1) versus ln(C_T/a) plots (eq 1). The thermodynamics of canonical stems is calculated for pH 7 and assumed to be independent of the pH between 5.5 and 8, as shown for other stems (32,33). This is a reasonable assumption because the N1 of adenine and the N3 of cytosine normally have pKa values of 3.5 and 4.2, respectively, and the pKa values shift further down upon forming Watson-Crick pairs in canonical stems (1-3,7). In addition, most of the duplexes do not form wobble CA+ or CC+ pairs (panels b and i of Figure 1) and do not show a pH effect, consistent with the assumption of pH-independent thermodynamics in the absence of CA+ or CC+ pairs (Table 1 and Table S1 in the Supporting Information).

Thermodynamic Model Including Stabilization Effects of CA and CA+ Pairs in Medium-Size RNA Internal Loops. Measured free energies of RNA internal loops with 6-10 nucleotides, ΔG°37,loop, reported here and previously (16,19,20) for 1 M NaCl at pH 7 and 37°C, were combined for linear regression to an equation of the form

ΔG°37,loop = ΔG°loop initiation + m1ΔG°1 + m2ΔG°2 + ... + m8ΔG°8 + ΔG°5′CR/3′AA bonus   (4)

Here, n1 and n2 are the number of nucleotides on each side of the loop; m1-m8 can be 0, 1, or 2; and the definitions of the free-energy parameters are given in Table 3. Multiple linear regression on 168 measured loop free energies yielded the parameters in Table 3, with R² = 0.87 and a standard deviation of 0.55 kcal/mol, which averages less than 0.07 kcal/mol for each nucleotide contributing to ΔG°predicted at 37°C. The last term (ΔG°5′CR/3′AA bonus = −1.07 kcal/mol) in eq 4 represents the only difference from the equation derived previously (16). Without the last term, R² = 0.82 and the standard deviation is 0.65 kcal/mol. Aside from the last term, the parameters in Table 3 are essentially the same as previously derived (16). Note that the bonus and penalty parameters have negative and positive values, respectively.

[Table 1 footnotes: a For each duplex, the values from bottom to top are measured at pH 5.5, 7, and 8, respectively. Sequences are ordered from the most negative to the most positive values of ΔΔG°37,pH = ΔG°37,pH5.5 − ΔG°37,pH7, unless noted in footnote c. T_m values were calculated from eq 1 at C_T = 0.1 mM. Data in parentheses were measured in NMR buffer with 80 mM NaCl at pH 7. b Imino proton NMR spectra were measured (Figure 2). c ΔΔG°37,pH is per CA pair. d Loop sequence from the J4/5 loop of a group I intron (36). e Data at pH 7 are from ref 19. f Loop sequence from the substrate loop of a VS ribozyme (8,9). g Loop sequence derived from loop A of the hairpin ribozyme (3). h Loop sequence from a leadzyme (1,65-67). i Loop sequence from the Alu domain of human SRP RNA (71). j The pH-independent thermodynamics is consistent with the NMR structure without formation of the C+U pair (61). k The pH-independent thermodynamics is consistent with the NMR structure without formation of the UC+ pair (62).]
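The two calculations above can be sketched in a few lines: eq 3a reduces to subtracting the stem contribution from the duplex free energy, and eq 4 is an ordinary least-squares fit of the bonus/penalty parameters to the measured loop free energies. Everything below (the design matrix, the numbers, the three-parameter toy model) is an illustration, not the 168-loop data set from the study.

```python
import numpy as np

# eq 3a (schematic): loop free energy = duplex free energy minus the
# nearest-neighbor contribution of the canonical stems (all in kcal/mol).
def loop_dG37(dG37_duplex, dG37_stems):
    return dG37_duplex - dG37_stems

# eq 4 (schematic): each measured loop free energy modeled as an initiation
# term plus integer multiples (0, 1, or 2) of bonus/penalty parameters.
# X holds the multiplicities m_i for each loop; toy example, 4 loops x 3 parameters.
X = np.array([[1, 0, 1],
              [0, 1, 1],
              [2, 0, 0],
              [0, 2, 1]], dtype=float)
y = np.array([1.2, 0.4, 3.1, -0.7])  # loop dG37 minus the initiation term

params, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ params
print("fitted parameters (kcal/mol):", params.round(2))
print("sd of residuals (kcal/mol):", residuals.std(ddof=X.shape[1]).round(2))
```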
Size-symmetric internal loops with 5′CR/3′AA nearest neighbors, with the CA adjacent to a closing Watson-Crick pair, are further stabilized on average by 1.03 ± 0.32 kcal/mol when the pH is lowered from 7 to 5.5 (see Table S2 in the Supporting Information). Thus, a bonus, ΔG°5′CR/3′AA,pH bonus = −1.03 ± 0.32 kcal/mol, is used to account for the stabilization at pH 5.5 relative to pH 7 (Table 3). At this stage, we do not apply ΔG°5′CR/3′AA,pH bonus to size-symmetric internal loops in which the 5′CR/3′AA nearest neighbor has the CA adjacent to a closing UG or GU pair. Loops with tandem CA pairs are also further stabilized when the pH is lowered from 7 to 5.5 (see Table S2 in the Supporting Information). Dependent upon the sequence, the noncanonical pair adjacent to the GA pair in 5′YCG/3′RAA or 5′RCG/3′YAA can either stabilize or destabilize medium-size internal loops, consistent with the previous thermodynamic model (e.g., ΔG°2GA bonus and ΔG°5′GU/3′AN penalty (3 × 3 loop)) (16). No significant stabilization at pH 7 or 5.5 is found for most of the other nearest neighbor combinations involving CA pairs, which is consistent with wobble CA+ pairs (Figure 1b) not forming in various local sequence contexts in crystal and NMR structures (3,34-40). Thermodynamics of several duplexes were also measured at pH 8, and no significant differences were observed compared to pH 7.

Exchangeable Proton NMR Spectra at Different pH. Figure 2 shows 1D imino proton NMR spectra for selected sequences. The resonances observed are consistent with the expected canonical and sheared GA base pairs. Figure 3 shows 2D SNOESY spectra for GCA UCGU AGAA CAGG GGC CCG and GC PCCG CGAA AAGC GCCP CG. The spectra contain the typical cross-peak patterns expected for the imino protons in the duplexes, although in some cases a definitive assignment is not made. In Figure 3a, four of the five imino protons between 12 and 14 ppm exhibit cross-peak patterns typical of a Watson-Crick GC pair (two strong cross-peaks to resonances that show a very strong cross-peak to each other and to a likely H5 resonance, as expected for the C amino protons of a GC pair). The fifth imino proton shows a strong cross-peak to a narrow resonance, as expected for a U imino proton close to the AH2 in a Watson-Crick AU pair. There is a very weak cross-peak between the imino protons of two of the GC pairs, which are assigned to G1 and G19. Three other resonances between 9.5 and 11 ppm have chemical shifts and cross-peaks typical of G imino protons in sheared GA pairs, including those observed in a duplex with the same sequence of three GA pairs (20). In Figure 3b, the two imino proton resonances between 13.0 and 13.5 ppm show typical GC pair characteristics. A cross-peak between the equivalent imino protons in the similar sequence GC GCG CGAA AAGC GCG CG confirms that these protons are in adjacent pairs (see Figure S2 in the Supporting Information). The 1D imino proton spectra of several duplexes in Figure 2 reveal a similar peak near ∼10.6 ppm that increases in intensity at lower pH. These peaks are likely due to adenine amino protons in CA+ pairs, as observed in other cases of CA+ pairs (12). The broad peak in Figure 3b at ∼10.6 ppm, assigned to the A6 amino group, has a strong cross-peak to the other amino proton and a weak cross-peak to the G7 imino proton.
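For prediction purposes, the two bonus terms derived above enter the model additively. The helper below is a hypothetical sketch using the values quoted in the text (−1.07 and −1.03 kcal/mol); the function name, inputs and pH logic are illustrative, not code from the paper.

```python
DG_CR_AA_BONUS = -1.07     # kcal/mol at pH 7, per qualifying 5'CR/3'AA nearest neighbor
DG_CR_AA_PH_BONUS = -1.03  # additional kcal/mol on lowering the pH from 7 to 5.5

def predict_loop_dG37(base_dG37, n_ca_terminal, pH=7.0, wc_closing=True):
    """base_dG37: eq 4 prediction without the CA bonuses (hypothetical input).
    n_ca_terminal: number of qualifying loop-terminal CA pairs (0, 1, or 2)."""
    dG = base_dG37 + n_ca_terminal * DG_CR_AA_BONUS
    # pH bonus only when the CA is adjacent to a Watson-Crick closing pair
    if pH <= 5.5 and wc_closing:
        dG += n_ca_terminal * DG_CR_AA_PH_BONUS
    return dG

# e.g., a loop predicted at 1.5 kcal/mol before bonuses, one qualifying CA pair:
print(f"{predict_loop_dG37(1.5, 1):.2f}")          # 0.43 kcal/mol at pH 7
print(f"{predict_loop_dG37(1.5, 1, pH=5.5):.2f}")  # -0.60 kcal/mol at pH 5.5
```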
DISCUSSION

The pKa of the N1 nitrogen of adenine is about 3.5 and is shifted by less than 0.3 pK unit when adenine is incorporated into unpaired single strands (7). Small pKa shifts were also observed for other nucleobases when incorporated into unpaired single strands (7,41). When incorporated into double helices, however, the pKa of A shifts down in Watson-Crick pairs but up by as much as 3 pK units in some noncanonical pairs (1,7). For example, the pKa of the A in a GAC CUG sequence is ≤3.1, whereas the two A's in C G CGAG AG C G (loop sequence of a leadzyme) have pKa values of 6.5 (shown in bold) and 4.3, respectively (1). In addition to local context effects, pKa values may also be shifted by global context. For example, the local dielectric constant in the middle of large structures, such as the ribosome or viral RNA encapsidated in a virion, may differ from that in bulk water. Thus, it is important to know the possible effects of protonation on the thermodynamic stability of RNA structures. Dependent upon the sequence context and pH, a CA+ pair can form with A protonated at the N1 position (Figure 1b). The CA+ pair can form two hydrogen bonds and easily fit into an A-form helix; thus, it has the potential to stabilize a helix. Protonation will also affect base stacking and other interactions, however, so that the effects of protonation will be sequence-dependent.

[Table 2 footnotes: a Calculated from eq 3a and data in Table 1 unless noted otherwise. Experimental errors for ΔG°37, ΔH°, and ΔS° for the canonical stems are estimated as 4, 12, and 13.5%, respectively, according to ref 25. These errors were propagated to estimate errors in loop thermodynamics. For each duplex, the values from bottom to top are measured at pH 5.5, 7, and 8, respectively. Sequences are ordered from the most negative to the most positive values of ΔΔG°37,pH = ΔG°37,pH5.5 − ΔG°37,pH7, except for (GCCCGAGCG)2 and those noted in footnote c, where ΔΔG°37,pH is divided by 2. ΔG°predicted values are calculated according to eq 4. Loops smaller than 3 × 3 nucleotides are predicted according to refs 16, 29, and 31. b Imino proton NMR spectra were measured (Figure 2). c ΔG°5′CR/3′AA bonus is applied twice to predict the free energy for loop formation. d Loop sequence from the J4/5 loop of a group I intron (36). e Data at pH 7 are from ref 19. f Loop sequence from the substrate loop of a VS ribozyme (8,9). g Loop sequence derived from loop A of the hairpin ribozyme (8). h Loop sequence from a leadzyme (1,65-67). i Loop sequence from the Alu domain of human SRP RNA (71). j The pH-independent thermodynamics is consistent with the NMR structure without formation of the C+U pair (61). k The pH-independent thermodynamics is consistent with the NMR structure without formation of the UC+ pair (62).]

Single CA+ Pairs Stabilize Watson-Crick Stems. The CA+ wobble pair is isosteric with a UG wobble pair (panels b and e of Figure 1) and can fit in an A- or B-form structure without large backbone distortion (see Figure S1 in the Supporting Information) (12,42,43). Consistent with formation of a CA+ wobble pair, the measured loop free energy of GC GCG C A CG GC A C GCG CG (ΔG°37,pH7,loop = −0.56 kcal/mol for each CA pair) is about 1 kcal/mol more stable than that predicted by a previous thermodynamic model (29,44) that does not consider a stabilization effect for the CA pair (Table 2).
In addition, a stabilization of ΔΔG°37,pH = −1.59 kcal/mol was found per C G C A C G nearest neighbor combination at pH 5.5 compared to that at pH 7 (Table 2). The resonance at ∼10.6 ppm (Figure 2a) is consistent with a previous assignment to A amino protons in a CA+ pair (12). Thus, both the UV thermal melting and NMR results are consistent with the formation of the hydrogen bonds of a wobble CA+ pair (Figure 1b). A similar pH effect on thermodynamics was found for single CA mismatches in DNA (4,7). The A+ imino proton was not observed by NMR (4), probably because of broadening by solvent exchange. The pKa of the N1 of adenine in the DNA nearest neighbor combination C G C A C G is about 6.6, as measured from the pH profile of the chemical shifts of the N1 nitrogen (45). Detailed understanding of the stabilization effect of CA or CA+ wobble pairs within different Watson-Crick stems will provide insight into RNA structure and function. For example, a single CA mismatch has been shown to be preferred for efficient A to I editing by adenosine deaminases acting on RNA (ADAR) (46). Understanding the sequence-dependent thermodynamics of CA (44) and CI mismatches and the pH effect might facilitate better understanding of the editing specificity and mechanism (46).

[Table 3 (caption): Free-Energy Parameters (kcal/mol) at 37°C for Predicting 3 × 3 Nucleotide and Larger RNA Internal Loops. Footnote a: These parameters are used to predict the free energy of 3 × 3 nucleotide and larger internal loops in 1 M NaCl according to eq 4. Except for the new parameters, ΔG°5′CR/3′AA bonus and ΔG°5′CR/3′AA,pH bonus, the parameters derived here are similar to those in ref 16. YR is a canonical pair of CG, UA, or UG, with the pyrimidine Y on the 5′ side of the internal loop. In general, Y and R are defined, respectively, as U or C and A or G in the UG, UA, or CG pair.]

5′CR/3′AA Nearest Neighbor with CA Adjacent to a Closing Canonical Pair Stabilizes 3 × 3 and Larger Size-Symmetric Internal Loops at pH 7. When the CA is the first noncanonical (loop-terminal) pair, most of the size-symmetric internal loops with nearest neighbors of 5′CG/3′AA and/or 5′CA/3′AA are more stable than predicted by a recently proposed thermodynamic model (16,19). A bonus parameter, ΔG°5′CR/3′AA bonus = −1.07 ± 0.13 kcal/mol at pH 7, is derived here for such nearest neighbor combinations with a loop-terminal CA pair followed by a GA or AA pair (Table 3). These nearest neighbor combinations occur in several internal loops within catalytic ribozymes, e.g., the VS ribozyme substrate loop (8,9). The thermodynamic stabilization is consistent with the geometric compatibility of 5′CG/3′AA and 5′CA/3′AA nearest neighbors if the CA pair is protonated and the purine-purine pair is sheared (panels f and g of Figure 1) (3,8,9,36). Solution NMR reveals a protonated wobble CA+ pair adjacent to a sheared GA pair (C G CG AA, a sequence in a hairpin ribozyme and the VS ribozyme) (see Figure 4 and Figure S1a in the Supporting Information), and the pKa of the A (in bold) is about 6.3, according to the pH profile of the chemical shifts of the C2 carbon of adenine (3,8,9). Consistently, the amino protons of A+ (shown in bold) for the symmetric loop C G CGAA AAGC G C resonate at 10.6 ppm at neutral and lower pH (panels e and f of Figure 2 and Figure 3b). In addition, a wobble CA pair forms adjacent to a sheared AA pair (shown in bold) within the J4/5 loop, G C CAA AAA C G, in the crystal structure of a group I intron (see Figure S1c in the Supporting Information) (36).
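Given a pKa near 6.3, the fraction of adenine N1 protonated at a given pH follows directly from the Henderson-Hasselbalch relation. The one-liner below is a quick check; the small difference from the ~89% quoted in the next paragraph presumably reflects rounding of the pKa.

```python
def fraction_protonated(pH, pKa):
    # Henderson-Hasselbalch: fraction with a proton on N1 = 1 / (1 + 10**(pH - pKa))
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (5.5, 7.0):
    print(pH, f"{100 * fraction_protonated(pH, pKa=6.3):.0f}%")
# -> 86% at pH 5.5 and 17% at pH 7, in line with the ~89% and ~17% quoted in the text
```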
The noncanonical pair adjacent to a loop-terminal GA pair was previously found to either stabilize (e.g., ΔG°2GA bonus) or destabilize (e.g., ΔG°5′GU/3′AN penalty (3 × 3 loop)) the loop (16,19). Here, the noncanonical pair adjacent to the GA pair in the nearest neighbor combinations 5′YCG/3′RAA and 5′RCG/3′YAA was also found to be stabilizing or destabilizing, although the CA, but not the GA, pair is a loop-terminal pair. Thus, when the parameters in Table 3 were derived, the CA pair in these nearest neighbor combinations was treated in a way similar to a canonical wobble UG pair; i.e., the thermodynamic effect of the GA pair was modeled as that of a first noncanonical (loop-terminal) pair. For example, a penalty of ΔG°5′GU/3′AN penalty (3 × 3 loop) = 0.74 kcal/mol was applied for GCA UCGU AGAA CUGC GGC CCG (ΔG°37,pH7,loop = 2.13 kcal/mol), although this parameter was proposed only for 3 × 3 nucleotide internal loops (16,19). This is suggested by NMR data for this loop, which show the formation of a stabilizing CA+ wobble pair, isosteric to a canonical wobble UG pair and adjacent to a sheared GA pair, even at nearly neutral pH (3). Consistent with the penalty of ΔG°5′GU/3′AN penalty (3 × 3 loop), the U is flipped out in an NMR structure of the A U AGAA CUGC G C loop, which is from a hairpin ribozyme (3). Similarly, a bonus of ΔG°2GA bonus = −1.16 kcal/mol was applied where appropriate.

[Figure 2 caption: 1D imino proton NMR spectra in 80 mM NaCl, 10 mM sodium phosphate, and 0.5 mM sodium EDTA at 0°C unless otherwise noted, at different pH values, with the top spectrum of each RNA sequence acquired near pH 7 and the bottom spectrum at lower pH. Assignments are preliminary and largely based on assignments for similar sequences. Values between sequence and spectra are ΔG°37,loop in kcal/mol measured in 1 M NaCl at pH 5.5 (bottom) and pH 7 (top). Resonances labeled with arrows are consistent with a previous assignment to the adenine amino protons in a CA+ pair (12). No resonances were observed between 14 and 16 ppm. (a) C_T = 0.5 mM, pH 6.9 and 5.4; (b) C_T = 0.3 mM, pH 6.9 and 5.0; (c) C_T = 1.8 mM, pH 6.8 and 5.1; (d) C_T = 0.5 mM, pH 6.9 and 5.3 (see Figure 3a for 2D spectrum); (e) C_T = 0.5 mM, pH 6.9 and 5.9; (f) C_T = 1.5 mM, pH 6.6 and 5.1 (5°C; see Figure 3b for the 2D spectrum).]

If wobble CA+ pairs are responsible for the stabilization observed for 5′CR/3′AA nearest neighbors, then lowering the pH should further enhance stability because a larger fraction of A is protonated for the formation of CA+ pairs. About 89 and 17% of adenine N1 residues are protonated at pH 5.5 and 7.0, respectively, with a pKa of 6.3, as shown for the A (in bold) in C G CG AA (3,8,9). We observed an enhanced stabilization of 1.03 ± 0.32 kcal/mol on average per 5′CG/3′AA or 5′CA/3′AA nearest neighbor with the CA adjacent to a Watson-Crick pair when lowering the pH from 7 to 5.5, e.g., in the VS ribozyme substrate loop (8,9) (Table 3). The sequence dependence is likely more complicated, however; for example, the thermodynamic stabilities of related loops with different closing pairs differ (Table 2 and Table S2 in the Supporting Information). The pKa of the A N1 in the CA pair of C G CG AA (a sequence found in a hairpin ribozyme and the VS ribozyme) is about 6.3, with a wobble CA pair adjacent to a sheared GA pair (3,8,9). Presumably, the same noncanonical base pairs form in G C CG AA, although the pKa of the A N1 in that CA pair is not known. Further detailed experimental (e.g., measurement of pKa) and computational (47,48) studies are needed.

The lack of extra stability when the A of an AC pair is 3′ of a Watson-Crick pair is probably general.
For example, on the basis of NMR spectra of a 7 × 9 nucleotide loop B domain of a hairpin ribozyme, the apparent pKa of the N1 position of the bold A in a C G AG CA segment is 5.4, and, at pH 6.8, the AC forms a single hydrogen-bond A N1-C amino pair (Figure 1c). The GA is a sheared pair (37). Here, the single hydrogen-bond (A N1-C amino) AC pair has A and C shifted toward the major and minor grooves, respectively, which is opposite to a wobble AC pair. A sheared GA pair has G and A shifted toward the major and minor grooves, respectively, which favors base stacking between the single hydrogen-bond (A N1-C amino) AC and sheared GA pairs (see Figure 4d and Figure S1b in the Supporting Information). The enhanced stability of a CA pair with the C on the 3′ side of a Watson-Crick pair, relative to one with the A on the 3′ side of a Watson-Crick pair, may be related to stacking on the adjacent helix. As with the G of a UG pair (49), the A of a CA+ pair stacks on its 3′ side by shifting toward the minor groove. Thus, having the Watson-Crick pair 3′ of the A provides more favorable stacking by increasing the base overlap (see Figure 4a and Figure S1 in the Supporting Information). Interestingly, the U in G C AU C G C (a loop sequence in the U6 RNA intramolecular stem loop), which is stacked within the helix at pH 7.0, is flipped out at pH 5.7 to favor a stacking interaction between the wobble A+C and Watson-Crick GC pairs flanking the U bulge (10). Evidently, the stabilization effect of CA and/or CA+ pairs and the pKa of the A in a CA pair are sequence-context-dependent.

[Figure 3 caption (SNOESY spectra): (a) GCA UCGU AGAA CAGG GGC CCG (see Figure 2d for 1D spectrum). There is a very weak cross-peak of G1H1-G19H1 (not shown). The imino protons of G5, G14, and G15 have chemical shifts and cross-peaks typical of consecutive sheared GA pairs (16,20,72). The G15 amino protons resonate at 9.2 and 5.5 ppm, suggesting the formation of sheared GA pairs with G5 and G15 in the C2′-endo sugar pucker (73,74). There is no indication of the formation of an A+C pair in this loop. (b) GC PCCG CGAA AAGC GCCP CG (C_T = 1.5 mM, pH 5.1, −5°C; see Figure 2f for 1D spectrum). The cross-peak of G1H1-G7H1 is unresolved because of overlap but is observed in GC GCG CGAA AAGC GCG CG (see Figure S2 in the Supporting Information and Figure 2e for 1D spectrum). The broad peak at ∼10.6 ppm is likely due to the amino protons of A+6, which show a strong cross-peak to the other amino proton and a weak cross-peak to the G7 imino proton. Adenine amino protons with similar chemical shifts have been observed in other cases of CA+ pairs (12). The G4 amino protons resonate at 8.8 and 6.2 ppm, suggesting the formation of a sheared GA pair with G4 in the C2′-endo sugar pucker (73,74).]

Loop stabilities are well predicted at pH 7 for loops with a CA adjacent to a UG pair. Thus, it is unlikely that in these loops a wobble CA+ pair forms adjacent to a wobble UG pair with a pKa significantly above 7 for the adenine N1. Note that there is also no significant thermodynamic difference between pH 8 and 7 for the loop GGU PCCG CAA AAG GGCU CCG (ΔG°37,pH8,loop = 0.90 kcal/mol). We applied the bonus parameter ΔG°5′CR/3′AA bonus for GGU PCCG CAA AAG GGCU CCG at pH 7, although there is no further stabilization at pH 5.5. The pH-dependent shifting of the imino proton resonances from the UG pair suggests a pH-dependent conformational change within the loop, however (Figure 2b). This may be another example of the idiosyncratic behavior of UG pairs.
For example, thermodynamic and NMR studies suggest that adjacent UG pairs do not always form canonical wobble pairs (52,53).

Context-Dependent pH Effect of CC Pairs. CC can form a cis Watson-Crick/Watson-Crick CC+ pair (Figure 1i) (5). This is consistent with crystal structures of U G UC CU G U that reveal cis Watson-Crick/Watson-Crick UC pairs with a water-mediated hydrogen bond between the U imino proton and C N3, but without protonated nucleobases (Figure 1j) (56,57). Quantum chemical calculations show that a water-mediated UC pair is energetically preferred over a UC pair with two direct hydrogen bonds (U O4 to C amino and U H3 to C N3) (Figure 1k). This is also consistent with NMR structures (61,62) that contain no protonated C+U and UC+ pairs (Figure 1l), respectively. A Watson-Crick-type UC pair with two direct hydrogen bonds (U O4 to C amino and U H3 to C N3) (Figure 1k) has also been considered. The leadzyme internal loop (1,65-67) is 0.73 kcal/mol more stable at pH 5.5 than at pH 7. This stabilization is consistent with the formation of a wobble CA+ pair in the NMR structure without multivalent metal ions (1,65), but not with the crystal structure with multivalent ions (66) or a molecular modeling study of the active conformation (67). The molecular model of the active conformation is consistent with kinetic studies in which different loop G's are forced to be in the syn glycosidic conformation (68). In helix 58 of the large ribosomal subunit of Haloarcula marismortui (40), trans Hoogsteen/sugar edge AC (Figure 1d) and trans Hoogsteen/Hoogsteen AA pairs (Figure 1h) (43) form in G C CAUA AAG G C. The A in bold is in a syn glycosidic conformation, and the U is bulged out. Apparently, this conformation is more stable in this 3 × 4 internal loop than a wobble CA+ pair adjacent to a sheared AA pair. Further thermodynamic and structural studies are needed to see whether the loop structure is preformed or induced by tertiary interactions and protein binding in the ribosome.

CONCLUSION

The pKa of the A N1 nitrogen in a CA pair depends upon local sequence context, as evidenced by thermodynamic and structural results shown here and previously (1,3,8,9,12,34-40,42). In a nearest neighbor of 5′CG/3′AA or 5′CA/3′AA with the CA adjacent to a closing canonical pair (including wobble UG pairs), the formation of a wobble CA+ pair adjacent to a sheared GA or AA pair stabilizes 3 × 3 nucleotide and larger size-symmetric internal loops on average by about 1 kcal/mol at 37°C, pH 7, and 1 M NaCl. Such nearest neighbors with the CA adjacent to a closing Watson-Crick pair are further stabilized on average by 1 kcal/mol at 37°C when the pH is lowered from 7 to 5.5. Other stabilizing nearest neighbor combinations can exist to shift pKa's. The pKa may also depend upon global context; e.g., it could be shifted in the middle of a large structure, such as the ribosome. The results presented here, along with published NMR and crystal structures, provide benchmarks to test free-energy and structural calculations by computational chemists.

ACKNOWLEDGMENT

G.C. thanks Prof. C. C. Kao for discussions on viral RNA encapsidation.

SUPPORTING INFORMATION AVAILABLE

Tables of single-strand UV melting results and linear regression data, figures of base stacking and base pairing involving CA pairs, and an exchangeable proton SNOESY spectrum. This material is available free of charge via the Internet at http://pubs.acs.org.
Chronic gamma-hydroxybutyric-acid use followed by gamma-hydroxybutyric-acid withdrawal mimics schizophrenia: a case report

Introduction: Gamma-hydroxybutyric-acid is a potentially addictive drug known for its use in "rave" parties. Users have described heightened sexual drive, sensuality and emotional warmth. Its euphoric, sedative and anxiolytic-like properties are also sought by frequent users. Abrupt gamma-hydroxybutyric-acid withdrawal can rapidly cause tremor, autonomic dysfunction and anxiety, and may later culminate in severe confusion, delirium, auditory, visual or tactile hallucinations, or even death.

Case presentation: A 23-year-old woman presented to the emergency room with paranoid delusions and auditory hallucinations. Her psychiatric history included two brief psychotic episodes induced by amphetamines and marijuana. In the last six months, she had demonstrated bizarre behaviour, had been more isolated and apathetic, and had been unable to take care of daily chores. The patient reported occasional use of gamma-hydroxybutyric-acid, but her initial accounts of drug use were contradictory. Since the toxicology urine screen was negative, a schizophrenic disorder was initially suspected and an antipsychotic medication was prescribed. A few hours after her admission, signs of autonomic dysfunction (tachycardia and hypertension) appeared, lasting 24 hours. Severe agitation and confusion were also present. Restraints and a cumulative dose of 7 mg lorazepam were used to stabilize her. The confusion resolved in less than 72 hours. The patient then revealed that she had been using gamma-hydroxybutyric-acid daily for the last six months as self-medication to treat insomnia and anxiety, before stopping it abruptly 24 hours prior to her visit.

Conclusions: In our opinion, this original case illustrates the importance of considering gamma-hydroxybutyric-acid withdrawal delirium in the differential diagnosis of a first-break psychosis. In this case, the effects of chronic GHB use were incorrectly identified as the negative symptoms of a schizophrenia prodrome. Likewise, the severe gamma-hydroxybutyric-acid withdrawal syndrome was initially mistaken for acute positive symptoms of schizophrenia, until autonomic dysfunction manifested itself more clearly.

Introduction

Gamma-hydroxybutyric-acid (GHB) is a potentially addictive drug used in "rave" parties to heighten sexual drive, sensuality and emotional warmth. Its euphoric, sedative and anxiolytic-like properties, and its ability to relieve inhibitions, are also sought by frequent users [1]. Most of the desired effects are mediated by direct stimulation of GABA-B receptors and autoreceptors in the brain, and also by its metabolism to GABA in presynaptic neurons. This pathway is also that of GHB intoxication. The drug's addictive properties are mediated by decreased GABA release in the ventral tegmental area, which causes an inhibition of dopaminergic neuron firing in the nucleus accumbens and frontal cortex [1]. Regular administration of GHB causes downregulation of these GABA receptors, leading to drug dependence as a means to maintain homeostasis. Abrupt GHB withdrawal will cause an imbalance between under-stimulated GABA neurons and stimulatory input in the ventral tegmental area. This will cause withdrawal symptoms (tremor, autonomic dysfunction and anxiety) to appear as soon as 1 to 24 hours after the last dose, resulting in drug craving.
Although mild at presentation, symptoms may culminate within one to seven days in severe confusion, delirium, auditory, visual or tactile hallucinations, or even death [2].

Case presentation

We report the case of a 23-year-old French Canadian woman who presented to the Emergency Room in summer 2006 with paranoid delusions and auditory hallucinations. Her previous history included two brief psychotic episodes induced by substances (amphetamines and marijuana). During these previous psychotic episodes, she exhibited ideas of reference and visual hallucinations, which had responded well to olanzapine, an atypical antipsychotic. In the last six months, she had shown bizarre behaviour, had been more isolated and apathetic, and had been unable to participate in daily chores. Her motivation had declined. She had gradually withdrawn from significant social relationships and stopped working. She described feeling anxious on a regular basis for no apparent reason. This was confirmed by family members accompanying her. The patient reported occasional use of GHB, but as her initial accounts of drug use were contradictory, the product used could not be positively identified and the frequency of use remained unknown at the time.

Upon admission, the patient appeared perplexed and entertained the unsubstantiated fear that someone would try to kill her. She was under the impression that her whole entourage was speaking ill of her behind her back. She was whispering for fear of being heard, and attacked. She reported not having slept in days. She also mentioned visual hallucinations, claiming to have seen tigers in her apartment. The toxicology urine screen was negative. No evidence of depressive or manic symptoms was found. Patient history and physical examination indicated no general medical condition. The initial suspicion of a schizophrenic disorder was in accordance with the terms of this diagnosis as described in the DSM-IV: the patient presented an acute psychotic break (hallucinations and delusions, part of Criterion A of the DSM-IV diagnosis), preceded by a prodrome of negative symptoms (prolonged apathy and lack of motivation, part of Criterion A), lasting six months (Criterion C), with a significant decline in social and occupational functioning (Criterion B). Mood disorders were excluded (Criterion D). Substance-induced and general medical condition-induced psychoses were excluded as unlikely (Criterion E). The patient was thus prescribed olanzapine 10 mg at bedtime.

A few hours after her admission, she started presenting labile vital signs (Figures 1 and 2): tachycardia (range 160-120 beats/min) and hypertension (range 183/95-139/88 mmHg) that lasted for 24 hours. The patient's temperature was not taken. She also suffered from severe agitation and anxiety, showed disorganized speech and behaviour, had insomnia, and possibly visual hallucinations. Restraints and confinement had to be used to protect the patient from injury, and a cumulative dose of 7 mg lorazepam p.o. over 24 hours was necessary to stabilize her. At this point, we suspected a withdrawal delirium, either from ethanol or from GHB, because of the autonomic instability. Olanzapine, of which she had received a single dose, was discontinued. Apart from lorazepam, she did not require any further drug treatment. Her confusion resolved in less than 72 hours. She suffered from complete amnesia of the episode. At this point, the patient revealed that she had been using GHB daily for the last six months as self-medication to treat insomnia and anxiety.
She denied using any other drug or medication during that time, including alcohol. She had developed a dependence on GHB, as evidenced by increased anxiety, tremor and insomnia whenever she skipped a dose, and also by the increasing amount of time spent seeking and using GHB. She also admitted to often being very sedated while using GHB, a frequent state which resulted in a general lack of motivation and interest. One day prior to her admission, the patient had decided to put an end to her GHB use, an abrupt interruption which provoked the onset of intense anxiety and paranoid delusions within 18 hours, followed by disorganisation and auditory and visual hallucinations within 24 hours.

Conclusions

In our opinion, this original case illustrates the importance of considering GHB withdrawal delirium in the differential diagnosis of a first-break psychosis. In this case, chronic GHB use was incorrectly perceived as a schizophrenic prodrome characterized mainly by negative symptoms (Table 1). Likewise, a severe GHB withdrawal syndrome was initially mistaken for acute positive symptoms of schizophrenia, i.e., hallucinations and delusions, until autonomic dysfunction manifested itself more clearly (Table 2). In a literature search, no other reports of this phenomenon were found. Since GHB is not usually part of routine toxicology screens, it is important to consider GHB use and GHB withdrawal delirium in the differential diagnosis of psychosis, especially since autonomic dysfunction is often milder in GHB withdrawal than in ethanol withdrawal [3], and since the treatment is different. Obtaining a detailed history from the patient and family members and keeping regular track of the vital signs may help make the proper diagnosis. The treatment of a GHB withdrawal delirium calls for an approach similar to that of ethanol or benzodiazepine withdrawal, mainly admission, supportive treatment, and a tapering regimen of benzodiazepines. Unless a delirium is present, antipsychotic drugs are not indicated in GHB withdrawal [3], whereas in schizophrenia they are the cornerstone of the treatment.
Native-Invasive Plants vs. Halophytes in Mediterranean Salt Marshes: Stress Tolerance Mechanisms in Two Related Species

Dittrichia viscosa is a Mediterranean ruderal species that over the last decades has expanded into new habitats, including coastal salt marshes, ecosystems that are per se fragile and threatened by human activities. To assess the potential risk that this native-invasive species represents for the genuine salt marsh vegetation, we compared its distribution with that of Inula crithmoides, a taxonomically related halophyte, in three salt marshes located in "La Albufera" Natural Park, near the city of Valencia (East Spain). The presence of D. viscosa was restricted to areas of low and moderate salinity, while I. crithmoides was also present in the most saline zones of the salt marshes. Analyses of the responses of the two species to salt and water stress treatments in controlled experiments revealed that both activate the same physiological stress tolerance mechanisms, based essentially on the transport of toxic ions to the leaves—where they are presumably compartmentalized in vacuoles—and the accumulation of specific osmolytes for osmotic adjustment. The two species differ in the efficiency of those mechanisms: salt-induced increases in Na+ and Cl− contents were higher in I. crithmoides than in D. viscosa, and the osmolytes (especially glycine betaine, but also arabinose, fructose and glucose) accumulated at higher levels in the former species. This explains the (slightly) higher stress tolerance of I. crithmoides, as compared to D. viscosa, established from growth inhibition measurements and their distribution in nature. The possible activation of K+ transport to the leaves under high salinity conditions may also contribute to salt tolerance in I. crithmoides. Oxidative stress level—estimated from malondialdehyde accumulation—was higher in the less tolerant D. viscosa, which consequently activated antioxidant responses as a defense mechanism against stress; these responses were weaker or absent in the more tolerant I. crithmoides. Based on these results, we concluded that although D. viscosa cannot directly compete with true halophytes in highly saline environments, it is nevertheless quite stress tolerant and therefore represents a threat for the vegetation located on the salt marshes borders, where several endemic and threatened species are found in the area of study.
INTRODUCTION

Salt marshes are coastal ecosystems developed in temperate zones, occupied mainly by halophytic vegetation that can be exposed, in some cases, to tidal flooding. These are specialized habitats, characterized by high primary productivity and species diversity, which support a wide variety of native flora and fauna and constitute important areas for wintering aquatic birds (Simas et al., 2001). These ecosystems are economically important since they can be used as nursery grounds for several fish and crustacean fisheries (Dijkema et al., 1990; Reed, 1990). Salt marshes are among the most abundant, fertile, and accessible habitats on earth, and are therefore highly threatened by human activities (industrial pollution, urbanization, agriculture, etc.), which have damaged many existing salt marshes around the world (Bromberg Gedan et al., 2009). In the SE Iberian Peninsula, the situation of these habitats is particularly critical. Many of them were destroyed in the past, due to their transformation into cropland or to their desiccation for fear of malaria. In the Valencia region (Spain), where this study has been carried out, the coastline supports virtually all farming and much of the industrial activity, and shelters large population centers. This, along with huge pressure from tourism, has produced a high impact on natural ecosystems, including salt marshes. The situation has changed in recent years, as the ecological value of such habitats has started to be recognized. All coastal lagoons in the area have been proposed as SCIs (Sites of Community Interest) to join the Natura 2000 network as SACs (Special Areas of Conservation) and SPAs (Special Protection Areas for birds), and have been catalogued as priority habitats in the region of Valencia (Laguna, 2003). In particular, habitats 1150 (Coastal lagoons) and 1510 (Mediterranean salt steppes, Limonietalia), intrinsically related to the salt marshes, are considered priority habitats by European legislation (Council Directive, 1992). In addition, many Plant Micro-Reserves have been declared in such areas, some devoted to the protection of halophytic vegetation (Fos et al., 2014).
These ecosystems house a characteristic flora of halophytes, which have developed different strategies to adapt to salt stress and include some taxa of special interest, being endemic or threatened; the species distributions are shaped by physical and chemical gradients in the environment, but also by biological interactions (Adams, 1963; Lefor et al., 1987). The high specialization of the organisms living in salt marshes contributes to the vulnerability of these habitats. Among numerous threats, the pressure of invasive plants has strong effects in such fragmented and linear ecosystems (Petillón et al., 2005, and references therein). The term "invasive" is usually employed for alien (synonymous with exotic, non-indigenous, introduced, or newcomer) species which have the ability to spread aggressively outside their natural range and are potentially dangerous for the environment, the economy, or human health (Callaway and Aschehoug, 2000; Byers et al., 2002; Hejda et al., 2009). The term invasive is also accepted in a broader sense, including indigenous or native species that occupy new ecosystems, which they alter (Mack, 1985; Gouyon, 1990; Le Floch et al., 1990; Carey et al., 2012; Muñoz-Vallés and Cambrollé, 2015). When invasive species show increased abundance, density or geographic extent, this may be considered potentially problematic (Richardson and Pyšek, 2004). The mechanism of invasion is based on the opening of novel niches or the extension of pre-existing ones (Shea and Chesson, 2002; Valéry et al., 2008), and invasive species may eliminate autochthonous plants from their natural habitats, due to their high competitiveness. Weedy native species sometimes have major impacts and are the subject of intensive and expensive management efforts (Williamson, 1998). In this regard, the role of native species as a potential threat for biodiversity conservation is an emerging problem, which is still under discussion. These autochthonous species, which induce notable changes in the environment by undergoing rapid expansion, are known as "native-invasive" species. In SW Spain, for example, native Retama monosperma has become invasive in sand dunes, causing significant damage to the natural ecosystem, even greater than that caused by alien invasive species under similar climatic conditions (Muñoz Vallés et al., 2011). The genus Dittrichia Greuter has a wide Mediterranean distribution and is very closely related to the genus Inula L., differing in morphological characteristics of the achenes and pappus hairs (Brullo and de Marco, 2000). Dittrichia viscosa (L.) Greuter, formerly ascribed to the genus Inula as I. viscosa (L.) Aiton, is a perennial plant 40-130 cm high, very common in the western Mediterranean but also penetrating into its eastern part. Its primary habitats are gravel riverbeds, mountain screes, and sandy and rocky coasts, but it is found mostly in secondary habitats such as roadsides and abandoned fields, and sometimes in croplands as a weed (Brullo and de Marco, 2000). The species shows a remarkable pioneer character, and in the last decades it has largely expanded its range in the circum-Mediterranean countries, possibly due to increased human disturbance (Wacquant, 1990; Mateo et al., 2013).
D. viscosa has very high coverage percentages in riparian plant communities in SE Spain that have undergone major alterations and have been seriously degraded as a result of anthropic effects (Salinas et al., 2000); in a recent study, it has also been reported as the most frequent species on roadsides in Portugal (Simões et al., 2013). Its capability to colonize new habitats and threaten biodiversity has been well documented (Wacquant, 1990) and has been related to characteristics such as its phenotypic plasticity (Wacquant and Bouab, 1983), high stress tolerance (Curadi et al., 2005), and resistance to chemical pollution (Murciego et al., 2007; Fernández et al., 2013), as well as to its allelopathic effects (Omezzine et al., 2011). In the last 50 years, D. viscosa has become an invader in the NW Mediterranean region, since it has increased its ecological range under disturbance pressure and is colonizing new habitats (Wacquant, 1990; Boonne et al., 1992; Wacquant and Baus Picard, 1992; Mateo et al., 2013). The species' recent expansion in the Iberian Peninsula has also been correlated with temperature increases due to accelerated global warming (Sobrino Vesperinas et al., 2001). D. viscosa has been catalogued as an invasive alien plant in the region of Asturias (N Spain), where it invades especially sensitive environments of high ecological value (Castaño, 2007). It has occasionally been reported in salt marshes in the Mediterranean region (Molinier and Tallon, 1970; Llorens, 1986; Korakis and Gerasimidis, 2006), and it has become very frequent over recent decades in the area under study; nowadays it is present in most of the 10 × 10 km UTM squares (Mateo et al., 2013). It has been proposed that D. viscosa could be used as a secondary plant in biological pest control, a method to control pests without the application of potentially dangerous chemical pesticides (Parolin et al., 2014). This and other potential biotechnological applications of this species, such as its promising use for phytoremediation of mining-affected semiarid soils, since it is an efficient bioaccumulator of trace metals (Barbafieri et al., 2011; Jimenez et al., 2011; Pérez et al., 2012), could extend its distribution as a consequence of human uses. The main aim of this work was to evaluate the potential risk that D. viscosa represents for Mediterranean salt marsh vegetation. To assess the degree of threat that D. viscosa may pose to the genuine halophytic vegetation in SE Spain, we first analyzed its distribution in three salt marshes located in "La Albufera" Natural Park, Valencia, in relation to soil electrical conductivity and moisture, and in comparison with a closely related, typical halophyte, Inula crithmoides [syn. Limbarda crithmoides (L.) Dumort]. This latter species is a small shrub 30-100 cm high, frequent on cliffs and salt marshes, with a Mediterranean-Atlantic distribution, reaching its northern limit in Scotland. The field work was complemented by studying the responses of the two species to salt and water stress under controlled experimental conditions in a greenhouse, to establish the relative stress resistance of the two species and the underlying tolerance mechanisms. For this, some specific biochemical stress markers associated with basic, conserved response pathways were determined in young plants of D. viscosa and I. crithmoides subjected to salt and water stress treatments.
We quantified, specifically: (1) growth parameters, which clearly indicate the relative stress tolerance of the plants; (2) photosynthetic pigments (chlorophylls a and b and total carotenoids); (3) the major putative osmolytes (proline, glycine betaine, and soluble carbohydrates); (4) monovalent ions (Na+, K+, and Cl−); (5) levels of malondialdehyde (MDA), as an indicator of oxidative stress; (6) total antioxidant activity, by the DPPH radical scavenging assay, and total phenolics and flavonoids as examples of non-enzymatic antioxidants; and (7) antioxidant enzyme activities (superoxide dismutase, catalase, and glutathione reductase).

MATERIALS AND METHODS

Field Study

The presence of D. viscosa and I. crithmoides plants in relation to soil electrical conductivity (hereafter EC) and soil moisture was registered in three salt marshes located in "La Albufera" Natural Park (39°47′28″ N, 1°04′25″ W) near Valencia, in eastern Spain. La Albufera is the biggest lake in the Iberian Peninsula, formed by the gradual closure of an ancient marine gulf. The salt marshes appear in small inter-dune depressions located in the narrow land strip between the sea and the lake (Figure 1). Mediterranean salt marsh depressions are usually endorheic. Sudden flooding caused by heavy rains coming in from the sea is usually followed by long droughts and periods of high concentration of salts (Quintana et al., 1998). The climate is typically Mediterranean, with a summer peak of temperature and absence of rainfall, resulting in periods of severe summer stress (Rivas-Martínez and Rivas-Sáenz, 1996-2009). The presence of the two species was evaluated along a linear transect in the first salt marsh, since it is the most extensive one and supports a high density of D. viscosa specimens. In the other two, considerably smaller salt marshes, soil EC was measured at each point where one of the two species appeared. Soil EC and moisture (%) were analyzed with a WET-2 Sensor (Delta-T Devices, UK), which allows direct in situ, non-destructive measurements in the root zones of the plants. The coordinates of these points were established with a GPS (Garmin GPSMAP 76CSx). All field data were taken in spring 2015.

Plant Material

Mature floral heads of D. viscosa and I. crithmoides were sampled in one of the salt marshes in October 2014, and well-shaped achenes were directly sown on a mixture of commercial peat and vermiculite (3:1). After 20 days, seedlings homogeneous in size were selected and placed in individual pots. Three weeks later, when the young plants were robust enough, treatments were initiated. For the salt treatments, plants were watered twice a week with Hoagland nutritive solution (Hoagland and Arnon, 1950) containing NaCl at 75, 150, 300, 450, or 600 mM final concentrations, or without salt for the non-stressed controls (1.5 L per tray, each containing 12 pots). For the water stress treatments, watering was completely stopped. All experiments were conducted in a controlled environment chamber, under the following conditions: long-day photoperiod (16 h of light), temperature of 23 °C during the day and 17 °C at night, a CO2 level of ca. 300 ppm, and 50-80% relative humidity. All measurements and biochemical assays were performed using leaf material, half of which was harvested after 3 weeks and the other half after 6 weeks of salt treatment; in the case of water stress, only the 3-week samples were used, since not all plants survived the longer treatment.
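As a quick aid to reading the treatment codes used throughout the Results, the following minimal Python sketch enumerates the factorial design described above (two species, five NaCl levels plus controls and a water stress treatment, two harvest times for salt only). It is purely illustrative and is not the authors' workflow; replicate numbers (n = 5 in the figure legends) are not encoded.

```python
# Illustrative enumeration of the experimental design described above
# (not the authors' code; replicate counts are omitted).
from itertools import product

species = ["D. viscosa", "I. crithmoides"]
nacl_mM = [0, 75, 150, 300, 450, 600]   # 0 = non-stressed control
harvest_weeks = [3, 6]                  # leaf material harvested twice

salt_design = [
    {"species": sp, "treatment": f"{c} mM NaCl", "harvest_wk": wk}
    for sp, c, wk in product(species, nacl_mM, harvest_weeks)
]
# Water stress: watering stopped; only the 3-week harvest was usable
ws_design = [{"species": sp, "treatment": "water stress", "harvest_wk": 3}
             for sp in species]

print(len(salt_design) + len(ws_design), "species x treatment x harvest combinations")
```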
To better compare the effect of the stress treatments on plant growth in the two species, the fresh weight (hereafter FW) of the leaves of each sample was expressed as a percentage of the FW of the corresponding non-treated control; the absolute FW values of the controls are indicated in the legend to Figure 3. Part of the leaf material was dried at 65 °C until constant weight, for dry weight (DW) determination and calculation of the leaf water content.

Electrical Conductivity of the Soil in the Pots

Electrical conductivity of the substrate was measured after the 3- and 6-week treatments. Soil samples from five pots per treatment were air-dried and then passed through a 2-mm sieve. For each sample, a soil:water (1:5) suspension was prepared in deionized water and mixed for 1 h at 600 rpm and 21 °C. Electrical conductivity (EC1:5) was measured with a Crison conductivity meter 522 and expressed in dS m−1.

Photosynthetic Pigments

Total carotenoids, chlorophyll a (Chl a), and chlorophyll b (Chl b) were measured following Lichtenthaler and Wellburn (1983): 100 mg of fresh leaf material was ground in the presence of 20 mL of ice-cold 80% acetone, mixed by vortexing, and centrifuged. The supernatant was collected and its absorbance was measured at 663, 646, and 470 nm. The final values were expressed in mg g−1 DW.

Osmolyte Quantification

Glycine betaine (GB) was determined in dried leaf tissue according to Grieve and Grattan (1983). The sample (100 mg) was ground with 2 mL of Milli-Q water and then extracted with 1,2-dichloroethane; the absorbance of the solution was measured at a wavelength of 365 nm. GB concentration was expressed as µmol g−1 DW. Proline (Pro) content was quantified using fresh leaf material, according to the ninhydrin-acetic acid method of Bates et al. (1973). Pro was extracted in 3% aqueous sulfosalicylic acid; the extract was mixed with acid ninhydrin solution, incubated for 1 h at 95 °C, cooled on ice, and then extracted with two volumes of toluene. Absorbance of the supernatant was read at 520 nm, using toluene as a blank. Pro concentration was expressed as µmol g−1 DW.

Identification and Quantification of Soluble Carbohydrates by HPLC

The water-soluble sugar fraction (mono- and oligosaccharides) was analyzed using a Waters 1525 high-performance liquid chromatography system coupled to a 2424 evaporative light scattering detector (ELSD). The source parameters of the ELSD were the following: gain 75, data rate 1 point per second, nebulizer heating 60%, drift tube 50 °C, and gas pressure 2.8 kg/cm2. Analysis was carried out by injecting 20 µL aliquots with a Waters 717 auto-sampler into a Prontosil 120-3-amino column (4.6 × 125 mm; 3 µm particle size) maintained at room temperature. An isocratic flow (1 mL/min) of 85% acetonitrile (J.T. Baker) was applied for 25 min in each run. Standards of glucose, fructose, and arabinose served to identify peaks by co-injection. Sugars were quantified by peak integration using the Waters Empower software and comparison with glucose, fructose, and arabinose standard calibration curves.

Monovalent Ion Levels

Extractions were performed according to Weimberg (1987), by incubating the samples (0.15 g of dried and ground leaf material in 25 mL of water) for 1 h at 95 °C in a water bath, followed by filtration through a filter paper (particle retention 8-12 µm). Sodium and potassium were quantified with a PFP7 flame photometer (Jenway Inc., Burlington, USA), and chlorides were measured using a Merck Spectroquant NOVA 60 spectrophotometer and its associated test kit (Merck, Darmstadt, Germany).
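For the pigment determinations described in the Photosynthetic Pigments subsection above, the cited Lichtenthaler and Wellburn (1983) reference provides standard equations for 80% (v/v) acetone extracts. The sketch below applies those commonly cited coefficients and converts the result to mg g−1 DW; the exact coefficients and the FW-to-DW conversion used by the authors are not stated in the text, so both should be treated as assumptions.

```python
def pigments_lw1983(a663, a646, a470, extract_ml, sample_g_dw):
    """Chl a, Chl b, and total carotenoids from absorbances of an 80% acetone
    extract, using the commonly cited Lichtenthaler & Wellburn (1983)
    coefficients (assumed here), converted to mg g-1 DW."""
    chl_a = 12.21 * a663 - 2.81 * a646                         # ug/mL of extract
    chl_b = 20.13 * a646 - 5.03 * a663                         # ug/mL of extract
    caro = (1000.0 * a470 - 3.27 * chl_a - 104.0 * chl_b) / 229.0
    # ug/mL * mL of extract / (g DW * 1000) -> mg per g DW
    to_mg_per_g_dw = extract_ml / (sample_g_dw * 1000.0)
    return {name: round(c * to_mg_per_g_dw, 3)
            for name, c in [("chl_a", chl_a), ("chl_b", chl_b), ("carotenoids", caro)]}

# Hypothetical absorbances for a 20 mL extract of a sample of 0.02 g DW
print(pigments_lw1983(a663=0.65, a646=0.28, a470=0.40,
                      extract_ml=20.0, sample_g_dw=0.02))
```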
MDA and Non-enzymatic Antioxidants

Dried leaf material was extracted in 80% methanol, on a rocker shaker, for 24-48 h. Malondialdehyde (MDA) content in the extracts was determined according to the method described by Hodges et al. (1999). The samples were mixed with 0.5% thiobarbituric acid (TBA) prepared in 20% trichloroacetic acid (TCA), or with 20% TCA without TBA for the controls, and then incubated at 95 °C for 20 min. After stopping the reaction, the absorbance of the supernatants was measured at 532 nm. The non-specific absorbance at 600 and 440 nm was subtracted, and MDA concentration was determined using the equations described by Hodges et al. (1999).

Total antioxidant activity in the extracts, measured as their ability to quench the radical 2,2-diphenyl-1-picrylhydrazyl (DPPH), was determined spectrophotometrically according to Falchi et al. (2006). The methanol-soluble fraction was appropriately diluted with 96% ethanol, in a final volume of 2 mL, to which 0.5 mL of a 0.5 mM DPPH solution in ethanol was added, and the absorbance was measured at 517 nm after 10 min. A control sample was prepared using 2.0 mL of ethanol and 0.5 mL of the same DPPH ethanolic solution, to check the radical stability. The percentage of radical scavenging activity (S) of each extract was calculated as

S (%) = 100 × (A0 − Ax)/A0,

where Ax is the absorbance of the DPPH solution in the presence of the plant extract and A0 is the absorbance of the control DPPH solution without the plant sample.

Total phenolic compounds (TPC) were quantified according to Blainski et al. (2013), by reaction with the Folin-Ciocalteu reagent. The methanol extracts were mixed with sodium bicarbonate and Folin-Ciocalteu reagent and left in the dark for 90 min. Absorbance was recorded at 765 nm, and the results were expressed as equivalents of gallic acid (mg eq. GA g−1 DW). Total "antioxidant flavonoids" (TF) were measured following the method described by Zhishen et al. (1999). The methanol extracts were mixed with NaNO2, followed by AlCl3 and NaOH, and the absorbance of the sample was measured at 510 nm. This protocol is often described as detecting "total flavonoids" in the sample, although this is not strictly true. The method is based on the nitration of aromatic rings bearing a catechol group and only detects those phenolic compounds containing this chemical structure, which include several subclasses of flavonoids (such as flavonols or flavanols) but also other non-flavonoid phenolics, such as caffeic acid and its derivatives. Nevertheless, the method was chosen since the metabolites determined by the reaction with AlCl3 are all antioxidants, and there is a good correlation between their levels and the total antioxidant activity of the samples (Zhishen et al., 1999). To simplify, further on in the text we refer to the AlCl3-reactive compounds simply as "total flavonoids" (TF), and express their contents as equivalents of catechin (mg eq. C g−1 DW).

Protein Extraction and Quantification

Crude protein extracts were prepared from plant material stored frozen at −80 °C, following the procedure described in Gil et al. (2014). Protein concentration in the extracts was determined by the method of Bradford (1976), using the Bio-Rad reagent and bovine serum albumin (BSA) as the standard.

Antioxidant Enzyme Activity Assays

Catalase (CAT) activity was determined by following the decrease in absorbance at 240 nm which accompanies the consumption of H2O2 (ε = 39.4 mM−1 cm−1) upon the addition of the plant extracts (Aebi, 1984).
One CAT unit was defined as the amount of enzyme that decomposes 1 µmol of H2O2 per min at 25 °C. Glutathione reductase (GR) activity was quantified according to Connell and Mullet (1986), following the oxidation of NADPH, the cofactor in the GR-catalyzed reduction of oxidized glutathione (GSSG). One GR unit was defined as the amount of enzyme that oxidizes 1 µmol of NADPH per min at 25 °C. Superoxide dismutase (SOD) activity was determined according to Beyer and Fridovich (1987), by monitoring the inhibition of nitroblue tetrazolium (NBT) photo-reduction, using riboflavin as the source of superoxide radicals. One SOD unit was defined as the amount of enzyme that causes 50% inhibition of NBT photo-reduction under the assay conditions. Minor modifications introduced into the aforementioned original assays are described in Gil et al. (2014).

Data Analysis

The location of the two species in the three salt marshes was established on the orthophotography Terrasit-72258-ecw, provided by the "PNOA©, Instituto Geográfico Nacional de España - Institut Cartogràfic Valencià", with the program ArcGIS (2011). Statistical analyses were performed using the program Statgraphics Centurion XVI. Before the analysis of variance, the Shapiro-Wilk test was used to check the validity of the normality assumption, and the Levene test the homogeneity of variance. If the ANOVA requirements were met, the significance of the differences among treatments was tested by one-way ANOVA at a 95% confidence level, and post-hoc comparisons were made using the Tukey HSD test. Differences between the two species in each treatment were assessed by a t-test at the same confidence level. All measured parameters in plants submitted to salt stress were correlated using principal component analysis (PCA), separately for the 3- and 6-week treatments in both I. crithmoides and D. viscosa. All means throughout the text are given ± SD.

RESULTS

Field Study

The distribution of the halophyte Inula crithmoides and the native-invasive Dittrichia viscosa was registered in three salt marshes in "La Albufera" Natural Park (Figure 1) and related to soil EC and moisture measurements at each point where the two species were present. Figure 1A shows a general view of the area of the Natural Park where the field work was carried out, with the location of the three selected salt marshes. In the second-row panels, the location points of the two species in the three salt marshes (Figures 1B1-B3, respectively) are represented. The soil electrical conductivity and soil moisture values measured at each of those points are shown in Figure 2. The first is the largest of the three analyzed salt marshes and has the highest vegetation coverage. Although both species were present, D. viscosa was extremely abundant, contrasting with the scarcity of I. crithmoides. Soil salinity varied only slightly along the linear transect, except for some occasional spots where electrical conductivity was higher. Mean values of EC ranged from 1 to 8 dS m−1, values that revealed a relatively low salinity, optimal for D. viscosa, which invaded most of this salt marsh, as can be seen in Figure 1C1. Regarding soil moisture, D. viscosa was found over a range of 12-39%, but with an optimum at 20-24%. The few specimens of I. crithmoides were found at similar soil humidity, although not below 14%, and at salinities of up to 16 dS m−1 (Figures 2A,B). In the second salt marsh, both species were frequent but showed a different pattern of distribution:
I. crithmoides was present mostly in the central, more depressed part of the salt marsh, whereas D. viscosa was predominantly found on its borders. I. crithmoides grew over a wide range of salinity, from 4 to 36 dS m−1, whereas D. viscosa was concentrated at values of 1-6 dS m−1, and only a few individuals were localized at higher salinities, up to 20 dS m−1 (Figure 2C). Soil moisture values registered for the two species were similar, but D. viscosa was also found in drier locations (Figure 2D). The third salt marsh is the smallest, with no vegetation in the center but with high coverage on its border, including numerous specimens of I. crithmoides but none of D. viscosa. This is related to its location, only a few meters from an effluent mouth, subjected to constant flooding during rainy periods. This implies that the soil EC constantly changes and the salt marsh is wet most of the year, so that I. crithmoides, which apparently tolerates higher salinity, is favored. The species was present in soil areas with EC values up to 30 dS m−1, but more frequently at moderate salinities not exceeding 15 dS m−1 (Figure 2E). Soil moisture was higher than that registered in the other two salt marshes (Figure 2F).

Plant Growth Inhibition under Controlled Stress Conditions

The leaf FW of both D. viscosa and I. crithmoides salt-treated plants decreased strongly in parallel with increasing external NaCl concentrations, as compared to the corresponding non-treated controls.

FIGURE 3 | For each treatment, FW is expressed as percentage of the absolute weight of the corresponding non-treated control, taken as 100%: 10.85 and 24.40 g for D. viscosa and 11.11 and 19.99 g for I. crithmoides, at 3 and 6 weeks of growth, respectively. Values shown are means ± SD (n = 5). Different letters (lowercase for D. viscosa and capital letters for I. crithmoides) over the bars indicate significant differences between treatments for each species according to the Tukey test (α = 0.05). Asterisks (*) indicate significant differences between the two species for the same treatment.

After 3 weeks of treatment, the relative FW reductions were slightly greater in D. viscosa than in I. crithmoides, but the differences were statistically significant only in the presence of 450 mM NaCl: a 77% decrease in the former species, as compared to 57% in the latter (Figure 3A). When the salt treatment was prolonged to 6 weeks, these differences became clearer (Figure 3B), indicating that I. crithmoides is more resistant to salt stress than D. viscosa in terms of inhibition of biomass accumulation. Similarly, after 3 weeks of water stress treatment (not all plants survived 6 weeks without watering), the relative FW reduction was again significantly higher in D. viscosa than in I. crithmoides (Figure 3C). Both taxa showed a strong resistance to dehydration under stress conditions, indicating that the observed FW decrease was indeed due to stress-induced inhibition of growth and not simply to loss of water. In I. crithmoides, leaf water content did not vary significantly, as compared with the control plants, even after 6 weeks in the presence of 600 mM NaCl, the highest salt concentration tested, or after 3 weeks without watering (Figures 3D-F). In D. viscosa, a slight salt-induced decrease in the average leaf water content was detected, but the difference with the control was only significant after 6 weeks of treatment with 600 mM NaCl (Figure 3E); in water-stressed plants there was also a reduction in the mean leaf water content (ca. 15%)
but, here again, the difference with the control plants was not statistically significant (Figure 3F). When comparing the two species, significant differences in water loss were detected in the 6-week treatment at high salt concentrations (450 and 600 mM NaCl; Figure 3E) and in the water stress treatment (Figure 3F); nevertheless, it should be pointed out that the differences observed between the two species, in absolute terms, were relatively small. Taken together, these results show that, although both taxa are quite resistant to stress, I. crithmoides is somewhat more tolerant than D. viscosa to salinity (in agreement with their relative distribution in the salt marshes) and also to drought.

Photosynthetic Pigments

Salt stress induced a relative reduction in the levels of photosynthetic pigments in the leaves of D. viscosa and I. crithmoides plants (Table 1), but its effects appeared to be weaker in the latter species, in agreement with its relatively higher salt tolerance. For example, in the 3-week treatment, chlorophyll a contents progressively decreased in D. viscosa in parallel with increasing salt concentrations, down to about half of the control levels in the presence of 600 mM NaCl; in I. crithmoides plants, a similar reduction was observed under the same conditions, but lower salt concentrations had no significant effect (Table 1). After 6 weeks of salt treatment, the reduction in chlorophyll a levels was clearly stronger in D. viscosa (3.6-fold) than in I. crithmoides (twofold), and the same qualitative pattern was observed for chlorophyll b and total carotenoid contents (Table 1). Water stress also caused a reduction in the levels of photosynthetic pigments in both species, in relation to the non-stressed controls, and this decrease was again more pronounced in D. viscosa than in I. crithmoides. Thus, after 3 weeks without watering, chlorophyll a concentration in D. viscosa leaves was reduced by ca. 40% with respect to the level in control plants, as compared to 15% in I. crithmoides; chlorophyll b was reduced by 60% (vs. 37% in I. crithmoides), and total carotenoids by 54% (27% in I. crithmoides; Table 1).

Osmolyte Contents

Plants of the two analyzed species accumulated glycine betaine (GB) in their leaves as a response to the treatment with increasing NaCl concentrations, but it reached much higher levels in I. crithmoides than in D. viscosa (Figures 4A,B). GB contents in control plants were similar in the two species, about 50 µmol g−1 DW; under the strongest salt stress conditions tested (6 weeks in the presence of 600 mM NaCl), GB levels increased about twofold in D. viscosa, but nearly eightfold in I. crithmoides (Figure 4B). The high concentrations measured, almost 400 µmol g−1 DW, indicate that GB is the major functional osmolyte in I. crithmoides, responsible for osmotic adjustment in conditions of high soil salinity, as suggested by previous field studies (Gil et al., 2014). GB contents did not show any significant change in D. viscosa plants after 3 weeks of water stress treatment; in I. crithmoides, on the contrary, water stress did induce the accumulation of this compound, albeit at much lower levels than those measured in salt-stressed plants (only about a twofold increase over the control; Figure 4C). Proline (Pro) was also measured in D. viscosa and I. crithmoides plants subjected to salt and water stress treatments (Figure 5). Pro levels increased in both species upon treatment with NaCl, in a concentration-dependent manner.
After 3 weeks in the presence of salt, Pro accumulation was relatively stronger in D. viscosa, reaching more than a 20-fold increase over control levels in the plants watered with 600 mM NaCl, as compared to ca. 7-fold in I. crithmoides under the same conditions (Figure 5A); when the treatment was prolonged to 6 weeks, however, Inula plants appeared to further increase Pro accumulation, so that its concentration was similar in the two species (Figure 5B).

FIGURE 4 | Glycine betaine (GB) accumulation in leaves of D. viscosa and I. crithmoides stressed plants. GB contents after 3 weeks (A) and 6 weeks (B) of treatment with the indicated NaCl concentrations, or after 3 weeks of water stress (C). Values shown are means ± SD (n = 5). Different letters (lowercase for D. viscosa and capital letters for I. crithmoides) over the bars indicate significant differences between treatments for each species according to the Tukey test (α = 0.05). Asterisks (*) indicate significant differences between the two species for the same treatment.

Water stress also led to Pro accumulation in the leaves of D. viscosa (a 12-fold increase over control values) and I. crithmoides (a 4-fold increase) plants (Figure 5C). Yet, it should be mentioned that the absolute Pro concentrations reached, always below 40 µmol g−1 DW, were one order of magnitude lower than those of GB; therefore, Pro could make, at best, a modest contribution to osmotic adjustment in the stressed plants.

TABLE 1 | Chlorophyll a (Chl a), chlorophyll b (Chl b), and total carotenoid (Caro) contents, after 3-week and 6-week treatments with the indicated salt concentrations, and after 3 weeks of water stress. Values shown are means ± SD (n = 5). For each pigment, different letters (lowercase for D. viscosa and capital letters for I. crithmoides) in a column indicate significant differences between treatments according to the Tukey test (α = 0.05). Asterisks (*) indicate significant differences between the two species for the same treatment. WS, water stress.

Spectrophotometric determination of total soluble sugars, after reaction with phenol and sulfuric acid (Dubois et al., 1956), did not provide clear patterns of variation in response to salt or water stress (data not shown). Therefore, carbohydrates in the water-soluble fraction were separated, identified, and quantified by HPLC. Only three major peaks were detected in the extracts of the two species, corresponding to arabinose (Ara), fructose (Fru), and glucose (Glu). These measurements revealed, as a general pattern, that the leaf contents of the three sugars increased in parallel with increasing external NaCl concentrations and with the time of treatment, in both species (Table 2). Thus, in D. viscosa plants grown in the presence of salt for 6 weeks, relative increases of ca. 1.5-, 2.8-, and 5-fold in the levels of Ara, Fru, and Glu, respectively, were observed when comparing the highest NaCl concentration tested (600 mM) and the non-stressed controls. Sugar contents in I. crithmoides control plants were in most cases lower than in D. viscosa, and even not detectable in the case of Fru (after the 3-week growth period) and Glu (in samples collected at 3 and 6 weeks); however, the salt-induced increase in the levels of the three sugars was relatively larger in the former species, so that in the presence of 600 mM NaCl the contents of Ara, Fru, and Glu were significantly higher in I. crithmoides than in D. viscosa (Table 2).
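The sugar quantification step (peak integration against glucose, fructose, and arabinose calibration curves, as described in the Methods) amounts to a linear calibration. A minimal sketch follows, with entirely hypothetical peak areas and standard concentrations; the actual integration was done in the Waters Empower software.

```python
import numpy as np

def calibration(areas, concentrations):
    """Least-squares line through standard points (peak area vs. concentration)."""
    slope, intercept = np.polyfit(areas, concentrations, deg=1)
    return lambda area: slope * area + intercept

# Hypothetical glucose standards: integrated ELSD peak areas vs. mM injected
glc_std_areas = [1.2e5, 2.5e5, 5.1e5, 9.8e5]
glc_std_mM = [0.5, 1.0, 2.0, 4.0]
area_to_mM = calibration(glc_std_areas, glc_std_mM)

sample_area = 3.3e5   # hypothetical sample peak
print(f"Glucose in sample: {area_to_mM(sample_area):.2f} mM")
```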
The absolute contents of the three sugars accumulated under the strongest salt stress conditions tested, taken together (approximately 100 µmol g−1 DW in D. viscosa and 160 µmol g−1 DW in I. crithmoides), could contribute significantly to osmotic adjustment, especially in D. viscosa, which accumulates much lower levels of GB.

TABLE 2 | Arabinose (Ara), fructose (Fru), and glucose (Glu) contents, after 3-week and 6-week treatments with the indicated salt concentrations, and 3 weeks of water stress. Carbohydrates were identified and quantified by HPLC of the water-soluble leaf fraction. Values shown are means ± SD (n = 5). For each measured sugar, different letters (lowercase for D. viscosa and capital letters for I. crithmoides) in a column indicate significant differences between treatments according to the Tukey test (α = 0.05). Asterisks (*) indicate significant differences between the two species for the same treatment. WS, water stress. N.D. stands for "not detectable".

An increase in sugar contents was also detected in D. viscosa plants in response to water stress: ca. 2.7-fold (Ara), 5.4-fold (Fru), and 8-fold (Glu) higher than in the control plants. However, these sugars were not detected in the HPLC chromatograms of the leaf samples of I. crithmoides water-stressed plants (Table 2).

Monovalent Ions

Sodium (Na+), chloride (Cl−), and potassium (K+) contents were measured in leaves of D. viscosa and I. crithmoides plants following the treatments with NaCl for 3 and 6 weeks (Table 3). Both species accumulated Na+ and Cl− in response to salt stress, in a time- and concentration-dependent manner; the relative increase in the concentration of these ions over the corresponding control plants, and the absolute levels reached, were higher in I. crithmoides than in D. viscosa. The measured K+ levels did not show a clear correlation with the external salt concentrations and, interestingly, their patterns of variation differed in the two species. In D. viscosa, K+ contents, although fluctuating, showed a general tendency to decrease with increasing NaCl concentrations; in I. crithmoides, on the other hand, K+ concentration decreased at moderate salinity levels but increased again in the presence of high external salt concentrations (Table 3). Water stress did not induce any significant change in ion concentrations in the leaves of stressed plants of either species, in comparison to the non-stressed controls, as should be expected (data not shown).

TABLE 3 | Sodium (Na+), chloride (Cl−), and potassium (K+) concentrations, after 3-week and 6-week treatments with the indicated salt concentrations. Values shown are means ± SD (n = 5). For each ion, different letters (lowercase for D. viscosa and capital letters for I. crithmoides) in a column indicate significant differences between treatments according to the Tukey test (α = 0.05). Asterisks (*) indicate significant differences between the two species for the same treatment.

Oxidative Stress Responses

Oxidative stress is usually associated with salt and water stress, through the generation of excess reactive oxygen species (ROS), toxic compounds that oxidize amino acid residues in proteins, unsaturated fatty acids in cell membranes, and DNA molecules, thus causing cellular damage (Halliwell, 2006). Malondialdehyde (MDA) is a product of membrane lipid peroxidation, considered an excellent marker of oxidative stress (Del Rio et al., 2005).
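The MDA determination cited in the Methods (Hodges et al., 1999) corrects the raw 532 nm reading for interfering compounds (the −TBA control) and for sugar-derived absorbance at 440 nm. A sketch of those published equations follows; the absorbance values in the example are hypothetical, and the coefficients (0.0571 and the 157,000 M−1 cm−1 extinction coefficient) are taken from the cited reference.

```python
def mda_hodges(a532_tba, a600_tba, a440_tba, a532_no_tba, a600_no_tba):
    """MDA equivalents (nmol per mL of extract) following Hodges et al. (1999).
    The -TBA control removes interfering absorbance; the 440 nm term corrects
    for sugar-derived chromophores."""
    a = (a532_tba - a600_tba) - (a532_no_tba - a600_no_tba)
    b = (a440_tba - a600_tba) * 0.0571
    return (a - b) / 157000.0 * 1e6   # 157,000 M-1 cm-1: MDA-TBA adduct

# Hypothetical readings for one sample (+TBA and -TBA reactions)
print(round(mda_hodges(0.420, 0.015, 0.210, 0.060, 0.010), 2), "nmol/mL")
```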
MDA was found to increase moderately in plants of both species after 3 weeks of salt treatment, although significant differences with the corresponding controls were observed only in the presence of 600 mM NaCl (Figure 6A); prolonged treatments (6 weeks) led to a similar pattern of MDA accumulation in I. crithmoides, whereas in D. viscosa plants significant differences with the control were already detected at a 300 mM external NaCl concentration (Figure 6B). Water stress also induced an increase of MDA levels in D. viscosa, but not in I. crithmoides (Figure 6C). Absolute MDA contents were significantly lower in I. crithmoides than in D. viscosa under all stress conditions tested (except for the 6-week treatment with 150 mM NaCl), although the differences between the two species were not extremely large (Figures 6A-C). These results indicate that drought and salinity cause a higher degree of oxidative stress in D. viscosa than in I. crithmoides, which is in agreement with the relative stress tolerance of the two taxa shown by the previous experiments and by their distribution in nature.

FIGURE 5 | Proline (Pro) accumulation in leaves of D. viscosa and I. crithmoides stressed plants. Pro contents after 3 weeks (A) and 6 weeks (B) of treatment with the indicated NaCl concentrations, or after 3 weeks of water stress (C). Values shown are means ± SD (n = 5). Different letters (lowercase for D. viscosa and capital letters for I. crithmoides) over the bars indicate significant differences between treatments for each species according to the Tukey test (α = 0.05). Asterisks (*) indicate significant differences between the two species for the same treatment.

Scavenging activity of the DPPH radical, used to estimate total antioxidant activity, was not detected in I. crithmoides leaf extracts, neither in control plants nor in water-stressed or salt-stressed plants after the 3-week treatments (Figures 6D,F). A concentration-dependent increase of activity was, however, observed in plants treated with increasing salt concentrations for 6 weeks, reaching 45% in the presence of 600 mM NaCl (Figure 6E). In accordance with the relatively higher degree of oxidative stress affecting D. viscosa, significant concentration- and time-dependent increases in antioxidant activity were detected in plants of this species, reaching 90% upon application of water stress (Figure 6F), 80% in the presence of 600 mM NaCl for 3 weeks (Figure 6D), and up to 95% in the 6-week salt treatment (Figure 6E). Phenolic compounds, and particularly flavonoids, possess well-established antioxidant and ROS scavenging activities, and can be considered good examples of non-enzymatic antioxidant metabolites induced in plants as a response to abiotic stress conditions causing secondary oxidative stress. Total flavonoid (TF) contents increased in salt-stressed plants of D. viscosa, an effect that was more clearly observed after 6 weeks of treatment with salt, reaching a 6-fold increase over the level in non-treated plants in the presence of 600 mM NaCl (Figures 7A,B). In I. crithmoides, TF levels were lower than those measured in D. viscosa under most tested conditions, and their variations with increasing salinity were also smaller and, generally, not statistically significant (Figures 7A,B). Water stress induced a slight (less than twofold) increase in TF contents in both species, but no significant differences were observed between I. crithmoides and D. viscosa plants (Figure 7C).
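The DPPH scavenging percentages quoted above follow directly from the formula given in the Methods. A one-function sketch, with hypothetical 517 nm absorbances:

```python
def dpph_scavenging(ax, a0):
    """Radical scavenging activity S (%) as defined in the Methods:
    S = 100 * (A0 - Ax) / A0, where A0 is the control DPPH absorbance."""
    return 100.0 * (a0 - ax) / a0

# Hypothetical absorbances at 517 nm after 10 min of reaction
print(f"S = {dpph_scavenging(ax=0.22, a0=0.80):.1f} %")
```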
The patterns of variation of total phenolic compounds (TPC) in response to increasing salt concentration were similar to those of TF, although their absolute levels were clearly higher in D. viscosa than in I. crithmoides under all conditions tested. No significant changes were observed, in general, after 3 weeks of salt treatment (Figure 7D), while in the 6-week treated plants TPC accumulated with increasing NaCl concentrations in both species, although the relative increase over the corresponding control was more pronounced in D. viscosa than in I. crithmoides (Figures 7D,E). Regarding water-stressed plants, a significant increase of TPC levels was detected in D. viscosa but not in I. crithmoides (Figure 7F). The specific activity of several antioxidant enzymes was determined in leaf protein extracts prepared from plants of the two investigated species. Although some changes were observed in response to the applied salt and water stress treatments, in general the differences were small and, in most cases, statistically non-significant (Table 4). Thus, in I. crithmoides, catalase (CAT) activity did not vary in the presence of salt under any of the conditions tested. The specific activity of glutathione reductase (GR) fluctuated in response to increasing salt concentrations, but the differences between the control plants and those watered with 600 mM NaCl were again non-significant. Only superoxide dismutase (SOD) activity increased significantly, albeit slightly (up to 1.5-fold), in the 6-week treatment with high NaCl concentrations. Regarding the I. crithmoides water-stressed plants, no significant changes were detected in any of the antioxidant activities tested (Table 4). In D. viscosa, apart from a small decrease in CAT activity in the presence of 600 mM NaCl for 6 weeks, the only remarkable change was the activation of SOD (up to about 3.6-fold) induced by increasing salt concentrations. Contrary to I. crithmoides, water stress led to significant (but again small) variations of antioxidant enzyme activities in D. viscosa, reducing CAT and GR and increasing SOD, as compared to the non-stressed controls (Table 4).

Principal Component Analyses

Principal component analyses (PCAs) were performed independently for D. viscosa and I. crithmoides, and for the 3-week and 6-week salt treatments, and included all measured parameters (Figure 8). In the four PCAs shown, two components with an eigenvalue equal to or greater than 1 explained a cumulative percentage of variance of more than 80%. The first component (X-axis) was determined by the electrical conductivity of the substrates (shown as Supplementary Material in Table S1) and was found to be strongly correlated, negatively, with growth parameters (FW%, WC%) and contents of photosynthetic pigments (Chl a, Chl b, and Caro), and positively with toxic ion (Na+ and Cl−) levels, osmolyte (GB, Pro, Glu, Fru, Ara) contents, MDA, total antioxidant activity (DPPH), non-enzymatic antioxidants (TPC and TF), and SOD activity.
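The PCA just described (all measured variables, components retained when their eigenvalue is at least 1) was run in Statgraphics; a minimal scikit-learn sketch of the same computation, on standardized placeholder data rather than the study's measurements, would look as follows.

```python
# Minimal sketch of the PCA described above (illustrative only).
# 'X' would hold one row per plant and one column per measured variable
# (FW%, WC%, pigments, ions, osmolytes, MDA, DPPH, TPC, TF, enzyme activities).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 18))   # placeholder data: 25 plants x 18 variables

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance (%):", np.round(100 * pca.explained_variance_ratio_, 1))
loadings = pca.components_.T    # one loading vector per variable, for the biplot
```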
Although similar correlation patterns were observed for each species, regardless of the duration of the treatment, the variation explained by the first component increased from about 67% at 3 weeks to 79% at 6 weeks, in both D. viscosa and I. crithmoides. This is in agreement with the fact that, in general, the loading vectors of the analyzed variables presented smaller angles with the X-axis (that is, a higher correlation with salinity) in the PCAs corresponding to the 6-week treatments, due to the prolonged salt stress effects. The joint analysis of all variables indicated that the responses of the two species to salt stress are the same or very similar, qualitatively, and that the quantitative differences are most clearly manifested after a longer treatment.

FIGURE 7 | Total flavonoid (TF) and total phenolic compound (TPC) accumulation in leaves of D. viscosa and I. crithmoides stressed plants. TF (A-C) and TPC (D-F) contents after 3 weeks (A,D) and 6 weeks (B,E) of treatment with the indicated NaCl concentrations, or after 3 weeks of water stress (C,F). Values shown are means ± SD (n = 5). Different letters (lowercase for D. viscosa and capital letters for I. crithmoides) over the bars indicate significant differences between treatments for each species according to the Tukey test (α = 0.05). Asterisks (*) indicate significant differences between the two species for the same treatment.

DISCUSSION

Salt marsh ecosystems are characterized by quick changes in their environmental conditions (Chapman, 1974). In Mediterranean salt marshes, spatial and temporal gradients of salinity and soil moisture have been reported to be among the most important physical factors for plant distribution (Álvarez Rogel et al., 2001). A strong seasonal variation in soil electrical conductivity was reported by Gil et al. (2011, 2014) in the area of study, which is explained by high evapotranspiration in summer, causing an upward movement of water with dissolved salts that accumulate in the upper soil layers, thus increasing their salinity. Therefore, to be able to compare the field data sets, soil EC and moisture were analyzed in the three salt marshes at approximately the same time, in early spring, when soil humidity was sufficient to perform direct in situ measurements with a portable sensor. Dittrichia viscosa is a species adapted to a wide range of environmental stresses (Parolin et al., 2014), and is frequent in many different habitats, especially those with anthropic influence. It has also been reported from salt meadows and saline tamarisk thickets (Korakis and Gerasimidis, 2006), and it seems to tolerate relatively high concentrations of NaCl under artificial experimental conditions, yet it is not considered a true halophyte (Curadi et al., 2005; Maciá-Vicente et al., 2012). When comparing the distribution of the two species in the selected salt marshes, soil EC appeared as the major restrictive ecological factor for D. viscosa. At low and moderate salinities, as in the first salt marsh, it was far more abundant than I. crithmoides and practically invaded the whole area. In the second salt marsh, where both species were present, D. viscosa was localized mostly on the edges, at lower soil EC, while in the central and more depressed zone, with higher salinity, I. crithmoides was more abundant. This latter species was the only one of the two growing in the third, more humid and saline salt marsh. The simplest interpretation of the field data would be to assume that
I. crithmoides is an "obligate" halophyte, which requires higher salt levels than D. viscosa in its natural environment.

TABLE 4 | Catalase (CAT), glutathione reductase (GR), and superoxide dismutase (SOD) specific activities, after 3-week and 6-week treatments with the indicated salt concentrations, and 3 weeks of water stress. Values shown are means ± SD (n = 5). For each enzyme, different letters (lowercase for D. viscosa and capital letters for I. crithmoides) in a column indicate significant differences between treatments according to the Tukey test (α = 0.05). Asterisks (*) indicate significant differences between the two species for the same treatment. WS, water stress.

However, we consider "obligate" a misleading term, as it may suggest that the plants necessarily require salt for optimal growth. In fact, I. crithmoides plants grew better in the absence than in the presence of NaCl, as shown here and as has also recently been reported by Pardo-Domènech et al. (2015). Moreover, when comparing I. crithmoides cultivated in pots with different substrates, optimal growth was found on salt-free and nutrient-rich substrates, such as peat and garden soil, and not on soil sampled in the salt marsh where the seeds were collected (Grigore et al., 2012). These findings indicate that in non-saline or low-to-moderate salinity areas I. crithmoides is outcompeted by other species, such as D. viscosa; only at higher salinities does the species become truly competitive with respect to D. viscosa, which is less salt tolerant. It is generally accepted that plant tolerance to abiotic stresses, including drought and salinity, is mostly dependent on the activation of a series of conserved response mechanisms, such as the control of ion homeostasis and the accumulation of specific osmolytes to ensure cellular osmotic balance, or the activation of antioxidant systems to counteract the oxidative stress which occurs under such stressful conditions (Flowers et al., 1986; Hare et al., 1998; Zhu, 2001; Flowers and Colmer, 2008). These basic mechanisms are not specific to stress-tolerant species, but shared by all plants, and the wide range of tolerance in different species is mostly attributed to the relative efficiency of those mechanisms; in other words, to quantitative rather than qualitative differences in the responses to stress. Studies on related taxa with different degrees of tolerance to stress could be extremely useful for a better understanding of the relative contribution of different stress responses to stress tolerance in a given species or group of related taxa. D. viscosa and I. crithmoides are genetically related, since they belong to taxonomically close genera (D. viscosa was formerly included in the genus Inula, as I. viscosa), although they show different ecological behavior. Therefore, according to the aforementioned ideas, the same mechanisms of tolerance should operate in both species, albeit with different efficiency. All our results indicate that this is, indeed, the case: although quantitative differences were noticed, D. viscosa and I. crithmoides showed the same patterns of growth inhibition under salt and water stress conditions, and the physiological and biochemical responses to stress were similar in the two species, as discussed below.
Apart from growth inhibition, leaf photosynthetic pigments appear to be reliable abiotic stress markers, since reductions in chlorophyll (a and b) and carotenoid contents have been reported to correlate closely with the degree of salinity or drought affecting plants of several species (e.g., Sairam et al., 2002; Jaleel et al., 2009; Hernández et al., 2015; Schiop et al., 2015). In our experiments, a decrease in the levels of these compounds was registered in all stress treatments in the two species, but the reduction was more pronounced in D. viscosa, which is clearly more affected by salt and water stress than I. crithmoides. A general response to salt stress in plants, which may contribute to tolerance, is based on the control of ion transport. Accumulation of inorganic ions in the aerial part of the plants is an advantageous mechanism to increase osmotic pressure, more economical in terms of energy consumption than the synthesis of organic solutes alone for osmotic adjustment (Raven, 1985). Toxic Na+ and Cl− ions are maintained at low cytosolic concentrations through ion sequestration in vacuoles by selective uptake, according to the widely accepted "ion compartmentalization hypothesis" (Flowers et al., 1986; Glenn et al., 1999). This strategy is used preferentially by glycophytes (within their limits of resistance) and dicotyledonous halophytes, whereas salt-tolerant monocots cope with high salinity in the soil mostly by limiting Na+ transport to the leaves. Both I. crithmoides and D. viscosa plants accumulated Na+ (and Cl−) in their leaves in response to increasing NaCl concentrations in the pots, although significantly higher levels were measured in I. crithmoides; this species is succulent, which facilitates the accumulation of ions in the vacuoles. These results point to a higher efficiency of the mechanism of transport of toxic ions from the roots to the leaves, which would be extremely meaningful under conditions of high salinity, giving I. crithmoides a clear advantage over D. viscosa in colonizing saline environments. The accumulation of sodium is generally associated with a decrease of potassium levels in plants, mostly due to competition for the same binding sites. Na+ interferes with K+ transport by using its physiological transport systems (Greenway and Munns, 1980; Flowers et al., 1986) and by inducing a depolarization of the plasma membrane, triggering the activation of outward-rectifying K+ channels and, consequently, the loss of K+ (Shabala et al., 2003, 2005). Maintaining a relatively high cellular K+ level under salt stress is another fundamental mechanism of tolerance, described in some halophytes, such as Thellungiella halophila, a salt-tolerant relative of the glycophyte Arabidopsis thaliana (Volkov et al., 2003). Decreasing K+ levels with increasing external Na+ concentrations were detected in D. viscosa, especially after 6 weeks of salt treatment, but it is particularly interesting to note the significant increase of K+ contents observed in the more salt-resistant I. crithmoides at high (450-600 mM) NaCl concentrations (under lower salinity conditions, a decrease in K+ was also detected in this species, as expected), suggesting the activation of potassium transport to the leaves. This could partly compensate for the accumulation of sodium, thus contributing to salt tolerance in I. crithmoides by avoiding a drastic reduction of K+/Na+ ratios.
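To make the K+/Na+ argument concrete, the small sketch below computes the ratio along a salinity series; the ion contents are invented numbers shaped like the I. crithmoides pattern discussed above (K+ dipping at moderate salinity and partially recovering at high salinity), not the paper's data.

```python
# Hypothetical leaf ion contents (mmol g-1 DW) along a salinity series
na = {"control": 0.2, "300 mM NaCl": 1.6, "600 mM NaCl": 2.5}
k  = {"control": 0.9, "300 mM NaCl": 0.6, "600 mM NaCl": 0.8}  # partial K+ recovery

for treatment in na:
    print(f"{treatment:>12}: K+/Na+ = {k[treatment] / na[treatment]:.2f}")
```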
Accumulation of high concentrations of toxic ions in the vacuole requires the synthesis of compatible solutes in the cytoplasm, to maintain osmotic balance. The higher stress tolerance of I. crithmoides, as compared to D. viscosa, appears to be partly dependent on the accumulation of higher levels of specific osmolytes, providing better osmotic adjustment under stress. Glycine betaine has been reported as the major osmolyte in I. crithmoides, accumulating at very high concentrations in response to controlled salt treatments in the laboratory (Pardo-Domènech et al., 2015), and also under stressful environmental conditions in the field (Gil et al., 2014). To our knowledge, up to now there are no published reports on the mechanisms of response to abiotic stress in D. viscosa. In the present study, we measured similar levels of GB in control plants of D. viscosa and I. crithmoides and observed significant increases in response to the salt treatments, as well as, to a lesser extent, in response to water stress. Yet, under the same conditions, GB accumulation was stronger in I. crithmoides than in D. viscosa. Salt-induced accumulation of some specific soluble sugars, namely Ara, Fru, and Glu, was also detected in the two species, reaching levels that would significantly contribute to cellular osmotic adjustment, especially in D. viscosa, due to its weaker accumulation of GB. Here again, the relative increases over the control plants and the absolute contents of the three sugars measured in the presence of high salt concentrations were larger in the more salt-tolerant I. crithmoides. Interestingly, these sugars do not appear to be involved in the responses to drought of the latter species, whereas their levels increase significantly in water-stressed D. viscosa plants. The possible role of soluble sugars in the mechanisms of abiotic stress tolerance in plants is often difficult to assess, due to their multiple additional functions in the cell as major energy sources, precursors of metabolic compounds, and signaling molecules. Nevertheless, there is much evidence for the contribution of soluble carbohydrates to salt and drought tolerance (reviewed in Gil et al., 2013), as we have observed in the two species investigated here. Proline is one of the commonest osmolytes in plants, and is synthesized in response to many different stressful conditions, such as salinity, drought, cold, high temperature, nutritional deficiencies, heavy metals, air pollution, or high UV radiation (Hare and Cress, 1997; Grigore et al., 2011; Boscaiu et al., 2013). It is well known that Pro levels increase under abiotic stress conditions in many plant species; Pro accumulation is therefore a general response to stress. Yet, as for other osmolytes, this does not mean that Pro is necessarily involved in stress tolerance mechanisms, as higher Pro levels do not always correlate with increased tolerance (e.g., Lutts et al., 1996; Ashraf and Foolad, 2007; Chen et al., 2007). This seems to be the case in our experiments, since D. viscosa accumulated Pro to higher levels than the more tolerant I. crithmoides. In any case, Pro contents, even under the strongest stress conditions tested, were too low to contribute significantly to osmotic adjustment in either species, thus ruling out a direct role of this compound in stress tolerance in the investigated taxa.
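A back-of-envelope calculation supports the order-of-magnitude argument above. Assuming, purely for illustration, that an osmolyte is evenly dissolved in all leaf water and that the leaves hold about 4 g of water per g DW (roughly 80% water content), the contents reported here translate into tissue-water concentrations as follows; in reality GB and Pro are thought to concentrate in the cytosol, so these are rough lower bounds.

```python
def tissue_water_conc_mM(content_umol_per_gdw, water_g_per_gdw=4.0):
    """Approximate osmolyte concentration in tissue water (mM), assuming
    uniform distribution: umol per g DW / mL water per g DW = umol/mL = mM."""
    return content_umol_per_gdw / water_g_per_gdw

print("GB,  ~400 umol/g DW:", tissue_water_conc_mM(400), "mM")   # ~100 mM
print("Pro, <40  umol/g DW:", tissue_water_conc_mM(40), "mM")    # ~10 mM
```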
Different abiotic stresses, including salinity and drought, cause oxidative stress in plants as a secondary effect, by inducing a large increase in the amount of reactive oxygen species (ROS) (Van Breusegem and Dat, 2006). When in excess, ROS cause cellular damage by oxidizing proteins, membrane lipids, and DNA (Apel and Hirt, 2004; Halliwell, 2006). Consequently, another general response to abiotic stress in plants is based on the activation of enzymatic and non-enzymatic antioxidant systems, the latter including many flavonoids and other phenolic compounds. There is overwhelming evidence that these secondary metabolites participate in the responses of plants to practically all types of abiotic stress (Winkel-Shirley, 2002; Treutter, 2005, 2006; Gould and Lister, 2006; Pollastri and Tattini, 2011), due to their strong antioxidant character and ROS scavenging activity. In agreement with its higher tolerance, under the same stressful conditions I. crithmoides showed lower levels of oxidative stress than D. viscosa (as established from measurements of MDA contents in the plants), both in the presence of increasing NaCl concentrations and after the water stress treatment. Since D. viscosa is relatively more affected by oxidative stress, it should be expected that these plants need to activate stronger antioxidant systems than I. crithmoides as a defense mechanism against salt or water stress. This is indeed the case, at least regarding measurements of total antioxidant activity in the plant extracts (by the DPPH radical scavenging assay), the accumulation of antioxidant secondary metabolites such as total phenolic compounds and flavonoids, and the specific activity of the antioxidant enzyme superoxide dismutase (SOD), all of which showed significantly higher values in D. viscosa than in I. crithmoides under all tested conditions. These results partly confirm previous (field) data from our laboratory, indicating that in their natural habitat highly salt-tolerant species, including I. crithmoides, use very efficient mechanisms of response to salinity (based on the control of ion transport and the accumulation of specific osmolytes), thus avoiding the generation of oxidative stress and the need to activate antioxidant systems (Gil et al., 2014; Bautista et al., 2016). Summarizing, the determination of growth parameters and several biochemical stress markers in I. crithmoides and D. viscosa plants subjected to controlled salt and water stress treatments, together with the results of the individual experiments and the joint analysis of all variables by PCA, indicated that these two genetically related species use the same physiological mechanisms to respond to stress. These mechanisms are mostly based on the transport of toxic ions from the roots to the leaves (where they are presumably sequestered in vacuoles) and the accumulation of specific osmolytes (GB and the sugars Ara, Fru, and Glu) for osmotic adjustment. Differences between the two species are quantitative: ion transport and compartmentalization in vacuoles is more efficient in I. crithmoides than in D. viscosa, and the osmolytes (especially GB) accumulate at higher levels in the former species, thus explaining the (slightly) higher stress tolerance of I. crithmoides. The possible activation of K+ transport to the leaves under strong salinity conditions may also contribute to the higher salt tolerance of this species. The results obtained in controlled experiments in the greenhouse corresponded to the relative distribution of the two species in nature, as only
I. crithmoides was found in areas with the highest salinity. Still, D. viscosa proved to be quite resistant to salt stress, tolerating low and moderate salinities in the field. Coming back to the main issue of this paper, whether D. viscosa may be a threat to Mediterranean salt marsh species, the answer is negative if we refer only to true halophytes growing in areas with high soil salinity, such as I. crithmoides, because its mechanisms of adaptation to salinity are less efficient. Yet D. viscosa can endanger less competitive taxa, since it tolerates low and moderate salinities quite well. From an ecological point of view, the lower salinity makes D. viscosa more competitive, as it characterizes an advanced stage of anthropically influenced or degraded communities (Inulo viscosae-Oryzopsietum miliaceae). I. crithmoides is a characteristic species of salt marsh communities (Inulo crithmoidis-Tamaricetum boveanae) with possible sub-nitrophilous behavior, so it coexists in the salt marshes with D. viscosa, but cannot leave the salt marsh. The consequence is that, although D. viscosa could not directly cause the disappearance of I. crithmoides, it is a vigorous chamaephyte of serial scrub communities that will become stronger in this habitat as land degradation proceeds. However, salt marshes in the area of study shelter only a few rare halophyte species, including several endemics of the genus Limonium. A much more diverse flora, with a higher rate of endemicity, is located on the borders of the salt marshes, where D. viscosa is extremely competitive. Therefore, management programs of protected areas in the region should include not only the control of alien invasive species, but also that of the native-invasive D. viscosa. Moreover, the invasive character of D. viscosa in disturbed environments should be taken into account when using this species to restore degraded and polluted areas, or as an intermediate species after invasions by exotic taxa. In relation to habitat preservation, measures should be aimed not only at removing certain species, but also at preserving soil quality and establishing a buffer zone around the selected area. In the light of our results, the use of D. viscosa should be limited, as it could become established as an invasive and cause loss of biodiversity and degradation of plant communities. All the aforementioned problems should be considered, especially in areas included within the priority habitats of Coastal lagoons and Mediterranean salt steppes (Limonietalia) under European legislation, where this study has been carried out. As mentioned above, native-invasive species pose a new challenge for land management in the context of global conservation, so that more detailed studies should be undertaken on this and other native species that may be compromising the conservation of biodiversity.

AUTHOR CONTRIBUTIONS

MA and JC performed the biochemical assays and the analysis of the data, and also contributed to manuscript preparation. OB carried out the ion measurements and collaborated in the elaboration of the maps. ED was in charge of the greenhouse work. ML was responsible for sugar quantification by HPLC and, together with MA, performed the DPPH-quenching activity measurements. MD was responsible for the field work and contributed to the elaboration of the maps and figures. OM contributed to the field work and manuscript preparation.
OV participated in the general organization and supervision of the work and the interpretation of the results, and was responsible for the final version of the manuscript. MB conceived and designed the study, and participated in writing the manuscript. All authors have read and approved the final version of the manuscript.

FUNDING

Work in the UPV laboratories was partly funded by a grant to OV from the Spanish Ministry of Science and Innovation (Project CGL2008-00438/BOS), with a contribution from the European Regional Development Fund.

ACKNOWLEDGMENTS

We are indebted to Francisco Collado, from the "Oficina Técnica de la Devesa de El Saler" (La Albufera Natural Park), for stimulating discussions regarding the expansion of Dittrichia viscosa on the territory of the Park and its negative effects on the salt marshes located there. MA was a recipient of an Erasmus Mundus pre-doctoral scholarship financed by the European Commission (Welcome Consortium). OB and ED acknowledge the Erasmus fellowship program for supporting their stays in Valencia.
A Novel Machine Learning-Based Prediction Method for Early Detection and Diagnosis of Congenital Heart Disease Using ECG Signal Processing

Congenital heart disease (CHD) represents a multifaceted medical condition that requires early detection and diagnosis for effective management, given its diverse presentations and subtle symptoms that manifest from birth. This research article introduces a groundbreaking healthcare application, the Machine Learning-based Congenital Heart Disease Prediction Method (ML-CHDPM), tailored to address these challenges and expedite the timely identification and classification of CHD in pregnant women. The ML-CHDPM model leverages state-of-the-art machine learning techniques to categorize CHD cases, taking into account pertinent clinical and demographic factors. Trained on a comprehensive dataset, the model captures intricate patterns and relationships, resulting in precise predictions and classifications. The evaluation of the model's performance encompasses sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve. Remarkably, the findings underscore the ML-CHDPM's superiority across six pivotal metrics: accuracy, precision, recall, specificity, false positive rate (FPR), and false negative rate (FNR). The method achieves an average accuracy rate of 94.28%, precision of 87.54%, recall rate of 96.25%, specificity rate of 91.74%, FPR of 8.26%, and FNR of 3.75%. These outcomes distinctly demonstrate the ML-CHDPM's effectiveness in reliably predicting and classifying CHD cases. This research marks a significant stride toward early detection and diagnosis, harnessing advanced machine learning techniques within the realm of ECG signal processing, specifically tailored to pregnant women.

Introduction

Cardiovascular disease is a significant medical condition that affects heart performance and leads to complications such as coronary artery disease and impaired vascular function [1]. These difficulties can result in myocardial infarction and cerebrovascular accidents. According to a survey, heart disease annually impacts an estimated 620,000 individuals in the United States [2]. While heart disease can affect both genders, males are more vulnerable. Statistics from 2010 show that a quarter of all fatalities were attributed to heart disease. In the United States, there are approximately 738,000 cases of heart attacks, with 528,000 of these cases being initial occurrences; the remaining 220,000 individuals experience subsequent episodes. Symptoms of heart disease include chest tightness, pain and discomfort, shortness of breath, ankle swelling, neck and abdominal pain, rapid heartbeat, dizziness, cardiac arrest, fainting, changes in skin color, ankle irritation, weight loss, and fatigue. The manifestation of symptoms depends on the type of cardiovascular ailment, which includes but is not limited to arrhythmia, myocardial infarction, heart failure, congenital coronary artery disease, mitral valve insufficiency, and dilated cardiomyopathy.
Congenital heart disease (CHD) refers to a group of structural abnormalities in the heart that occur during the prenatal stage of development [3]. These congenital abnormalities manifest during the prenatal period and affect the morphology and physiology of the heart, leading to various cardiovascular complications. CHD is a common congenital anomaly around the world, imposing a significant health burden on affected individuals and medical systems alike. The global incidence of CHD shows significant variation, with an estimated occurrence of approximately 1% of all live births [4]. In the United States, it is believed that around 40,000 newborns are affected by CHD each year. The severity of the condition can vary, ranging from individuals who experience minimal to no symptoms to those who require immediate medical attention. CHD encompasses a diverse range of anomalies, including structural malformations in the heart valves, walls, and vasculature. Examples of frequently encountered instances of CHD include atrial septal defects, ventricular septal defects, and Tetralogy of Fallot [5].

It is widely recommended that all pregnant women worldwide undergo fetal evaluation and ultrasound between 18 and 24 weeks of gestation [6]. This procedure involves detailed imaging, including ultrasound scans of the heart, which has the potential to identify over 90% of severe congenital heart conditions. However, despite the widespread use of fetal ultrasound technology, the prevalence of fetal detection for genetic cardiovascular diseases within the community ranges from 30 to 50% [7]. The hypothesis suggests that the main reason for the significant disparity in diagnoses is insufficient and inconsistent proficiency in analyzing fetal cardiac images [8]. This is primarily due to the complexity of detecting a small and rapidly beating fetal heart, as well as the relatively low awareness of congenital coronary artery disease among healthcare providers, given its low incidence. Although clinical quality assurance efforts focused on a single center and conducted on a small scale have shown promising results in improving CHD detection rates by up to 100%, the sustainability and scalability of such programs present significant challenges. To address this, experiments were conducted to determine if utilizing machine learning (ML) analysis of images could improve the evaluation rates typically observed in community medicine [9]. This was achieved by training the ML model on data from a limited number of clinically relevant imaging studies.
Machine learning has been demonstrated to be proficient in detecting intricate image patterns and has been successfully applied in adult cardiovascular ultrasound technology [10]. It has even surpassed the performance of physicians in tasks involving the classification of views, using small and downsampled databases. However, despite its widespread usage across various domains, the application of machine learning in the context of CHD or fetal ultrasound still requires further refinement. The use of deep learning in medical scenarios that are inherently rare presents inherent limitations, irrespective of the volume of training data available. The hypothesis posits that by utilizing input data curated based on clinical recommendations, specifically by selecting five exclusive cardiac examination viewpoints, the algorithms would be capable of identifying diagnostic indicators even when dealing with databases of limited size. The identified research gaps are as follows:

• There is a need for further research on the utilization of data derived from the Internet of Medical Things (IoMT) in diagnosing cardiovascular disease.
• More investigation is needed to explore the potential of utilizing Long Short-Term Memory (LSTM) architecture and attention systems for the detection of cardiovascular disease.
• There is a lack of comprehensive research on the concurrent utilization of CNN-BiLSTM-AM (Convolutional Neural Networks, Bidirectional Long Short-Term Memory, Attention Mechanisms) to achieve precise diagnostic outcomes in the context of cardiovascular disease.
• It is necessary to conduct a thorough assessment and comparison of various models for identifying signs of cardiovascular disease using the Heart Disease UCI and Cardiovascular Disease Dataset databases.

The research presented in this study makes significant contributions to the field of cardiovascular disease diagnosis by leveraging data from the Internet of Medical Things (IoMT). Firstly, it addresses the crucial challenge of timely identification and detection of cardiovascular diseases by utilizing IoMT data, aiming to enhance the accuracy and efficiency of diagnosis, which in turn enables prompt intervention and improves patient outcomes. Secondly, the study explores the potential of leveraging IoMT data for accurate diagnosis, providing insights into the utilization of this emerging technology in the field. Thirdly, it investigates the integration of Long Short-Term Memory (LSTM) design and Attention Mechanisms to enhance disease detection, paving the way for advanced techniques to improve diagnostic accuracy. Moreover, the study introduces a novel diagnostic model that combines Convolutional Neural Network (CNN), Bidirectional Long Short-Term Memory (BiLSTM), and Attention Mechanism (AM) techniques, offering a comprehensive approach to enhance the classification accuracy of cardiovascular diseases. Lastly, a comprehensive assessment and comparison of various models is conducted using widely recognized databases such as the Heart Disease UCI and Cardiovascular Disease Dataset, contributing to the understanding and advancement of IoMT data, LSTM design, and advanced neural network models in the field of cardiovascular disease diagnosis, with the ultimate goal of improving patient outcomes.
The research paper is structured into several sections to provide a clear and organized presentation of the study. Section 2 focuses on the literature review, discussing the current issues and challenges associated with the classification of congenital heart disease (CHD). Moving on to Section 3, the Machine Learning-based Congenital Heart Disease Prediction Method (ML-CHDPM) is introduced, outlining the methodology and algorithms and presenting the results obtained from applying ML-CHDPM to CHD datasets. Section 4 then presents the software results and performance assessments, analyzing the software implementation of ML-CHDPM and evaluating its performance through metrics such as accuracy, precision, recall, and F1-score. Finally, Section 5 offers concluding remarks, summarizing the key findings of the study and highlighting potential directions for future research in CHD detection methods. By following this structured format, the research paper ensures a coherent flow of information, enabling readers to easily navigate through the literature review, methodology, results, and conclusion, thereby comprehending the contributions and implications of the study.

Background and Literature Survey

Various conventional techniques in machine learning have been employed to address the challenges associated with manually analyzing electrocardiogram (ECG) signals in Coronary Heart Disease (CHD). The conventional machine learning approach involves several steps, including preprocessing, feature extraction, feature selection, and categorization processes. Differentiating between normal and CHD signals based on their distinctive characteristics is a time- and resource-intensive task. The robustness of the features obtained is significantly impacted by the quality of the underlying data. Preprocessing steps, such as noise elimination and R-peak identification, are essential to extract crucial attributes needed for effective categorization. This research suggests leveraging machine learning to improve the efficiency of an automated CHD diagnosis method, aiming to overcome the limitations associated with traditional machine learning approaches. Machine learning algorithms play a crucial role in acquiring and recognizing unique features from input ECG signals. The goal is to enhance the accuracy and effectiveness of the diagnostic process for CHD through the utilization of advanced machine learning techniques.

In their study, Xu et al. [11] presented a novel methodology for the automated classification of pediatric Congenital Heart Disease (CHD) through the analysis of heartbeats. The researchers conducted an extensive extraction of diverse features from normal heart signals, encompassing characteristics derived from the time domain, frequency domain, and wavelets. Employing machine learning methodologies, particularly random forest and support vector machines, the proposed approach demonstrated promising outcomes. The results revealed a commendable accuracy rate of 87.5% in effectively categorizing CHD cases. Additionally, the specificity and sensitivity values were noteworthy, standing at 89.7% and 85.2%, respectively. These findings underscore the efficacy of the devised method in reliably identifying pediatric CHD through the analysis of heartbeats, showcasing its potential as a valuable diagnostic tool in this medical context.
Ng et al. [12] designed an automated framework aimed at classifying perioperative hazards in patients with complex Congenital Heart Disease (CHD) by leveraging retinal images. The authors introduced an innovative feature extraction method that harnessed both color-based and texture-based characteristics obtained from retinal images. Subsequently, these extracted features were employed in risk classification through the application of machine learning, specifically utilizing a support vector machine. Results from the implemented framework demonstrated a notable predictive accuracy, achieving an impressive rate of 84.9% in effectively identifying perioperative risks in patients diagnosed with complex congenital heart disease. This research highlights the potential of utilizing retinal images and advanced machine learning techniques as a valuable tool for automating the identification of perioperative hazards, thereby contributing to enhanced patient care and risk management in the context of complex CHD cases.

Kobel et al. [13] conducted a thorough assessment of the Apple Watch iECG's effectiveness in detecting Congenital Heart Disease (CHD) in children. The study involved obtaining iECG measurements from pediatric patients, including those with and without CHD. A meticulous comparative analysis was performed by juxtaposing the iECG data against conventional ECG records. The outcomes of this investigation suggest a promising role for the Apple Watch iECG as a potential screening tool for CHD in children, revealing a sensitivity of 92% and an accuracy of 93% in identifying the condition. In a separate study led by van Genuchten and colleagues [14], the physical capacity of children diagnosed with CHD was evaluated. A cohort of pediatric patients with CHD underwent exercise tests to assess their peak oxygen uptake (VO2 peak). The research findings unveiled a significant result: children with CHD exhibited diminished exercise capacity compared to their healthy counterparts, as evidenced by lower VO2 peak measurements. These results underscore the considerable impact of CHD on the ability of pediatric populations to partake in physical activities, shedding light on the broader implications of the condition on the overall well-being of affected individuals.

Kavitha et al. [15] introduced an innovative approach termed Multilayer Deep Detection Perceptron (MLDDP) for the identification of testicular deviations, both with and without Congenital Heart Disease (CHD). This practical method utilized a Multilayer Deep Learning framework that incorporated multiple layers of perceptrons to discern anomalies associated with CHD. Upon evaluating the proposed MLDDP on a provided dataset, it demonstrated an exceptional detection accuracy of 95.4% in precisely identifying testicular deviations, irrespective of the presence of CHD. The success of MLDDP underscores the potential of machine learning techniques in advancing the diagnosis of CHD-related conditions.
In a distinct study, Liu et al. [16] concentrated on the computer-aided analysis of heart sounds in pediatric patients diagnosed with left-to-right shunt CHD. The researchers introduced a methodology leveraging machine learning techniques, specifically employing a Convolutional Neural Network (CNN), to extract pertinent features from heart sound signals and accurately classify the presence of left-to-right shunt CHD. The outcomes were noteworthy, with a precision rate of 90.8% and an area under the receiver operating characteristic curve of 0.935, indicating the method's efficacy in identifying and categorizing CHD with left-to-right shunt. These findings underscore the potential of machine learning-based analyses in supporting medical professionals in diagnosing specific types of CHD, showcasing the promising intersection of technology and healthcare.

Ge et al. proposed an innovative method for identifying Pulmonary Hypertension (PH) associated with Congenital Heart Disease (CHD) by incorporating time-frequency domain analysis and machine learning (ML) characteristics [17]. The researchers integrated time-frequency analysis techniques into an ML framework to extract relevant features from echocardiographic data. Through their approach, the method achieved an impressive precision level of 91.6% in accurately identifying pulmonary hypertension linked to congenital heart disease. This study demonstrates the efficacy of combining advanced signal processing techniques with machine learning approaches to enhance the identification and characterization of specific cardiac conditions, specifically focusing on the challenging context of pulmonary hypertension in the presence of congenital heart disease.

Steeden et al. [18] delved into the exploration of utilizing artificial intelligence (AI) in the assessment of Congenital Heart Disease (CHD). The authors undertook a thorough examination of AI-based methodologies, specifically machine learning (ML), applied to tasks such as image analysis, risk estimation, and detection within the realm of CHD. Their comprehensive analysis provided insightful perspectives on the potential of AI in augmenting the assessment and treatment of individuals with CHD. By shedding light on the various applications of AI in the context of CHD, the study contributes to the evolving landscape of medical technology and its role in advancing cardiac care and diagnostics.

Alici-Karaca et al. introduced a Convolutional Neural Network (CNN) with a lightweight architecture designed to precisely classify cases of radiation-induced liver disease, as detailed in their research publication [19]. While their study does not directly focus on congenital heart disease, it underscores the application of machine learning in medical image analysis. The featured lightweight CNN successfully achieved an impressive classification accuracy rate of 93.1% when tasked with identifying radiation-induced liver disease. This result highlights the versatile capacity of machine learning to be applied across diverse medical conditions, showcasing its potential in aiding accurate diagnoses beyond the specific context of congenital heart disease.
Qiao et al. presented the Residual Learning based Diagnostic System (RLDS), an innovative diagnostic system employing residual learning, designed for cases of fetal Congenital Heart Disease (CHD) [20]. This system utilized residual learning, a type of machine learning, to extract distinctive features from images of fetal echocardiography for discrimination. Remarkably, the diagnostic accuracy of the RLDS for fetal CHD reached an impressive 96.5%. Additionally, the system offered interpretability by generating attention maps and assigning importance scores to features, enhancing the understanding of the diagnostic process for medical professionals. This research underscores the potential of incorporating machine learning, specifically residual learning, in creating advanced diagnostic tools for fetal CHD with the added benefit of interpretability.

Table 1 summarizes key findings from various studies investigating the use of machine learning (ML) in cardiovascular health, focusing on congenital heart disease (CHD) and related conditions. Each entry includes the reference number, author, study objective, and identified limitations. This compilation offers a succinct overview of the objectives pursued by researchers, shedding light on both the potential and challenges associated with ML applications in the diagnosis and assessment of cardiovascular health.

Table 1. Objectives and limitations of related studies:
• Liu et al. [16]: Computer-aided analysis of heart sounds in pediatric patients with left-to-right shunt CHD using a CNN. Limitations: limited to a specific type of CHD; dependency on the quality of input heart sound data.
• Ge et al. [17]: Identification of Pulmonary Hypertension associated with CHD using time-frequency domain analysis and ML. Limitations: limited to Pulmonary Hypertension; generalization to other types of CHD.
• Steeden et al. [18]: Exploration of AI-based methodologies for CHD assessment. Limitations: limited discussion on specific limitations; generalizability to different AI methodologies.
• Alici-Karaca et al. [19]: CNN for classifying radiation-induced liver disease. Limitations: limited to radiation-induced liver disease; dependency on image quality.
• Qiao et al. [20]: Introduction of RLDS for diagnosing fetal CHD using residual learning. Limitations: limited to fetal CHD; dependency on the quality of fetal echocardiography images.

Based on recent research, several limitations have been identified within the domain of congestive heart failure detection and machine learning applications in electrocardiogram (ECG) diagnosis systems:

1. Scope for Improvement in Existing Methods: The current machine learning methods utilized for congestive heart failure detection are widely acknowledged to have room for enhancement. Further research and innovation are needed to refine and optimize these methodologies for improved accuracy and reliability.
Proposed Methodology

This research introduces an innovative automated detection methodology engineered to enhance the accuracy of congenital heart disease (CHD) detection. This novel approach leverages a sophisticated amalgamation of Convolutional Neural Networks (CNNs), Bidirectional Long Short-Term Memory (BiLSTM) networks, and Attention Mechanisms (AMs). Each component plays a pivotal role in fortifying the model's ability to accurately identify CHD cases. The CNN component, a fundamental pillar of this methodology, is meticulously designed to focus on the salient characteristics inherent in the input data, with particular emphasis on capturing details within the line of sight. This feature engineering approach proves to be exceptionally advantageous in the context of CHD detection, as it empowers the model to discern intricate patterns and subtle nuances within the data, thus augmenting its diagnostic capabilities. Complementing the CNN, the BiLSTM component is seamlessly integrated into the model architecture. The BiLSTM networks are instrumental in capturing temporal dependencies within the data. Specifically, they conduct an in-depth analysis of preprocessed electrocardiogram (ECG) signals, which are pivotal in CHD diagnosis. By considering the temporal dynamics of the data, the BiLSTM networks enhance the model's ability to adapt and make accurate predictions, especially when dealing with time-series data like ECG signals.

Furthermore, the model incorporates the indispensable Attention Mechanisms (AMs), a pivotal element in refining its predictive accuracy. These AMs introduce a mechanism to emphasize the significance of past time series data and state information features, allowing the model to incorporate historical context into its final predictions. This strategic utilization of AMs significantly influences the model's adaptive capabilities and contributes to more precise CHD predictions. The culmination of these components results in a robust CHD prediction model, driven by advanced machine learning techniques. Figure 1 visually encapsulates the architecture of this model, highlighting the distinctive roles played by the CNN, BiLSTM, and AM in the pursuit of accurate CHD detection. This comprehensive approach signifies a significant advancement in the field, with the potential to revolutionize the accuracy and efficiency of CHD diagnosis.
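To make the pipeline described above concrete, the following is a minimal sketch of how a CNN-BiLSTM-AM model of this kind could be assembled in PyTorch. It is an illustration only, not the authors' exact network: the channel counts, kernel size, hidden size, and the form of the attention scoring are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttention(nn.Module):
    """Illustrative CNN -> BiLSTM -> attention classifier for 2-s ECG windows."""
    def __init__(self, in_channels=2, n_classes=2, hidden=150):
        super().__init__()
        # CNN front end: local feature extraction from the raw ECG samples.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # BiLSTM captures temporal dependencies across the CNN feature sequence.
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        # Additive attention: projection plus a learned scoring vector.
        self.att_proj = nn.Linear(2 * hidden, 2 * hidden)
        self.att_vec = nn.Linear(2 * hidden, 1, bias=False)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):           # x: (batch, channels, samples)
        f = self.cnn(x)             # (batch, 32, samples/2)
        f = f.transpose(1, 2)       # (batch, time, features) for the LSTM
        h, _ = self.bilstm(f)       # (batch, time, 2*hidden)
        u = torch.tanh(self.att_proj(h))            # project hidden states
        alpha = torch.softmax(self.att_vec(u), 1)   # attention weights over time
        c = (alpha * h).sum(dim=1)                  # weighted context vector
        return self.classifier(c)

model = CNNBiLSTMAttention()
logits = model(torch.randn(4, 2, 500))  # four 2-s, 2-channel windows at 250 Hz
```

The dimensions used here mirror the data described later in the paper (two ECG channels, 500-sample windows at 250 Hz); any other shapes would work equally well with adjusted layer sizes.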
Data Source Collection from IoMT

Data for this study are drawn from the burgeoning field of the Internet of Medical Things (IoMT), a paradigm depicted in Figure 2. For this innovative approach, an array of sensors is strategically deployed and meticulously placed on the cardiac muscle, serving as vigilant sentinels to capture and scrutinize electrocardiogram (ECG) signals. Additionally, the research explores the potential utilization of alternative sensors designed to gather electroencephalogram (EEG) indications. It is imperative to emphasize that this investigation maintains a laser focus on the automated identification of congenital heart disease (CHD). Consequently, the study exclusively relies on the rich dataset derived from ECG signals. While the availability of alternative sensors for EEG data collection is acknowledged, their utilization remains beyond the scope of the current research. This rigorous and specialized focus on ECG signals underscores the study's commitment to delivering precise and effective solutions for CHD diagnosis.

In this study, our primary focus is on pregnant women, recognizing the exceptional significance of detecting congenital heart disease (CHD) during pregnancy. CHD is a complex and potentially life-threatening condition, and early detection is paramount for effective management. This importance is magnified during pregnancy, where the health and well-being of both the expectant mother and the developing fetus are intricately linked. Our research dataset comprises two distinct groups, both consisting exclusively of pregnant women. The first group includes 15 pregnant patients who have been medically diagnosed with CHD. The second group consists of 18 pregnant individuals who are considered to be in good health, as they exhibit normal sinus rhythm (NSR). Additionally, within the CHD group, we have identified a subset of pregnant patients who are dealing with severe congestive heart failure, further underscoring the critical nature of early CHD detection during pregnancy. To gather data for analysis, we meticulously collected electrocardiogram (ECG) signals from these pregnant individuals over an extensive monitoring period, approximately spanning 20 h. Each data entry in our dataset is composed of a pair of ECG channels, recorded at a frequency of 250 Hz. This comprehensive data collection process ensures that we have access to a wealth of information that can shed light on the cardiac health of pregnant women.
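As an illustration of how two-channel recordings of this kind are typically read in practice, the sketch below loads a PhysioBank-style record with the wfdb package and inspects its sampling rate and channels. The record name is a hypothetical placeholder, not an identifier from this study.

```python
import wfdb  # Python package for reading PhysioBank/PhysioNet records

# Hypothetical record name; substitute a real record path or PhysioNet ID.
record = wfdb.rdrecord("subject01")

signal = record.p_signal   # numpy array, shape (n_samples, n_channels)
fs = record.fs             # sampling frequency in Hz, e.g., 250
print(signal.shape, fs)

# A 20 h recording at 250 Hz yields 250 * 3600 * 20 = 18,000,000 samples
# per channel, which is then cut into 2-s windows for the model.
```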
The significance of our study becomes apparent in several key aspects:

1. Maternal and Fetal Health: Pregnancy imposes unique physiological demands on a woman's cardiovascular system. Detecting CHD during pregnancy is not only about the mother's health but also about ensuring optimal fetal development. The health of both mother and child is intertwined, making accurate detection a matter of utmost importance.
2. Timely Intervention: Identifying CHD in pregnant women at an early stage allows for timely medical interventions and the implementation of tailored healthcare strategies. These interventions can potentially prevent complications that might arise during pregnancy or childbirth.
3. Advanced Technology: Our study leverages advanced machine learning techniques, tailored specifically to the pregnant population. By training our model on this exclusive dataset of pregnant women, we can capture intricate patterns and relationships that are highly relevant to this demographic.
4. Unique Dataset: A distinctive feature of our study is the exclusive focus on pregnant women's datasets. This focus bridges a critical gap in research and ensures that our findings are directly applicable to the population of pregnant individuals.

In conclusion, our research is poised to make a substantial contribution to the field of prenatal care. The advanced machine learning model we have developed, trained on data exclusively from pregnant women, demonstrates its effectiveness in accurately predicting and categorizing CHD cases within this specific population. Our aim is to enhance prenatal care, safeguarding the health and well-being of both expectant mothers and their unborn children.
Materials

The ECG signals used in this study were obtained from publicly available databases, specifically the PhysioBank repository. The severity of congenital heart disease (CHD) indications was classified using the New York Heart Association (NYHA) measure. According to this classification system, Category 1 represents a mild condition with no discernible restriction on exercise, while Category 2 indicates a benign condition with slight movement restrictions. In Category 3, there is a moderate level of physical activity with significant limitations, and Category 4 signifies a severe impairment that completely restricts physical activity.

The ECG data related to CHD used in this study are classified under the Category 3 and 4 groups. Four distinct databases, namely Categories A, B, C, and D, were employed in this study. Categories A and B consist of complete ECG information that needs to be balanced, whereas Categories C and D contain equal amounts of ECG information. From the complete dataset, an average of thirty thousand ECG information points were randomly extracted for Categories C and D.

Preprocessing

Preprocessing of the ECG data in this research study involved several steps to ensure data quality and consistency. The databases used, namely the Fantasia ECG and MIT-BIH Normal Sinus Rhythm (NSR) databases, had different sampling rates of 250 Hz and 128 Hz, respectively. To maintain uniformity, the signals obtained from the NSR database were upsampled to 250 Hz. To facilitate further analysis, the ECG records were segmented into 2-s intervals without simultaneous R-peak identification. Each segment consisted of 500 samples, corresponding to a duration of 2 s. Before being inputted into the system, the ECG signals underwent regularization through Z-score standardization. This standardization process involved adjusting each data point to have a mean of zero and a standard deviation of one. Applying these preprocessing techniques ensures that the ECG data are standardized and ready for subsequent analysis. These steps help enhance the accuracy and reliability of the findings by addressing sampling rate discrepancies and normalizing the data for consistent comparison and evaluation.
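A minimal sketch of this preprocessing chain (resampling to 250 Hz, 2-s segmentation, and per-segment Z-score standardization) is given below, assuming NumPy/SciPy; the function name and window handling are illustrative rather than the authors' exact code.

```python
import numpy as np
from scipy.signal import resample

def preprocess_ecg(signal, fs_in, fs_out=250, win_s=2):
    """Resample an ECG trace to fs_out, cut it into win_s-second windows,
    and Z-score standardize each window (mean 0, std 1)."""
    # Upsample (e.g., 128 Hz NSR records) to the common 250 Hz rate.
    n_out = int(round(len(signal) * fs_out / fs_in))
    sig = resample(signal, n_out)

    # Non-overlapping 2-s windows of 500 samples each.
    win = fs_out * win_s
    n_win = len(sig) // win
    segments = sig[: n_win * win].reshape(n_win, win)

    # Z-score standardization per segment.
    mu = segments.mean(axis=1, keepdims=True)
    sd = segments.std(axis=1, keepdims=True) + 1e-8  # avoid divide-by-zero
    return (segments - mu) / sd

windows = preprocess_ecg(np.random.randn(128 * 60), fs_in=128)
print(windows.shape)  # (30, 500): thirty 2-s windows at 250 Hz
```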
LSTM Architecture

LSTM is a variant of the artificial neural network architecture that incorporates dual memory systems, known as long-term and short-term memory. Time-series information is prevalent in various fields and is often addressed using LSTM as a solution. LSTM is a recurrent neural network architecture that is equipped with trainable memory cells. The LSTM cell consists of two distinct states: the long-term cell state (c_t) and the short-term cell state (h_t). These states represent the intermediate results of the cell at a given time period, denoted by t. LSTM cells possess three modifiable gates: the input gate, output gate, and forget gate. These gates control the flow of information into and out of the LSTM cell. One of the remarkable features of LSTM cells is their ability to retain information about specific values over multiple time periods. The gates serve as cellular components that regulate the flow of information within the cell. During training, the LSTM cells learn to determine which data are relevant to retain and which to discard. In summary, the LSTM architecture is designed to handle time-series data by incorporating long-term and short-term memory components. The modifiable gates enable the LSTM cells to control the flow of information, allowing for the retention and utilization of relevant data during training.

Within the architecture of Long Short-Term Memory (LSTM) cells, the cells play a pivotal role in serving as conduits for the seamless transmission of data. These data transmissions are meticulously regulated by gates, which act as gatekeepers, determining the permissibility of information based on the current cellular context. Specifically, the neural network governing the behavior of the forget gate within the LSTM cell is structured as a fundamental single-layer design. The structural configuration of the LSTM cell is visually depicted in Figure 3, providing an insightful overview of its inner workings. The activation of the forget gate is computed by Equation (1), where x_p represents the sequential input information, h_(p-1) denotes the result of the preceding LSTM block, and C_(p-1) signifies the memory of the previous LSTM block. The bias vector is represented by b_p, the weight vectors are denoted by W, and the activation function is σ; the activation functions most commonly utilized are tanh and sigmoid. The mathematical expression representing the sigmoid activation function is presented in Equation (2).

The input gate (i) is a component that applies the hyperbolic tangent activation function to the recently introduced memory and the preceding memory blocks. These operations are executed using Equations (3) and (4), where x_p is the sequential input information, h_(p-1) is the result of the preceding LSTM block, C_(p-1) denotes the memory of the previous LSTM block, and C_p denotes the memory of the current LSTM block. The bias vectors are denoted b_(p,c) and b_(p,i), the weight vectors by W, and the activation function by σ. The output gate determines the LSTM block's result, and the LSTM outputs are computed using Equations (5) and (6), where x_p is the sequential input information, h_(p-1) is the result of the preceding LSTM block, C_p denotes the memory of the LSTM block, and the output gate is marked o_p. The bias vector is denoted b_(p,o), the weight vectors by W, and the activation function by σ.
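The gate computations referenced in Equations (1)-(6) follow the standard LSTM formulation, which the following NumPy sketch makes explicit. The weight shapes and initialization are illustrative assumptions; the indices mirror the text's notation (x_p, h_(p-1), C_(p-1)).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_p, h_prev, c_prev, W, b):
    """One LSTM time step in the standard formulation:
    forget gate f, input gate i, candidate memory g, output gate o."""
    z = np.concatenate([h_prev, x_p])        # joint [h_(p-1), x_p] input
    f = sigmoid(W["f"] @ z + b["f"])         # Eq. (1): forget gate
    i = sigmoid(W["i"] @ z + b["i"])         # Eq. (3): input gate
    g = np.tanh(W["c"] @ z + b["c"])         # Eq. (4): candidate memory
    c_p = f * c_prev + i * g                 # updated cell state C_p
    o = sigmoid(W["o"] @ z + b["o"])         # Eq. (5): output gate
    h_p = o * np.tanh(c_p)                   # Eq. (6): new hidden state
    return h_p, c_p

# Toy dimensions: 4 input features, 3 hidden units.
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(3, 7)) * 0.1 for k in "fico"}
b = {k: np.zeros(3) for k in "fico"}
h, c = lstm_step(rng.normal(size=4), np.zeros(3), np.zeros(3), W, b)
```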
Attention Mechanisms

The utilization of attention systems is prevalent in diverse domains, including but not limited to natural language processing and image detection. The process in question resembles the visual system observed in the human brain. Its objective is to accentuate the more crucial data while screening out irrelevant data not pertinent to the current task. Numerous empirical investigations have demonstrated the efficacy of utilizing this process to enhance the discernment of data, and as such, it has been incorporated into the suggested structure. The attention calculations are established in Equations (7)-(10). The weight matrix is denoted as w_p, while the bias vector is represented by b_p. The symbol α_p is employed in evaluating the resemblance of v_p and the context vector v_k. The concealed layer is denoted H_p, the transpose function is denoted T, and the computation result is denoted c.
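Read as the standard additive-attention recipe, Equations (7)-(10) amount to projecting each hidden state, scoring it against a learned context vector, normalizing the scores with a softmax, and taking the weighted sum. The sketch below spells this out in NumPy; the dimensions and parameter names (w_p, b_p, v_k) are assumptions mirroring the text's notation rather than the paper's exact implementation.

```python
import numpy as np

def attention(H, w_p, b_p, v_k):
    """H: hidden states, shape (T, d). Returns the context summary c."""
    u = np.tanh(H @ w_p.T + b_p)           # project each hidden state H_p
    scores = u @ v_k                       # similarity with context vector v_k
    alpha = np.exp(scores - scores.max())  # softmax -> attention weights α_p
    alpha /= alpha.sum()
    c = alpha @ H                          # weighted sum of hidden states
    return c, alpha

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 8))                # six time steps, 8-dim states
c, alpha = attention(H, rng.normal(size=(8, 8)), np.zeros(8),
                     rng.normal(size=8))
print(c.shape, alpha.sum())                # (8,) 1.0
```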
Training Procedure of CHD Diagnosis System

The training procedure comprises the following primary stages.

Step 1: The data input necessary to develop the proposed ML-CHDPM system is obtained.

Step 2: Data standardization is a technique utilized to enhance the efficacy of a framework in the presence of significant variations in the input information. The z-score approach is incorporated into the input normalization process, as outlined in Equation (10), where the symbols S_v(x) and S_id denote the input information's normalized value and standard deviation, and I_pd(x) and I_md denote the input information and its corresponding average.

Step 3: The layers of the proposed model are linked with weights and biases to initialize the networks.

Step 4: The computational process within the CNN layer is systematically transmitted via the convolutional and pooling layers present within the CNN layer. Within this stratum, features are obtained from the input information and the resultant output values are ascertained.

Step 5: The computation of the hidden layer associated with the BiLSTM layer is contingent upon the output information generated by the CNN layer, which is utilized to ascertain its resultant value.

Step 6: The AM layer is employed to compute the output information of the BiLSTM layer to ascertain its output value.

Step 7: The computation of the model's results is contingent upon the result value of the AM layer.

Step 8: The calculation of error involves determining the disparity between the genuine value of the data grouping and the established computation value generated by the result of the layer.

Step 9: Determine whether the categorization process has reached its termination condition. Within this particular context, the criteria for termination are deemed to be achieved under three circumstances: firstly, when the rate of forecasting error falls below the designated threshold value; secondly, upon completion of a certain number of phases; and thirdly, when the weight update is smaller than a particular threshold value. The training process is deemed to be concluded upon fulfilling any of these circumstances.

Step 10: The training procedure reverts to Step 4 to resume training, following the update of biases and weights for each layer and the propagation of the computed error in the reverse direction.

The diagnosis algorithm for CHD detection is shown in Algorithm 1. The "Train Model" algorithm takes an input training dataset and outputs a trained model. The algorithm consists of several steps. Firstly, the input data are derived and then standardized for preprocessing. Weights and biases are initialized to set up the model. The training loop begins, where the forward pass is performed to compute the hidden layer output, followed by applying an activation function to generate the output. The final output is computed, and the error is calculated by comparing it with the desired training data. Termination criteria are checked, and if the error meets the criteria, the loop is exited. Otherwise, weights and biases are updated to optimize the model. This loop continues until the termination condition is satisfied. Finally, the trained model is returned as the output of the algorithm.
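As a compact illustration of Steps 4-10 and Algorithm 1, the loop below trains the hypothetical CNNBiLSTMAttention model sketched earlier with cross-entropy loss and an error-threshold stopping rule. The optimizer settings and the threshold are placeholders, not the paper's tuned values.

```python
import torch
import torch.nn as nn

def train_model(model, loader, epochs=50, err_threshold=0.05, lr=3e-4):
    """Algorithm-1-style loop: forward pass, loss, backpropagation, and a
    stop when the average loss drops below a threshold or epochs run out."""
    criterion = nn.CrossEntropyLoss()
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        total, n = 0.0, 0
        for x, y in loader:                  # Steps 4-7: forward pass
            loss = criterion(model(x), y)    # Step 8: error computation
            optim.zero_grad()
            loss.backward()                  # Step 10: backpropagation
            optim.step()
            total, n = total + loss.item(), n + 1
        if total / n < err_threshold:        # Step 9: termination check
            break
    return model
```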
Classification Method of CHD

Accurately identifying and categorizing CHD pathology is contingent upon the training procedure and its successful culmination. The details of the detection procedure are explicated as follows.

Step 1: The input information necessary to identify and categorize CHD is considered.

Step 2: The input information is standardized.

Step 3: The trained ML-CHDPM system processes the standardized information as input to determine its corresponding output value.

Step 4: Restoring the normalized data to their initial scale is accomplished using Equation (11), where S_id denotes the input information's standard deviation, and O_v(n) and I_md denote the output result and the input information's average, respectively.

Step 5: The resultant value is obtained as the outcome of a categorization problem that distinguishes ECG records into typical and CHD disease indications.

The process is shown below.

• Input Sequences: The LSTM block receives sequential data representing the clinical and demographic features pertinent to CHD. The input sequence can be denoted as X = {x_1, x_2, ..., x_n}, where each x_i reflects the input at a specific time step i.
• Input Embedding: It is common practice to utilize an embedding layer before feeding the input sequence into the LSTM block. This layer facilitates the mapping of input values onto a continuous vector space, enabling the capture of connections between various input characteristics. The input sequence X is transformed into a novel representation using an embedding layer. This new sequence representation is denoted as E and comprises individual elements e_1, e_2, ..., e_n.
• LSTM Architecture: The LSTM block comprises memory cells that retain information throughout the sequential inputs. The LSTM architecture consists of three gates, namely the input, forget, and output gates, which regulate the data flow within each cell.
• Hidden State Initialization: The LSTM block necessitates an initial hidden state h_0 and an initial cell state c_0 at the onset of the sequences. Usually, these states start as vectors consisting of zeros or learned variables.
• LSTM Computation: The LSTM block sequentially processes the input sequence, iteratively modifying the hidden state and cell state at each time step.
  a. The computation of gates involves the input gate (i_n), forget gate (f_n), and output gate (o_n) for every time step. These gates are calculated based on the present input (e_n) and the previous hidden state (h_(n-1)).
  b. Updating the memory cell (c_n) involves the integration of the input gate, the forget gate, and the last memory cell (c_(n-1)). The information storage process is governed by the input gate, which selects the relevant data to be retained, and the forget gate, which decides the data to be disregarded.
  c. The computation of the hidden state (h_n) is contingent upon the revised memory cell (c_n) and the output gate (o_n) in the hidden state update. The latent state conveys pertinent data from the input sequence to the following temporal intervals.
  d. The computation of the output at a given time step (y_n) involves the application of a linear transformation to the hidden state (h_n), followed by the potential application of an activation function. Applying an activation function depends on the particular categorization task specifications.
• Final Time Step: Upon completion of the input sequence processing, the LSTM block generates a final hidden state, denoted as h_n, and a result corresponding to the final time step.
• Classification Layer: The LSTM block's result is subsequently inputted into the classification layer, which comprises one or more densely connected layers. These layers acquire the ability to establish a correlation between the output of the LSTM and the intended categorization labels, thereby encapsulating intricate patterns within the dataset.
• Training and Optimization: The labeled database of CHD instances is utilized for training the LSTM block in conjunction with the classification layer. The LSTM block and classification layer variables are optimized using gradient descent and backpropagation through time to reduce a suitable loss function.
• Inference: Upon completing the training process of the LSTM block, it becomes viable to classify novel and unobserved data. The LSTM block is utilized to process the input sequence, and subsequently, the anticipated CHD categorization is obtained by passing the result through the classification layer.

Training and Testing of the Model

The Xavier initialization technique is commonly employed to initialize the weights of an algorithm. The ML-CHDPM algorithm utilized in the present research was updated using backpropagation with a batch size of 10. The cross-entropy function is utilized for the assessment of network loss. The ML-CHDPM architecture was trained with specific variables to optimize its diagnostic efficiency. These variables include lambda (L1 normalization) set to 0.2, a learning rate of 3 × 10^-4, and a momentum value of 0.3. These variables hinder overfitting through normalization, facilitate convergence via the learning rate, and modulate the pace of learning through momentum. In the loss computation, f(i) denotes the network output, P_x indicates the predicted probability over the complete set of classes, and x signifies the total number of classes.

The present study employs a stratified ten-fold cross-validation approach. The ECGs of the four distinct groups have been partitioned into ten discrete sections. The model is trained using nine parts, while the remainder is reserved for testing. Each partitioned segment comprises a proportion of the class target comparable to that of the complete dataset. This study consists of ten repetitions.

Model Evaluation

To assess the effectiveness of the case segmentation approach, the Mask-RCNN results are subjected to six measurements for validation and evaluation. These indicators include classification loss, segmentation loss, identification loss, the Jaccard index, the Dice coefficient similarity for segmentation, and the mean average precision for identifying objects.

The Dice coefficient similarity (DCS), closely related to the Jaccard index, is utilized to assess the resemblance and variation of sets of specimens. In this instance, the objective is to evaluate the efficacy of predicted images against comprehensive ground-truth labeling. Equation (14) depicts the DCS, where the expected outcomes are denoted by i_x over a given number of samples M, with j_x the corresponding truth label. The DCS, whose pixel-level index falls within the range [0, 1], quantifies the likelihood of correspondence between the anticipated and actual images.

The quantitative evaluation of the trained and verified Mask R-CNN approach was conducted using mean average precision (mAP). The mAP is commonly used to evaluate object detection models. The mAP values for the different groups were calculated and subsequently averaged. While the model could identify multiple objects, the definitive classification of said objects was only occasionally ascertainable. Beyond the accuracy of the anticipated classification for an object or instance, the output metric necessitates an evaluation of the model's spatial localization performance within the image. The widely utilized mAP is represented by Equation (15), where the variable m_cl represents the aggregate of distinct classes, the sum over x = 0 to N-1 of m_xy denotes the overall count of pixels belonging to class x, and the time is denoted t_x.
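For concreteness, the two overlap measures reduce to simple set arithmetic on binary masks, as the following sketch (an illustration, not the paper's evaluation code) shows.

```python
import numpy as np

def dice_and_jaccard(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B|.
    pred and truth are binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)
    jaccard = inter / (np.logical_or(pred, truth).sum() + 1e-8)
    return dice, jaccard

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_and_jaccard(pred, truth))  # both values lie in [0, 1]
```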
This section outlines a comprehensive methodology for the timely identification and diagnosis of cardiovascular ailments by utilizing data derived from the IoMT. The implementation uses an LSTM design and Attention Mechanism to effectively capture the data's temporal relationships and significant features. The precision of disease categorization is further improved by developing a diagnosis model utilizing ML-CHDPM. The next section comprises a comprehensive assessment and juxtaposition of diverse models using the Heart Disease UCI and Cardiovascular Disease databases, thereby underscoring the efficacy of the proposed approach.

Simulation Analysis and Outcomes

The simulation analysis section provides a comprehensive overview of the experiments conducted to evaluate the effectiveness of the proposed strategy. It describes the datasets used and the simulation measurements employed, highlighting the findings and conclusions attained. It demonstrates the efficacy of the proposed approach in identifying and assessing cardiovascular disease.

Datasets

Our research methodology is intricately woven around the utilization of diverse datasets, each carefully chosen to address specific facets of our investigative objectives. The Heart Disease UCI dataset [21–24] serves as a foundational element, standing out with its comprehensive array of 76 attributes that encompass a wide range of patient-related information. From this dataset, we identified a subset of attributes crucial to our analysis. To uphold the privacy of individuals, meticulous anonymization measures were implemented, replacing any personally identifiable information, such as patient identities and security numbers, with synthetic values. Our focus within this dataset is specifically directed towards the subset of attributes deemed functionally significant for our research objectives.

In contrast, the Cardiovascular Disease Dataset [25] emerges as a cornerstone in the execution of our proposed hybridized technique. This dataset comprises a voluminous repository of 70,000 individual patient records, each intricately characterized by 11 distinct features. Despite the initial dataset presenting an extensive pool of 210 factors, we conducted a rigorous curation process to select only the most relevant eight features for our analytical framework. Within this dataset, 93 individuals received a diagnosis of coronary artery disease, while 116 individuals were identified as free from cardiovascular disease. The diagnostic categorization is represented by a binary variable, where 0 indicates the absence of coronary artery disease and 1 signifies its presence. The deliberate selection and curation of features within this dataset are pivotal, aligning meticulously with our research objectives and ensuring a nuanced exploration of factors integral to our study.
A crucial shift in our research direction involves an exclusive focus on 5-min ECG data sourced from pregnant women. This strategic refinement aligns our study more precisely with its overarching objectives, allowing for an in-depth analysis of short-term heart rate variability (HRV) patterns. This adjustment is particularly critical in our pursuit of refining early detection methods for cardiovascular conditions during pregnancy, underscoring the dynamic nature of our research and ensuring a targeted exploration of factors specifically relevant to our evolving goals. The inclusion of these datasets forms the bedrock of our proposed methodology, providing the necessary breadth and depth for the development and validation of our novel hybridized technique.

Beyond these primary datasets, we also leverage the Fantasia dataset, a pivotal component contributing to the diversity and richness of our analysis. Fantasia, a publicly available dataset, comprises a curated collection of electrocardiogram (ECG) recordings obtained from a diverse group of subjects. Notably, the dataset includes recordings from individuals with various cardiac conditions, offering a broad spectrum of cardiac signals that align with the scope of our research objectives.

The MIT-BIH Arrhythmia Database further enhances the breadth of our analysis. Widely recognized and utilized in cardiac research, this database includes annotated long-term ECG recordings from a variety of subjects, capturing a range of cardiac arrhythmias and conditions. The meticulous annotations within this dataset provide a valuable resource for the development and validation of algorithms aimed at detecting and classifying arrhythmias, aligning seamlessly with the goals of our study.

Together, these datasets contribute to the richness and diversity of our research, providing a robust foundation for the development, validation, and refinement of our proposed methodologies.

Parameter Settings and System Configuration

The Bi-LSTM system was configured with 8 epochs, a hidden layer size of 150, and a dropout rate of 0.2. The Adam optimization algorithm was employed for network optimization, using a fixed iteration count of 50 and a population size of 50. The proposed model for congenital heart disease (CHD) identification was implemented on a computational system equipped with an Intel i7 processor, 16 GB of memory, and a 6 GB graphics card.

Metrics

The analysis utilizes the following quantities: true positives (T_po), true negatives (T_ne), false positives (F_po), and false negatives (F_ne). These quantities are employed in the analysis section to assess the accuracy of the predictions. Accuracy measures the count of correctly predicted outcomes for a given set of input specimens and is calculated using Equation (16). Precision is a reliable metric in scenarios where false positives are costly; its computation is shown in Equation (17). The sensitivity, or recall, metric is determined by dividing the number of true positives by the sum of true positives and false negatives, as shown in Equation (18). Specificity pertains to instances wherein the true state of the condition is absent, particularly in the context of negative attack categorization; the calculation is performed according to Equation (19). The false positive rate (FPR) quantifies the proportion of misclassified samples that were predicted as positive despite belonging to the negative category, and is calculated using Equation (20). The false negative rate (FNR) quantifies the proportion of misclassified samples that belong to the positive class; it is calculated using Equation (21).
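Equations (16)-(21) are the standard confusion-matrix ratios; a minimal sketch computing all six from raw counts follows (variable names mirror the text's T_po/T_ne/F_po/F_ne notation, and the example counts are arbitrary).

```python
def confusion_metrics(t_po, t_ne, f_po, f_ne):
    """Standard ratios from confusion-matrix counts (Eqs. 16-21)."""
    accuracy    = (t_po + t_ne) / (t_po + t_ne + f_po + f_ne)  # Eq. (16)
    precision   = t_po / (t_po + f_po)                         # Eq. (17)
    recall      = t_po / (t_po + f_ne)                         # Eq. (18)
    specificity = t_ne / (t_ne + f_po)                         # Eq. (19)
    fpr         = f_po / (f_po + t_ne)                         # Eq. (20)
    fnr         = f_ne / (f_ne + t_po)                         # Eq. (21)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                specificity=specificity, fpr=fpr, fnr=fnr)

print(confusion_metrics(t_po=77, t_ne=55, f_po=5, f_ne=3))
```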
The false negative rate (FNR) quantifies the proportion of actual positive samples that are misclassified as negative. It is calculated using Equation (21):

FNR = F_ne / (F_ne + T_po). (21)

Experimental Results

In the results section of our study, we aim to provide a detailed exposition of how each component of our innovative framework contributes to enhancing the accuracy of coronary heart disease (CHD) detection. This analysis is essential for a comprehensive understanding of the role played by each element in achieving our research objectives.

Our framework integrates three key components: Convolutional Neural Networks (CNNs), Bi-directional Long Short-Term Memory (BiLSTM) networks, and Attention Mechanisms (AMs). Each of these components plays a pivotal role in strengthening the model's ability to identify CHD cases accurately.

The CNN component, a foundational pillar of our methodology, excels at feature extraction by focusing on salient characteristics within the input data. CNNs effectively capture intricate patterns and subtle nuances within the data, and this detailed feature engineering significantly enhances the model's diagnostic capabilities, ultimately leading to improved CHD detection. Complementing the CNN, our model incorporates BiLSTM networks, which are instrumental in capturing temporal dependencies within the data. The BiLSTM networks conduct an in-depth analysis of preprocessed electrocardiogram (ECG) signals, a critical aspect of CHD diagnosis; considering the temporal dynamics of the data empowers the model to adapt and make accurate predictions, especially when dealing with time-series data such as ECG signals.

Furthermore, Attention Mechanisms refine predictive accuracy by dynamically assigning varying degrees of importance to past time-series data and state-information features. This strategic use of AMs enhances the model's adaptive capabilities, contributing to more precise CHD predictions. By detailing how each part of our framework contributes to the results, we offer a nuanced understanding of their individual and collective impact on CHD detection. In this section, the simulation results of the proposed method are compared with those of existing techniques.

The accuracy results of various approaches, including MLDDP, CNN, RLDS, RF, NB, and our proposed ML-CHDPM, across iterations from 0 to 200, are depicted in Figure 4. The mean accuracy values for these methods are as follows: MLDDP (91.07%), CNN (91.39%), RLDS (91.23%), RF (89.04%), NB (89.19%), and our novel ML-CHDPM method introduced in this study (96.51%). The ML-CHDPM clearly outperforms the alternative methodologies, achieving significantly higher accuracy. This improvement can be attributed to the successful integration of the LSTM framework, the Attention Mechanism, and the tailored training regimen within the ML-CHDPM diagnostic model. These techniques allow our approach to extract and leverage essential characteristics from the input data, resulting in more precise predictions and strong overall efficacy in diagnosing cardiovascular ailments.
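The CNN-BiLSTM-attention pipeline described above, together with the hyperparameters stated in the configuration section (hidden size 150, dropout 0.2, Adam, 8 epochs), can be sketched as follows. This is a minimal reconstruction for illustration, not the authors' implementation; the convolutional width and kernel size are assumptions, since the paper does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ml_chdpm(timesteps: int, channels: int) -> tf.keras.Model:
    inp = layers.Input(shape=(timesteps, channels))
    # CNN front end: extracts local morphological features from the ECG.
    x = layers.Conv1D(32, kernel_size=5, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(pool_size=2)(x)
    # BiLSTM: models temporal dependencies in both directions
    # (hidden size 150 and dropout 0.2, as stated in the paper).
    x = layers.Bidirectional(
        layers.LSTM(150, return_sequences=True, dropout=0.2))(x)
    # Additive attention: one learned score per time step, softmax-normalized,
    # then a weighted sum over time producing a context vector.
    scores = layers.Dense(1, activation="tanh")(x)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Dot(axes=1)([weights, x])   # (batch, 1, 2*150)
    context = layers.Flatten()(context)
    out = layers.Dense(1, activation="sigmoid")(context)  # CHD present / absent
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (8 epochs as in the paper):
# model = build_ml_chdpm(timesteps=300, channels=1)
# model.fit(x_train, y_train, epochs=8, validation_data=(x_val, y_val))
```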
Precision results for multiple methods, including MLDDP, CNN, RLDS, RF, NB, and our proposed ML-CHDPM, across a range of iterations from 0 to 200, are graphically presented in Figure 5. These precision values for each method were meticulously calculated, yielding the following results: MLDDP at 81.56%, CNN at 81.21%, RLDS at 82.38%, RF at 80.31%, NB at 80.97%, and our innovative ML-CHDPM at 89.14%. It is noteworthy that the ML-CHDPM model demonstrates a significant enhancement in precision when compared to alternative approaches. Consistently outperforming its counterparts, our model achieves an average precision rate of 89.14%. This exceptional precision improvement can be attributed to the strategic incorporation of the LSTM architecture, the Attention Mechanism, and the tailored training process within the ML-CHDPM diagnostic system. These techniques empower our approach to adeptly acquire and leverage informative characteristics from the input data, resulting in more accurate predictions and heightened precision in diagnosing cardiovascular ailments.
Figure 6 presents the recall results expressed as percentages across a range of iterations from 0 to 200, evaluating various methods, including MLDDP, CNN, RLDS, RF, NB, and our proposed ML-CHDPM. The mean recall rates for these methods are as follows: MLDDP (94.27%), CNN (93.25%), RLDS (94.02%), RF (92.56%), NB (92.97%), and the innovative ML-CHDPM (99.19%). Remarkably, the ML-CHDPM model showcases a significant enhancement in recall compared to alternative methodologies. It consistently achieves the maximum recall rate, averaging an impressive 99.19%. These exceptional results can be attributed to the proficient utilization of the LSTM framework, the strategic incorporation of the Attention Mechanism, and the tailored training regimen within the ML-CHDPM diagnostic model. These techniques enable our methodology to adeptly grasp and leverage pertinent data from the input, resulting in enhanced prognostic precision and heightened identification of cardiovascular ailments.
Figure 7 illustrates the specificity results presented as percentage values across a range of iterations from 0 to 200, employing various methods including MLDDP, CNN, RLDS, RF, NB, and our proposed ML-CHDPM. The mean specificity values for these methods are as follows: MLDDP (83.85%), CNN (83.16%), RLDS (83.93%), RF (82.05%), NB (82.34%), and our innovative ML-CHDPM method introduced in this study (90.11%). Notably, the ML-CHDPM model showcases a substantial enhancement in specificity when compared to alternative approaches. It consistently achieves the highest level of specificity, with an average of 90.11%. This improved efficacy can be attributed to the implementation of sophisticated methodologies, including the LSTM framework, the strategic incorporation of the Attention Mechanism, and the tailored training regimen within the ML-CHDPM diagnostic model. These techniques enable our method to efficiently acquire and utilize pertinent characteristics from the input data, resulting in enhanced prognostic precision and superior identification of cardiovascular ailments.

Figure 8 provides a comprehensive view of the false positive rate (FPR) outcomes achieved by various techniques, including MLDDP, CNN, RLDS, RF, NB, and our proposed ML-CHDPM, across iterations ranging from 0 to 200. The mean FPR for each method is as follows: MLDDP (15.69%), CNN (15.36%), RLDS (15.49%), RF (15.97%), NB (15.87%), and our innovative ML-CHDPM method introduced in this study (9.08%). Remarkably, the ML-CHDPM model showcases a significant reduction in the false positive rate compared to alternative methodologies. It consistently achieves the lowest rate of false positives, averaging just 9.08%. This enhanced performance can be attributed to the incorporation of sophisticated methods, including the utilization of LSTM, the strategic integration of the Attention Mechanism, and the tailored training procedure within the ML-CHDPM diagnostic model. These methodologies enable our approach to effectively identify and minimize erroneous positive predictions, ultimately resulting in improved precision and reliability in the diagnosis of cardiovascular ailments.
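The FPR values above, like the accuracy, precision, recall, specificity, and FNR results, follow directly from the confusion-matrix definitions in Equations (16)-(21). A minimal sketch of how these six metrics could be computed from binary predictions (illustrative only, not the authors' evaluation code; denominators are assumed nonzero):

```python
import numpy as np

def confusion_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute the six metrics of Equations (16)-(21) from binary labels."""
    t_po = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    t_ne = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    f_po = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    f_ne = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    return {
        "accuracy":    (t_po + t_ne) / (t_po + t_ne + f_po + f_ne),  # Eq. (16)
        "precision":   t_po / (t_po + f_po),                         # Eq. (17)
        "recall":      t_po / (t_po + f_ne),                         # Eq. (18)
        "specificity": t_ne / (t_ne + f_po),                         # Eq. (19)
        "fpr":         f_po / (f_po + t_ne),                         # Eq. (20)
        "fnr":         f_ne / (f_ne + t_po),                         # Eq. (21)
    }

# Usage:
# m = confusion_metrics(y_test,
#                       (model.predict(x_test) > 0.5).astype(int).ravel())
```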
Figure 9 provides a comprehensive view of the false negative rate (FNR) outcomes achieved by various techniques, including MLDDP, CNN, RLDS, RF, NB, and our proposed ML-CHDPM, across iterations spanning from 0 to 200. The false negative rates for each method are as follows: MLDDP (12.99%), CNN (13.56%), RLDS (13.34%), RF (12.79%), NB (12.89%), and our innovative ML-CHDPM (8.04%). Notably, the ML-CHDPM model demonstrates a substantial reduction in the false negative rate compared to alternative methodologies. It consistently achieves the lowest rate of false negatives, with an arithmetic mean of just 8.04%. This enhanced performance can be attributed to the integration of sophisticated methodologies, including the utilization of LSTM, the strategic incorporation of the Attention Mechanism, and the tailored training regimen within the ML-CHDPM diagnostic model. These methods enhance the efficiency of our proposed approach in precisely identifying and categorizing instances of cardiovascular ailments, resulting in fewer erroneous negative results and an overall improvement in diagnostic performance.

The simulation findings indicate that the ML-CHDPM approach performs better than the alternative methodologies across multiple evaluation metrics, with an average accuracy of 95.94%, precision of 89.21%, recall of 97.35%, specificity of 90.57%, false positive rate of 9.43%, and false negative rate of 2.65%. These findings suggest that the proposed approach exhibits a higher level of proficiency in precisely identifying and categorizing occurrences of cardiovascular ailments.

Conclusions and Future Scope

The global prevalence of CHD has made it a prominent public health issue, highlighting the need for efficient diagnostic techniques that facilitate timely detection and intervention. The present study investigated the effectiveness of the ML-CHDPM approach in diagnosing and categorizing cardiovascular diseases, with the objective of improving the precision and effectiveness of CHD diagnosis using machine learning methodologies. The study identified several significant concerns regarding the diagnosis of CHD, encompassing the requirement for precise classification models, the selection of relevant features, and the capacity to manage voluminous and intricate datasets. To address these challenges, the ML-CHDPM methodology was introduced; this approach integrates machine learning methods with an extensive feature selection technique. The ML-CHDPM approach demonstrates several noteworthy characteristics: it facilitates the recognition of pertinent diagnostic features within intricate datasets, resulting in enhanced classification precision. The efficacy of the ML-CHDPM approach was assessed through rigorous simulations and experiments, and the findings indicated that this approach outperformed the other methodologies. The study's findings suggest that the method demonstrated superior performance across six key metrics: accuracy, precision, recall, specificity, FPR, and FNR, achieving an average accuracy of 94.28%, precision of 87.54%, recall of 96.25%, specificity of 91.74%, FPR of 8.26%, and FNR of 3.75%.

The findings of the research have significant implications for both the academic community and society at large. This study makes a valuable contribution to medical informatics by demonstrating the potential of machine learning techniques in diagnosing cardiovascular disease. The proposed ML-CHDPM approach constitutes a valuable augmentation to the current corpus of knowledge, stimulating additional investigation and innovation in this field. The approach also provides substantial societal advantages: the precise and prompt identification of CHD facilitates immediate intervention and enhances the overall health outcomes of patients, and the efficacy and dependability of the approach will assist healthcare practitioners in making well-informed judgments, resulting in improved disease control and decreased healthcare expenditures.
Even with the encouraging outcomes, it is crucial to recognize the constraints of this study. The research was centered on a particular dataset, which constrains how far the results can be generalized to other contexts; additional verification using more extensive and heterogeneous datasets is imperative. Practical and ethical aspects also require careful deliberation before the suggested approach is deployed in real-life clinical environments. It is recommended that further research conduct comparative analyses with other advanced machine learning algorithms to provide additional validation of the superior performance of the ML-CHDPM technique. Interpretability methods could be used to augment the model's transparency and dependability by providing valuable insights into its decision-making process. Finally, actual clinical trials are necessary to assess the efficacy and practicality of the approach in real healthcare environments.

Figure 1. Work process of the proposed ML-CHDPM. Geometric symbols denote classification; lines denote process flow.

Figure 3. The structure of the LSTM.
Function Spaces with Bounded L^p Means and Their Continuous Functionals

Introduction

Families of Banach spaces of locally L^p functions whose means satisfy various boundedness conditions on finite intervals were introduced in [1-3] and references therein as a natural environment to extend the notion of almost periodic functions originally introduced in [4-6]. All the spaces of bounded p-means contain L^p, but usually they consist of functions that are not small at infinity and have norms defined by the asymptotic behaviour of their integral means. Therefore a relevant part of the information carried by these functions is at infinity, where they may become large. Which consequences does this fact yield for convolution operators, and for continuous functionals on these spaces? Should we expect the same behavior that is typical of L^p spaces? This paper presents old and (some) new results and proofs for this question: it is aimed to show that bounded p-mean spaces behave as L^p spaces on the issue of completeness, but (some or all of them) are completely different for what concerns isometric properties of translations, convolution operators, separability, representation theorems for continuous linear functionals, and extreme points of the unit balls.

For this goal, we focus our attention onto three significant families of locally L^p spaces on R: Marcinkiewicz spaces M^p, consisting of functions whose values on finite intervals are irrelevant (they can be changed without changing the norm, which is indeed only a seminorm); Stepanoff spaces S^p, whose norm depends only on the maximum L^p content of functions on all finite intervals; and finite p-mean spaces B^p, where the p-means with respect to intervals [-t, t] are bounded with respect to t, for t sufficiently large (say t >= 1).

We give an expanded and revised presentation of some known results, mostly taken from the fundamental paper [7], and prove some new ones. The results on Marcinkiewicz spaces and bounded p-mean spaces are taken from [7]; the duality results for Stepanoff spaces in Section 6 are new, and so are the examples of discontinuous asymptotic functionals on S^p and B^p. The analysis of the integral representation of continuous functionals on B^p (Theorem 29) follows again [7] very closely but provides many more details, gives slightly more general statements, and improves several steps. Also the description of extreme points of the unit ball of B^p (Section 7.1) is taken from [7]; the results on extreme points for the unit ball of M^p (Section 7.2) are a greatly expanded and somewhat improved revision of the approach of [7].

Inequality (10) gives a bound for the norm of g:

‖g‖_{M^p} <= ‖f‖_{M^p} ‖μ‖_M. (14)

The convolution defined in this way is a linear continuous operator from M^p to M^p, with the norm bounded by ‖μ‖_M. Since L^1(R) embeds isometrically into the space of finite Borel measures on R, we can convolve every ρ ∈ L^1 with functions in M^p. In particular, ⋯. As for the spaces introduced before, also B^p is complete. However, translations are not isometries here (see also [16, inequality (30)]).

Proof. Without loss of generality, let t > 0.
Choose 0 < a < t and let f be the characteristic function of the interval [t - a, t + a]. Then, for s >= 1, the p-mean (1/(2s)) ∫_{-s}^{s} |f(x)|^p dx is largest for s = t + a, and its maximum value, ‖f‖_{B^p}^p, is a/(t + a). On the other hand, the translate τ_t f is a characteristic function centered at 0, and its norm in B^p is 1 if a >= 1 and a^{1/p} if a < 1. The constant in this inequality is maximum for p = 1, and its largest value is 1 + t/a.

Stepanoff Spaces. Stepanoff functions, introduced in [1], are those measurable functions whose L^p-norm on intervals of length, say, 1, is bounded. The supremum of these norms, sup_{x∈R} ∫_x^{x+1} |f|^p, defines a norm for the Stepanoff space S^p, 1 <= p < ∞. The Stepanoff space S^∞ coincides with L^∞ and is not considered in this paper. It is immediate to prove (see Corollary 9 below) that the M^p-norm is bounded by the S^p-norm, and therefore the Stepanoff space embeds into the Marcinkiewicz space.

More generally, for every T > 0, one could introduce the following T-mean Stepanoff norm: ⋯. Now let k ∈ Z be such that ⋯. Since the Stepanoff norms are clearly invariant under translations, we can limit attention to positive x and T. Now, ⋯. Therefore the limit exists; it is infinite if and only if ‖f‖^{(T)}_{S^p} is infinite for some, hence for all, T.

The Weyl norm defines a normed space called the Weyl space W^p. This was one of the first bounded-mean spaces introduced in order to extend the definition of almost periodic functions [2]. However, it was proved in [10] that the Weyl space is not complete; therefore it will not be considered in this paper.

Completeness

It is easily seen that the spaces B^p and S^p are complete. Instead, it is considerably more difficult to show that M^p, hence its Banach quotient M^p/I_p, are complete. The proof given here essentially reproduces the ingenious argument given by Marcinkiewicz in [9], except for correcting minor computational mistakes.

In particular, ⋯. For the sake of simplicity, for every f ∈ M^p and t ∈ R, we rewrite the norm defined in (18) as follows: ⋯. Then, for every f ∈ M^p, ‖f‖_{M^p} = inf_{t>0} ⋯; therefore we obtain a Cauchy sequence. Let us choose a sequence such that ⋯. We claim that the sequence converges to the following function f: ⋯. Indeed, for k >= 1 define ⋯ and observe that ⋯, and so ⋯ for some constant depending only on p. Therefore ⋯ by the first inequality in (38). The same argument for s > t yields ⋯. Hence, again by the first inequality (38), ⋯. Finally, by the same argument, ⋯ and so ⋯. Now, by (44), (46), (48), and (50), one has ⋯. Therefore, the lim sup with respect to t satisfies the inequality ⋯, and the claim is proved, hence the theorem.
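Since several displayed formulas did not survive in this copy, it may help to restate, in LaTeX, the (semi)norms under discussion, as they are standardly defined in the sources cited above ([1-3, 7]); the exact normalizations should be checked against those sources:

```latex
% Standard (semi)norms of the bounded p-mean families, 1 <= p < infinity.
\[
\|f\|_{M^p} = \limsup_{t\to\infty}
  \Bigl(\frac{1}{2t}\int_{-t}^{t}|f(x)|^p\,dx\Bigr)^{1/p}
  \quad\text{(Marcinkiewicz seminorm)},
\]
\[
\|f\|_{B^p} = \sup_{t\ge 1}
  \Bigl(\frac{1}{2t}\int_{-t}^{t}|f(x)|^p\,dx\Bigr)^{1/p}
  \quad\text{(bounded $p$-means)},
\]
\[
\|f\|_{S^p} = \sup_{x\in\mathbb{R}}
  \Bigl(\int_{x}^{x+1}|f(y)|^p\,dy\Bigr)^{1/p}
  \quad\text{(Stepanoff norm)},
\]
\[
\|f\|_{W^p} = \lim_{T\to\infty}\,\sup_{x\in\mathbb{R}}
  \Bigl(\frac{1}{T}\int_{x}^{x+T}|f(y)|^p\,dy\Bigr)^{1/p}
  \quad\text{(Weyl norm)},
\]
% with the continuous inclusions, for fixed p, stated in the text:
\[
S^p \subset B^p \subset M^p .
\]
```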
Remark 11. The embedding of S^p in B^p is proper; it is not an isomorphism, or, equivalently, the norm of S^p cannot be bounded by a multiple of the B^p-norm. Indeed, we have already observed that translations are isometries on S^p. Instead, ‖τ_t(f)‖_{B^p} tends to 0 as t → ±∞ for every f with compact support.

As a consequence of Corollary 9, one has a continuous embedding B^p ⊂ M^p. Therefore the embedding is projected onto the Banach quotient M^p/I_p; one has B^p/I_p ⊂ M^p/I_p. It turns out that these two quotients coincide. This has been proved in [17, Proposition 2.2(ii)]; here we give a slightly different proof.

Proposition 12. M^p/I_p is isometrically isomorphic to B^p/I_p.

Proof. We have already observed that the latter quotient is embedded in the former and that, for every f ∈ B^p, one has ‖f‖_{M^p} <= ‖f‖_{B^p/I_p}. Let [f] denote the coset of f mod I_p. We only need to show that the coset of every f ∈ M^p contains a function in B^p and that the norms are equal.

Let f ∈ M^p and ⋯. We also recall that the null space I_p of the Marcinkiewicz seminorm, endowed with the norm of B^p, is a Banach space. Now the following result, proved in [16, Theorems 2 and 3], is easy.

(iii) If p, q are conjugate indices and 1 < p <= ∞, then B^p is the dual space of I_q (it will follow from Theorem 60(iii) that B^1 is not a dual space).

(iv) Moreover ⋯, so ⋯. Therefore the closed subspace of M^p generated by the sequence {f_n} is isomorphic to ℓ^∞, and the same argument also works for the quotient M^p/I_p.

The Dual Spaces of M^p and M^p/I_p

The Riesz representation theorem shows that all continuous linear functionals on L^p spaces can be represented as (integrals against) functions in L^q, and so they depend mostly on the values of the functions on compact sets. Our aim here is to show that, on spaces of locally summable functions that can be large at infinity, some continuous functionals depend on asymptotic values and cannot be represented by functions in the usual integral sense (we shall see that most of them can be represented by integrals of means). Continuous linear functionals on Marcinkiewicz spaces have been studied in [7], and on bounded p-mean spaces in [16]. We present these results here and construct interesting examples of functionals that are not represented by functions; in the next section, we extend these results to Stepanoff spaces.

Functionals on Seminormed Spaces. Let us consider the dual space of the Marcinkiewicz space M^p. This is a complete seminormed space but not a Banach space. It is clear that its continuous linear functionals are precisely those that factor through the null space I_p of the seminorm, that is, the dual of the Banach quotient M^p/I_p. Indeed, all continuous linear functionals on a complete seminormed space vanish on the null space N of the seminorm, because if a functional ℓ does not vanish on N, then ℓ(v) ≠ 0 for some non-zero v ∈ N, but ‖v‖ = 0 because N is the null space of the seminorm; hence there is no constant C such that |ℓ(v)| < C‖v‖. The converse is obvious.
Since every compactly supported function is in I_p, the dual of M^p does not contain non-zero functionals that can be represented as functions; that is, it consists of linear functionals that depend only on the asymptotic behaviour of Marcinkiewicz functions. Here are the two most natural ones, defined and continuous on a closed subspace of M^p and thereby extended to continuous functionals on the whole of M^p by the Hahn-Banach theorem: ⋯.

There are interesting instances of M^p-discontinuous functionals defined on appropriate subspaces of M^p. For instance, the limit functionals defined on the subspaces of functions vanishing at infinity are discontinuous. The lack of continuity is equivalent to the fact that these subspaces are not closed in M^p; the proof of this elementary fact will be given in Corollary 50.

Here are some other interesting M^p-discontinuous functionals. For 0 <= α <= 1, let ⋯ as t → +∞. It is clear that for α = 1 this space is M^p and, for α < 1, it is contained in the closed subspace of M^p of the functions with seminorm 0; in particular, these subspaces are not dense in M^p. Since each such subspace lies in the null space of the seminorm, the only linear functional on it that is continuous in the seminorm of M^p is the zero functional. We now exhibit some interesting nontrivial (hence discontinuous) linear functionals on it. For simplicity, we first describe them in the case p = 1: ⋯. Clearly, Hölder's inequality shows that, for p > 1, the correct way to define these functionals is by replacing the exponent at the denominator with 1 - (1 - α)/p.

In the next sections we expand these ideas to achieve a more complete representation, developed in [7], where the above Hahn-Banach extensions are reinterpreted as Dirac measures on the points at infinity of a suitable Stone-Čech compactification.

Uniformly Convex Normed Spaces

Definition 19 (see [18]). A normed (or seminormed) space is uniformly convex if, for every ε > 0 and all vectors x, y in the unit ball such that ‖x - y‖ >= ε, there exists δ(ε) > 0 such that (1/2)‖x + y‖ <= 1 - δ(ε). The function δ(ε) is called the modulus of convexity; its geometrical meaning is the infimum of the distance from the midpoint of x and y to the unit sphere (the boundary of the ball). Observe that δ is a nondecreasing function of ε.

The following results are stated without proof in [7].

Lemma 20. (i) Let X be a uniformly convex space and ℓ a continuous linear functional on X of norm 1 that attains its norm at a vector x with ‖x‖ = 1, in the sense that ℓ(x) = 1. Then, for every y in the unit ball of X with ‖x - y‖ >= ε, one has |ℓ(y)| <= 1 - 2δ(ε).

Proof. We can restrict attention to the bidimensional subspace of X generated by x and y. The proof is illustrated in Figure 1. For simplicity, we have drawn the figure under the implicit assumption that the restriction of the X-norm to this bidimensional space is the Euclidean norm, and indeed the dotted line that represents the hyperplane {h : ℓ(h) = 1} is drawn as perpendicular to the radius of the unit ball, but the only property that we are using is that all of the ball is on one side of this line; that is, we only use the fact that the ball is convex: the triangular inequality.

Part (i) follows by considering the segment in Figure 1, drawn from y to the hyperplane {ℓ = 1} and orthogonal to this hyperplane, whose length is ℓ(x) - ℓ(y) = 1 - ℓ(y). This segment is twice as long as its parallel segment drawn from the mid-point (x + y)/2, which in turn is longer than the distance between the mid-point and the unit sphere, hence longer than δ(ε).
For part (ii), it is enough to observe that, whenever ‖y‖ <= ‖x‖ < 1, the distance from x/‖x‖ to y is larger than ‖x - y‖, and to apply part (i).

The Dual of B^p: Integral Representation of Norm-Attaining Continuous Functionals. Now we describe the dual of the spaces of bounded p-means, studied in [7, 16]; in particular, we describe an integral representation, obtained in [7], for those continuous linear functionals that attain their norm. All the forthcoming results on integral representation are taken from [7]; our proofs are more detailed and expanded than those in the original paper.

We start with some easy comments on functionals that attain their norm.

Lemma 22. (i) Let V be a Banach space and V′ its dual space. Then every element of V, regarded as a functional on V′, attains its norm on some element of V′. In particular, all continuous functionals on reflexive spaces attain their norms.

(ii) Every real finitely additive finite Borel measure on a Borel space X, regarded as a continuous functional on L^∞(X), attains its norm. The norm is attained on a function that has modulus 1 on the support of the measure. The same is true for complex-valued finitely additive measures on R, provided that they are absolutely continuous with respect to Lebesgue measure.

(iii) If X is not compact, finitely additive measures are continuous functionals on C ∩ L^∞(X) (by restriction from functionals on L^∞(X)); so, not all continuous functionals on this space are given by countably additive measures. A real finitely additive measure attains its norm on C ∩ L^∞(X) if and only if it is positive.

(iv) Not every (countably additive) finite (real or complex) Borel measure on R, regarded as a continuous functional on C(R), attains its norm, but it attains its norm if it is positive (up to multiplication by a constant).

Proof. We first observe that, for every v ∈ V, there is a norm-one functional on V whose value at v is ‖v‖. This is trivially true in the one-dimensional subspace generated by v, for some linear functional on that subspace; the requested functional is the norm-preserving Hahn-Banach extension to the whole of V.

The real finitely additive finite Borel measures on a Borel space X are the continuous dual of L^∞(X). Let χ_+ and χ_- be the characteristic functions of the supports of the positive and negative parts of the measure μ, respectively. Then ‖μ‖ = |μ|(X) = μ(χ_+ - χ_-). This proves the first half of part (ii), and it also proves parts (iii) and (iv): a real countably additive measure attains its norm if and only if it is positive (because it attains its norm only on the function χ_+ - χ_-, which is discontinuous unless one of its two terms vanishes), or, slightly more generally, if it is a constant multiple of a positive measure.

Let us show that an absolutely continuous measure attains its norm as a functional on L^∞(R). Indeed, if μ(E) = ∫_E h for every measurable set E and for some h ∈ L^1, then μ attains its norm on the conjugate phase of h, which has modulus 1 wherever h does not vanish.

In general, though, discrete non-positive measures on R do not attain their norm; for instance, let {q_n} be an enumeration of the rationals and let μ = Σ_n (-1)^n 2^{-n} δ_{q_n}; since Q is everywhere dense in R, there is no continuous function f such that f(q_n) = (-1)^n, and so μ cannot attain its norm as a functional on C(R).
To finish the proof, let us provide examples of continuous functionals on C ∩ L^∞(R) that are not represented by countably additive measures. Consider the closed subspace of C ∩ L^∞ of functions that have a finite limit for, say, x → +∞. The limit lim_{x→+∞} f(x) is continuous on this subspace and, by the Hahn-Banach theorem, extends to a continuous functional on all of C ∩ L^∞ that vanishes on all compactly supported functions. Represented as a measure μ, this functional vanishes on all bounded sets, but μ(R) = 1; therefore μ is finitely but not countably additive.

Definition 23. For every locally L^p function f on R, let f^♮ be defined by f^♮(x) = conj(phase(f(x))) |f(x)|^{p-1} if f(x) ≠ 0, and f^♮(x) = 0 otherwise (here, as usual, for z ∈ C with z = |z| e^{iθ}, we write e^{iθ} = phase(z)).

Proof. If f ∈ B^p, one has f^♮ ∈ B^q, because 1/p + 1/q = 1 implies that q(p - 1) = p, and so ⋯ can be uniquely written as ⋯.

Proof. Recall that, for every Borel space X, the dual space of L^∞(X) is the space of finitely additive measures on X. By restriction, the dual space of C ∩ L^∞(X) is again this space. More precisely, as C ∩ L^∞ is a norm-closed subspace of L^∞, its dual space is the quotient of (L^∞(X))′ obtained by identifying two finitely additive measures that give rise to the same functional when restricted to C ∩ L^∞(X); apart from this equivalence, the dual of C ∩ L^∞(X) is isometrically isomorphic to the space of finitely additive measures. Then the isometric isomorphism between C ∩ L^∞(X) and C(β(X)) induces an isometry between the respective dual spaces, the countably additive Borel measures on β(X) and the finitely additive measures on X.

The characterization of extreme points, whose details are left to the reader, follows from this.

On the basis of this isomorphism, from now on, with abuse of notation, we shall write the measure on β(X) corresponding to a finitely additive measure ν on X again as ν. The representing measure can be described more precisely for functionals attaining their norm, as follows.

Theorem 29 (integral representation of functionals on B^p attaining their norm). Let p, q be conjugate indices, with 1 < p, q < ∞, and ℓ a continuous functional on B^p that attains its norm. Then, for some g in the unit sphere of B^q (notation as in Definition 26) and for some finite finitely additive positive measure λ on [1, ∞], the representation (90) holds for every f ∈ B^p. Moreover, λ >= 0, ‖λ‖ = ‖ℓ‖, the p-mean of g over [-t, t] equals 1 on the support of λ, and ℓ attains its norm on g^♮. Conversely, let ℓ be a functional as in (90), with respect to a finitely additive measure λ. Then this integral representation of ℓ is unique (except for the identification mentioned in the proof of Corollary 28), and ℓ is continuous on B^p. Moreover, ℓ attains its norm if and only if the measure λ is positive and the p-mean of g over [-t, t] equals 1 on the support of λ.

Proof. Without loss of generality, assume ‖ℓ‖ = 1. By Lemma 27 the dual of B^p is isometric to the space of countably additive measures on β(X); therefore, for some ν with ‖ν‖ = 1 and for all f ∈ B^p, one has ⋯. Let g ∈ B^p be a function on which ℓ attains its norm: ⋯. As ‖g^♮‖_{B^q} = 1, the integrand has modulus less than or equal to 1 for every f in the unit ball of B^p. On the other hand, ⋯. As ‖ν‖ = 1, the measure must be positive. Moreover, the p-mean of g over [-t, t] equals 1 on the support of λ.

Conversely, let us write ⋯, where ν is the finitely additive Borel measure on X given by the product ν = λ × δ_g. This integral representation is uniquely associated to an integral representation over β(X), of the form ℓ(f) = ∫_{β(X)} f^† dν. ⋯ is also necessary. But the necessity holds in general also for complex spaces, as we have shown in the first half of the proof. That proof also shows that the p-mean condition holds on the support of λ and ℓ(g^♮) = 1.
It is important to remember that, in general, the Stone-Čech compactification of a cartesian product does not coincide with the product of the compactifications (for an example, see [20, Chapter 6, problem 6N2]), and so the points of the compactification should not be written as pairs, as is done, for the sake of brevity, in the original reference [7, Lemma 4.3].

We now apply the Yosida-Hewitt decomposition theorem for finitely additive measures [21]. We need the following definition.

Definition 31. A Borel measure μ on a topological space is purely finitely additive (p.f.a.) if, whenever ν is a nonnegative countably additive Borel measure bounded by μ, in the sense that 0 <= ν <= |μ|, then ν = 0.

The Yosida-Hewitt decomposition theorem [21, Theorem 1.24] states that, for every finitely additive Borel measure μ, there exists a unique pair of Borel measures μ_1, μ_2, with μ_1 countably additive and μ_2 purely finitely additive, such that μ = μ_1 + μ_2. If μ is nonnegative, then so are μ_1 and μ_2. As a consequence one has the following.

Lemma 32. If μ is a purely finitely additive positive measure vanishing on sets of Lebesgue measure zero, then μ(f) = 0 for every measurable function f that vanishes at infinity.

Proof. Since the finitely additive measures vanishing on sets of Lebesgue measure zero are the Banach dual of L^∞, the restriction of such a measure to the space C_0 of continuous functions vanishing at infinity yields a continuous functional on C_0, hence a countably additive measure. This restriction is dominated by μ, and so it must be zero if μ is purely finitely additive.

Proof. Let l be a norm-preserving Hahn-Banach extension of ℓ to B^p. By Theorem 29, we know that ⋯ for some finitely additive measure λ and g defined as in Proposition 25. Now the previous lemma states that the purely finitely additive part λ_2 in the Yosida-Hewitt decomposition λ = λ_1 + λ_2 satisfies the identity ⋯ for every f ∈ I_p, because lim_{t→∞} f^†(t, ·) = 0 by definition of the null space. Since ‖λ_1‖ <= 1, the measure λ_1 must be positive and of norm 1.

We now extend this result by proving that, on I_p, the condition that the functional attains its norm is not needed.

Theorem 34. For 1 < p < ∞, all continuous linear functionals on I_p (attaining their norm or not) can be represented as in (79).

Then, by Lemma 22(i), every element of B^q = (I_p)′ attains its norm as a functional on (I_p)″. Now the statement follows from Corollary 33.

Our next goal is to provide a similar integral representation for the dual of the Marcinkiewicz Banach quotient M^p/I_p, 1 < p < ∞. For this we need to recall and extend some previous results.

(ii) Regard the predual I_q of B^p as a subspace of ⋯; then ⋯.

(iv) For every coset in B^p/I_p there exists f ∈ ⋯ that belongs to I_p and so is in the kernel of ℓ_2 ∈ I_p^⊥. By all this and (111) we have ⋯. It now follows from (114) and (116) that ⋯. This proves part (iii). Part (iv) is a bit more technical. Indeed, in the terminology of [22], the statement of part (ii) says that ⋯ for all f (with this realization, all functions belong to the spaces themselves, and we do not need to pay attention to equivalence classes mod I_p, in accordance with the remark at the end of the statement).

We only need to prove that the representing measure is supported at infinity. Recall from the proof of Theorem 29 that, in this integral representation, the function z ≡ g^♮ ∈ B^q is built as in Definition 23 in terms of the function g ∈ B^p where ℓ attains its norm, and it has the property that z g = |g|^p >= 0; of course g is not the zero function, except in the trivial case ℓ = 0.
Suppose that, for some a > 0, the interval [-a, a] has positive λ-measure, and consider the truncation g_a = g χ_[-a,a]. Take a large enough so that g_a is not identically zero (this is possible since g is not identically zero; we are disregarding the trivial case ℓ = 0). Then g_a ∈ I_p, and the pairing of g_a with z over [-t, t] equals the mean of |g_a|^p = |g|^p χ_[-a,a], which is positive. Therefore ℓ(g_a) = ∫_1^∞ ⋯ dλ(t) > 0. This contradicts the assumption that ℓ vanishes on I_p.

These theorems on the integral representation of almost all continuous functionals shed light upon the examples of functionals on B^p and M^p supported at infinity, which we built in Section 5.1. Those examples are all functionals of the type ℓ(f) = ∫_1^∞ f^†(t, g) dλ(t), where the representing measure is supported at infinity. In other words, here λ is a finitely additive positive finite measure given by the restriction of a countably additive positive finite Borel measure supported in β(X) \ X, where X is the product space introduced in Definition 26 and β(X) is its Stone-Čech compactification. As observed at the beginning of Section 5.1, all continuous functionals on M^p vanish on the null space of the seminorm; hence they must depend only on asymptotic values, and so, if they have an integral representation of the type ℓ(f) = ∫_1^∞ f^†(t, g) dλ(t), the measure must be supported at infinity. Instead, functionals on B^p can be represented by measures that have a part at finite distance (necessarily countably additive, by the Yosida-Hewitt representation theorem and Remark 38). In the next section we extend this analysis to Stepanoff spaces.

Correlation Functionals. By the representation theorems proved in this section we know that M^q is not the dual space of M^p. However, the following construction of functionals, called correlation functionals in [11], shows that at least it is possible to embed M^q as a quotient space of the dual of M^p.

Let Y be a separable linear subspace of M^p (one such subspace is I_p, by Remark 17). Let g ∈ M^q and consider a sequence {f_n} dense in Y. By Hölder's inequality in L^p([-t, t], dx/2t), one has lim sup ⋯. Therefore there is an increasing unbounded sequence {t_{1,k}} such that the limit ⋯ exists. Clearly this functional vanishes on the null space I_p of the seminorm; hence it defines a continuous functional on Y/I_p ⊂ M^p/I_p. By Hahn-Banach extension, we produce in this way a continuous functional on M^p/I_p that depends only on limits of means, even when these limits are not defined directly by integration.

Summary and Open Problems. We have proved representation theorems for continuous functionals on B^p and M^p/I_p (1 < p < ∞) as integrals with respect to measures ν over β(X). For the norm-attaining functionals, the representing measure splits as the product of a finitely additive positive measure on [1, ∞) times a delta measure on the unit sphere of B^q, and so the representation becomes more specific: an average over t ∈ [1, ∞) of g-weighted means over intervals of length 2t. In particular, the convex hull of these functionals contains those whose measure ν, regarded as a finitely additive measure on X, splits as a product.

Is there a characterization of those functionals arising from measures that are not countably additive and are supported at infinity, as for instance the Banach limits? We have seen that the same problem is not interesting in the case p = ∞, since the representation of continuous functionals on M^∞ = L^∞ as finitely additive measures is already known. But can we prove interesting representation theorems for continuous functionals on M^1?

Duality for Stepanoff Spaces

Theorem 41.
S^p is the dual space of the Banach space T^q.

A continuous functional on S^p is represented by a function g, in the sense that ⋯, if and only if g ∈ T^q. In this case, the following quasi-isometry holds up to a factor 2: ⋯. In other words, the subspace of the dual of S^p of functionals that can be represented by a function is the bidual of T^q.

Proof. Let ℓ be a continuous functional on T^q. On all functions f ∈ T^q with support in [n, n + 1], ℓ is represented by a function h_n in L^p with support in [n, n + 1]: ℓ(f) = ∫ f h_n. Therefore, for all functions f ∈ T^q with support in [-m, m], ℓ(f) = ∫ f g, where g = Σ_{n=-m}^{m-1} h_n. Now observe that, by the way the norm in T^q is defined, compactly supported functions are dense in T^q. Therefore every continuous functional on T^q is represented by a locally L^p function g. Again by the way the norm is defined, it is clear that the norm of this functional is given by ‖ℓ‖ = sup_n ‖h_n‖_p; in other words, by Lemma 8, the norm of ℓ is quasi-isometric with the norm of g in S^p. Thus the dual of T^q is S^p.

On the other hand, we have already observed that if a functional on S^p is represented by a function, then this function must belong to T^q, and the correspondence is quasi-isometric.

Since S^p embeds continuously in B^p, we know by Proposition 15 that the predual of B^p embeds continuously in T^q. The following is an independent simple proof of this embedding into T^q.

Lemma 42. One has ‖f‖_{T^q} <= ‖f‖_⋯. Conversely, there is no C > 0 such that ‖f‖_⋯ <= C ‖f‖_{T^q}.

Proof. Let f ∈ ⋯ and, as before, f_n = f χ_[n, n+1]. Let ⋯ be the characteristic functions of the dyadic intervals introduced in the definition of ⋯. Hölder's inequality for ℓ^1 yields ⋯. Therefore ⋯. In these inequalities we have made use of Hölder's inequality, which is not an equality if f ∈ T^q, because then the sequence {‖f_n‖} cannot be constant. This yields the fact that the converse inequality does not hold.

S^p as a Bidual. In analogy with the null space I_p of the Marcinkiewicz seminorm, we now introduce a similar subspace in S^p. The next lemma is proved by the same argument as Theorem 41. As a consequence, S^p is the second dual of J^p.

Lemma 46. J^p ⊂ I_p, and the inclusion is proper.

Proof. The inclusion means that lim ⋯. For the sake of simplicity, we first show that this is true for p = 1. Indeed, ⋯ as n ranges from -m to m - 1; then, as the sequence goes to 0, so do its averages. In general, for ⋯. If we extract the p-th root of both sides, this equality becomes an inequality: ⋯ (because the ℓ^p norm is dominated by the ℓ^1 norm). Therefore the previous argument still applies.

It is easy to show that the inclusion is proper: a function that belongs to ⋯.

6.3. The Dual of S^p. We now turn our attention to the dual of S^p. As we did with M^p, we first exhibit some examples of linear functionals that depend only on asymptotic values, and then we prove a representation theorem. For this goal, we introduce some interesting subspaces of S^p, as follows.

Definition 47. (i) The first subspace consists of all functions in S^p that have limits at ±∞; (ii) the second consists of all functions in S^p such that the sequence of integrals over [n, n + 1] has limits at ±∞.

The same argument yields the following.

Corollary 50. The spaces of functions vanishing at infinity are not closed in M^p, and the corresponding limit functionals are not continuous in the seminorm of M^p.
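For reference, the norm of the predual T^q and the duality of Theorem 41 can be displayed as follows. The norm is transcribed from the definition of T^q recalled in Section 6.1 below; the pairing by integration is the one used in the proof of Theorem 41, so normalizations should be checked against the original source:

```latex
% T^q is the l^1-sum of the spaces L^q[n, n+1], with 1/p + 1/q = 1:
\[
\|g\|_{T^q} = \inf\Bigl\{\sum_{n\in\mathbb{Z}}|c_n| \;:\;
  g=\sum_{n} c_n g_n,\ \{c_n\}\in\ell^1,\
  g_n\in L^q[n,n+1],\ \|g_n\|_q=1\Bigr\},
\]
% and Theorem 41 identifies its dual with the Stepanoff space via integration:
\[
(T^q)' = S^p, \qquad \langle f,g\rangle=\int_{\mathbb{R}} f(x)\,g(x)\,dx .
\]
```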
The previous lemma shows that the subspaces of functions with limits at ±∞ do not yield natural linear functionals that extend continuously to S^p. On the other hand, the subspaces of Definition 47(ii) allow us to construct continuous functionals on S^p which do not depend on values over finite intervals, that is, which do not belong to the ℓ^1-sum of the spaces L^q[n, n + 1]. This can be done as follows. Given a subspace W of a Banach space V and a continuous functional on W (continuous in the V-norm), denote by F its (many) Hahn-Banach extensions to V. For instance, the Hahn-Banach extensions to ℓ^∞ of the continuous functional ℓ({a_n}) = lim a_n, defined on the subspace of convergent sequences, are usually called Banach limits. In the same way we can now define on S^p some Banach limits induced by the subspaces of Definition 47(ii). The following result is now clear.

Proof. The only thing left to prove is the continuity of these limit functionals in the S^p-norm, which is obvious since ⋯. It is obvious that their values do not depend on the values of f on compact sets; hence they cannot be expressed as integrals of the type ∫_{-∞}^{∞} f(x) g(x) dx.

Remark 52. Since S^p embeds continuously in B^p and B^p embeds continuously in M^p, the limit functionals on M^p described in (70) are also continuous functionals on S^p and B^p.

A Summary of Duality and Inclusions. Let us summarize the inclusions between these families of spaces and their duals. We have shown that ⋯. These embeddings are proper. By the usual embedding of topological vector spaces into their biduals: ⋯. For the same reason, since (T^q)′ = S^p, ⋯. The last embedding yields the part of the dual of S^p consisting of functionals represented by functions. The embedding (B^p)′ → (S^p)′ encompasses the previous construction of Banach-limit functionals depending only on asymptotic values. It is intriguing to exhibit explicit examples of continuous functionals on S^p that are not continuous on B^p. For instance, an interesting subspace of (B^p)′ is the bidual of its predual I_q; not all these functionals are represented by functions (as functionals on B^p), because most functions in I_q are not small at infinity and do not belong to T^q. For a similar reason, they are not represented by functions when they act on S^p. So here we have other exotic functionals on S^p; we leave it to the reader to verify that they are different from the Banach limits considered before.

Integral Representation of Continuous Functionals on S^p. We now extend to the Stepanoff spaces the integral representation theorem for continuous functionals attaining their norms that we proved in Theorem 29 for B^p and in Theorem 40 for M^p. The proof is similar; we resume it, skipping many details.

Definition 53. For f ∈ S^p(R) and t ∈ R, put ⋯.

The next statement follows immediately from Lemma 24.

Proposition 55. Let p, q be conjugate indices, with 1 < p, q < ∞. Let λ be a countably additive finite Borel measure on [1, ∞] and g ∈ S^q(R). If ℓ is the functional on S^p given by ⋯, then ⋯.

Definition 56. Denote by Σ(S^q) the unit sphere, that is, the subset of S^q of all functions of norm 1, by Y the cartesian product [1, ∞] × Σ(S^q), and by β(Y) its Stone-Čech compactification.

Lemma 57. For f ∈ S^p, let us define a function f^‡ on Y by ⋯. Then ‡ is an isometric isomorphism from S^p to C ∩ L^∞(Y), and therefore also from S^p to C(β(Y)).

Proof. By Lemma 24 the function ⋯ for every f ∈ S^p.

Proof. We follow the guidelines of the proof of the same result for B^p in Theorem 29. Again, we can assume ‖ℓ‖ = 1, and, by Lemma 57, we know that, for some ν with ‖ν‖ = 1 and for all f ∈ S^p, ⋯. Let g ∈ S^p be a function on which ℓ attains its norm: ℓ(g) = 1. We have seen in (92) that ‖g^♮‖ = 1.
Denote by Λ the subset of β(Y) where |f^†(t, g)| attains its maximum value 1. Consider the family Φ of all nets (t_α, g_α) in Y that converge to points of Λ. As (155) ⋯. The rest of the proof is as in Theorem 29.

Extreme Points in the Unit Balls

Compact convex sets K in many Banach spaces (and, more generally, in topological vector spaces) have plenty of extreme points. Indeed, the celebrated Krein-Milman theorem states that if the dual space separates points, then K is the closed convex hull of its extreme points. In particular, this is what happens for the unit ball of L^p spaces when p > 1 (including the case p = ∞, which is compact in the weak* topology by another well-known fact, the Banach-Alaoglu theorem). Therefore the Krein-Milman theorem applies to the unit ball of a normed (or seminormed) space if the linear functionals that are weak* continuous separate points. On the other hand, the Hahn-Banach theorem shows that the dual of a locally convex space separates points. So, if V is a normed space that is the dual of another normed space W, then V′ separates points on V; hence W, regarded as a subspace of V′, separates points of V, and of course the functionals in this subspace are weak* continuous. Therefore the unit ball of a Banach space that is the dual of a normed space is the closed convex hull of its extreme points. This property generally fails if the space is not a dual space. For instance, if ν is a finite measure on a measure space X which has no atoms, that is, such that every set E with ν(E) > 0 splits as the disjoint union E = E_1 ∪ E_2 with 0 < ν(E_i) < ν(E), then the unit ball of L^1(X, ν) has no extreme points, because every f of L^1-norm 1 is a proper convex combination of its (renormalized) truncations to two disjoint subsets of positive finite measure. Instead, the characteristic function of an atom is clearly an extreme point.

In this section we study the extreme points of the unit balls of the other spaces considered in this paper. We follow again [7] for the spaces B^p and M^p. Then we handle the easier case of S^p, never considered before.

Remark 59. To simplify the presentation, we shall check extremality in the following form. A vector x in the unit ball B of a normed space V is an extreme point of B if and only if there does not exist y ≠ 0 in V such that x ± y ∈ B. Indeed, if such a y exists, then x is the mid-point of the chord connecting x + y and x - y; hence it is not extreme in B. Conversely, if x is not extreme in B, then it is an interior point of some chord in B, hence it is the mid-point of some other chord.

As a consequence, the unit ball of a seminormed but not normed space has no extreme points, since every x of seminorm less than or equal to 1 is the average of x ± y, and ‖x ± y‖ <= ‖x‖ + ‖y‖ = ‖x‖ <= 1 whenever ‖y‖ = 0. This makes extremality a trivial issue on M^p.

⋯ for some δ > 0 and some unbounded sequence t_n, then f is an extreme point of the unit ball of B^p.

Therefore f is not an extreme point, once again by Remark 59. We are now ready to characterize the extreme points of the unit ball of M^p/I_p. Part (i) of the next theorem is a slightly more detailed proof of [7, Theorem 3.11]; part (ii) extends results in [7, Theorems 3.8 and 3.10], where a clever argument is aimed to show that extremality in the unit sphere of M^p/I_p is critically related to the rate of speed of those subsequences of means of |f|^p that converge to their maximum limit 1. Our proof of part (ii) is a considerable revision of the argument in [7].

Theorem 65. (i) The unit ball of M^1/I_1 does not contain any extreme point.
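In symbols, the criterion of Remark 59, which is used repeatedly below, reads as follows (a direct transcription of the remark in LaTeX):

```latex
% Extreme-point criterion of Remark 59, for the unit ball B of a normed space V:
\[
x \in B \ \text{is extreme}
\iff
\nexists\, y \neq 0 \ \text{in } V \ \text{such that}\
x + y \in B \ \text{and}\ x - y \in B .
\]
% Consequence: in a seminormed, non-normed space no point of the unit ball is
% extreme, since any y with \|y\| = 0 gives \|x \pm y\| \le \|x\| \le 1.
```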
Proof of Theorem 65. We can as well restrict attention to the unit sphere, that is, to functions of norm 1. Letting $t \to +\infty$ we see that $\|f + g\| \leqslant 1$. By Remark 63, the same is true for the norm in $\mathcal{M}^1$; thus $f$ is not an extreme point of the unit ball of $\mathcal{M}^1$. This proves (i).

Theorem 5. The Marcinkiewicz spaces $\mathcal{M}^p$ are complete.

Proof. Let $\{f_n\}$ be a Cauchy sequence in the Marcinkiewicz semi-norm. Choose a subsequence $\{f_{n_k}\}$ such that $\|f_{n_j} - f_{n_k}\|_{\mathcal{M}^p} \leqslant 2^{-k}$ for every $j > k$.

Spaces with Upper Bounded $p$-Means. A related family of spaces are the bounded $p$-mean spaces $B^p$, introduced again in [3] and studied in [7, 16]. Their norm is defined as the supremum over $t$ and $r$ of the local $p$-means; at a Lebesgue point $t$ the local mean is close to $|f(t)|$ if $r$ is small enough. Therefore $\||f\|| \geqslant |f(t)|$ for almost every Lebesgue point $t$: since almost every point is a Lebesgue point, $\||f\|| \geqslant \|f\|_\infty$.

$\|f - g\| = \inf\{\|f - h\| : h \in \mathcal{I}^p\} = \|f + \mathcal{I}^p\|_{B^p/\mathcal{I}^p}$. (118) This proves that any coset in the quotient $B^p/\mathcal{I}^p$ has a representative $f$ such that $\|f\|_{B^p/\mathcal{I}^p} = \|f\|_{B^p}$, hence (iv).

(Observe that, although the integrand involves functions instead of $\mathcal{I}^p$-cosets, the statement is well posed because $\ell$ vanishes on $\mathcal{I}^p$ by Corollary 36 and, for large $r$, the integrand does not depend on the choice of coset representatives by Corollary 39.)

Proof. By the previous Corollary, $\ell$ is identified with a continuous functional on $B^p$ vanishing on $\mathcal{I}^p$ and attaining its norm. Then Theorem 29 yields the following integral representation: $|\langle f, g_1 \rangle_{\{t_{1,n}\}}| \leqslant \|g_1\|_{\mathcal{M}^q} \|f\|_{\mathcal{M}^p}$. Let us now extract from $\{t_{1,n}\}$ a subsequence $\{t_{2,n}\}$ such that the limit $\langle f, g_2 \rangle_{\{t_{2,n}\}} := \lim_n$ exists. Here again, one has $|\langle f, g_2 \rangle_{\{t_{2,n}\}}| \leqslant \|g_2\|_{\mathcal{M}^q} \|f\|_{\mathcal{M}^p}$. We iterate this process to build a family of nested subsequences $\{t_{k,n}\}$ that define limits $\langle f, g_k \rangle_{\{t_{k,n}\}}$ satisfying the Hölder inequality above for the Marcinkiewicz seminorms. Then the diagonal sequence $\{s_n := t_{n,n}\}$ gives rise to a limit $\langle f, g \rangle := \lim_n$ that exists for every $g$ and satisfies $|\langle f, g \rangle| \leqslant \|g\|_{\mathcal{M}^q} \|f\|_{\mathcal{M}^p}$. Therefore $\langle f, \cdot \rangle$ is a continuous functional on the subspace of $\mathcal{M}^q$ generated by the sequence $\{g_k\}$, hence on its closure by density.

Observe that $L^p(\mathbb{R})$ embeds continuously in $\mathcal{S}^p$. Indeed, if $f_n = f \chi_{[n, n+1]}$, then $\|f\|_{\mathcal{S}^p} = \sup_n \|f_n\|_{L^p} \leqslant \|f\|_{L^p}$. Set $\mathcal{T}^q \equiv L^q \otimes \ell^1[n, n+1]$, that is, the space of all $g = \sum_n c_n g_n$, where $\{c_n\} \in \ell^1$ and $g_n \in L^q[n, n+1]$ with $1/p + 1/q = 1$ and $\|g_n\|_q = 1$. $\mathcal{T}^q$ is a Banach space with respect to the norm given by the infimum of $\sum_n |c_n|$ over all such representations. By the same argument, $\mathcal{T}^q$ embeds continuously in $L^q(\mathbb{R})$. Observe that both inequalities, hence both embeddings, are proper except for the cases $\mathcal{S}^\infty = L^\infty(\mathbb{R})$ and $\mathcal{T}^1 = L^1(\mathbb{R})$. It is clear that the dual Banach space of $\mathcal{S}^p$ contains $\mathcal{T}^q$. Indeed, the functions $g = \sum_n c_n g_n$, with $\{c_n\} \in \ell^1$ and $g_n \in L^q[n, n+1]$, define continuous functionals on $\mathcal{S}^p$ with norm $\|g\|_{(\mathcal{S}^p)'} = \sum_n |c_n|$, since, for every $f \in \mathcal{S}^p$, $|\int fg| \leqslant \sum_n |c_n| \, \|f\|_{L^p[n,n+1]} \leqslant \|f\|_{\mathcal{S}^p} \sum_n |c_n|$.

6.1. The Predual of $\mathcal{S}^p$. Before considering $(\mathcal{S}^p)'$, we need to examine the Banach space structure of $\mathcal{S}^p$; the basic estimates follow by Lemma 8. More generally, every $g \in L^q(\mathbb{R})$ with compact support defines a continuous functional on $\mathcal{S}^p$ by the rule $\ell(f) = \int_{-\infty}^{\infty} f(t) g(t)\,dt$. Indeed, if $\operatorname{supp} g \subset [-N, N]$, then $|\ell(f)| \leqslant \|f\|_{\mathcal{S}^p} \sum_{n=-N}^{N-1} \|g\|_{L^q[n,n+1]}$. (127) It is clear that the restriction to $[n, n+1]$ of every function $g$ for which the functional is defined on $\mathcal{S}^p$ must belong to $L^q[n, n+1]$. In general, however, functions whose support is not compact do not yield continuous functionals on $\mathcal{S}^p$, unless they belong to $\mathcal{T}^q$ (we have already observed that $\mathcal{T}^q$ is properly contained in $L^q(\mathbb{R})$, except for $q = 1$). For instance, one can choose a suitable non-negative real sequence $\{c_n\}$. Here $\mathcal{S}^p_\pm$ is the subspace of $\mathcal{S}^p$ of all functions such that the sequence $\|f\|_{L^p[n,n+1]}$ has limits at $\pm\infty$.
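The displayed formulas attached to this construction did not survive extraction; the following sketch, written in the notation of the surrounding text, records one consistent reading of the Banach-limit construction, assuming the linear variant in which $\lambda_\pm$ is the limit of the unit-interval integrals (the variant via the local norms $\|f\|_{L^p[n,n+1]}$ used just above is handled analogously, through Hahn–Banach domination by the sublinear functional $f \mapsto \limsup_n \|f\|_{L^p[n,n+1]}$):

\[
\lambda_{+}(f) := \lim_{n \to +\infty} \int_{n}^{n+1} f(t)\,dt \quad \text{on the subspace where this limit exists,}
\qquad
|\lambda_{+}(f)| \leqslant \sup_{n \in \mathbb{Z}} \|f\|_{L^p[n,\,n+1]} = \|f\|_{\mathcal{S}^p},
\]

the bound being Hölder's inequality on each $[n, n+1]$. Hence any Hahn–Banach extension $\Lambda_{+}$ of $\lambda_{+}$ to all of $\mathcal{S}^p$ satisfies $\|\Lambda_{+}\| = \|\lambda_{+}\| = 1$ (since $\lambda_{+}(\chi_{[0,\infty)}) = 1 = \|\chi_{[0,\infty)}\|_{\mathcal{S}^p}$) and vanishes on every compactly supported function; testing against compactly supported functions then shows that $\Lambda_{+}$ admits no representation of the form $\Lambda_{+}(f) = \int_{-\infty}^{\infty} f(t) g(t)\,dt$.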
Remark 48. It is clear that the space of functions having limits at infinity is contained in each of the spaces just introduced: the one defined by limits of the unit-interval integrals and the one defined by limits of the unit-interval norms. The other inclusions are false. The function $\sum_{n>0} c_n \chi_{[n,\,n+1/n]}$, for a suitable choice of the coefficients $c_n$, belongs to one of the two larger spaces but not to the other. The function $\sum_{n=-\infty}^{\infty} (-1)^n \chi_{[n,\,n+1]}$ has unit-interval norms constantly equal to 1 but unit-interval integrals alternating in sign, so it belongs to one space but not to the other. The function with value $1$ on each $[n,\,n+1/2]$ and $-1$ on each $[n+1/2,\,n+1]$ belongs to the larger spaces but not to the space of functions with limits at infinity. A variant of these spaces is obtained by integrating with respect to any sequence of finite Borel measures over $[n,\,n+1]$ instead of Lebesgue measure; the same inclusion properties hold for this variant.

The limit functionals, defined on these subspaces, are not continuous in the norm of $\mathcal{S}^p$.

Proof. Let $g_n = \chi_{[n,\,n+2^{-|n|}]}$, the characteristic function of $[n,\,n+2^{-|n|}]$, and let $f_k = \sum_{n=-k}^{k} g_n$. Each $f_k$ is in $\mathcal{S}^p$ and has compact support; hence it belongs to these subspaces. It is immediately verified that the sequence $\{f_k\}$ converges in $\mathcal{S}^p$ to $f = \sum_{n=-\infty}^{\infty} g_n$. This $f$ does not have limits at infinity; hence it does not belong to those subspaces. For the same reason, the functionals $\lambda_\pm$ and $\ell$ have norm 1; as in the proof of Theorem 29, it follows by Hölder's inequality (145) that if $\{(r_j, g_j)\} \in \Phi$, then the interval $[-r_j, r_j]$ must verify the condition stated there. For every $f \in \mathcal{S}^p$, $w \in M$, and for every net $\{(r_j, g_j)\}$ in $\Phi$ that converges to $w$, one has $f^\ddagger(w) = \lim_j f^\ddagger(r_j, g_j^\natural)$, and so $\|f^\ddagger\|_\infty \leqslant \sup_j$. The functional $\ell$ attains its norm at $f^\natural / \|f^\natural\|_{L^p[r_j,\,r_j+1]}$. (iv) Also $\lim_j |f^\ddagger(r_j, g_j)| = 1 = \lim_j |f^\ddagger(r_j, f^\natural)| = \lim_j \|f\|_{L^p[r_j,\,r_j+1]}$. (v) Moreover $\limsup_j f^\ddagger(r_j, g_j - f^\natural) = 0$. This follows as in the proof of Claim 3 in Theorem 29, by using now the uniform convexity of the spaces $L^p[r_j,\,r_j+1]$. By applying again Hölder's inequality (145) to the identity that we have just proved in point (v) above, we finally obtain $\lim_j f^\ddagger(r_j, g_j) = \lim_j f^\ddagger(r_j, f^\natural)$ for every $f \in \mathcal{S}^p$. Hence, for every $w \in M$ and every net converging to $w$, $f^\ddagger(w) = \lim_j f^\ddagger(r_j, f^\natural)$.

The quantity considered in [7, Theorem 3.8] is well defined, but the local mean $m_f(t)$ of a representative is not, because it depends on the coset representative. Instead, we need to project it to the quotient. In the next proofs, we shall consider representatives in the equivalence classes modulo $\mathcal{I}^p$ and make use of the following simple remark.

Remark 63. For every $1 \leqslant p < \infty$, the $\mathcal{M}^p$ (semi-)norm of a function is equal to the $\mathcal{M}^p$ norm of its equivalence class modulo $\mathcal{I}^p$. Indeed, for $h \in \mathcal{I}^p$, one has $\|f + h\|_{\mathcal{M}^p} \leqslant \|f\|_{\mathcal{M}^p} + \|h\|_{\mathcal{M}^p} = \|f\|_{\mathcal{M}^p}$, and $\|f + h\|_{\mathcal{M}^p} \geqslant \|f\|_{\mathcal{M}^p} - \|h\|_{\mathcal{M}^p} = \|f\|_{\mathcal{M}^p}$.

We need a preliminary lemma that clarifies some comments in our reference ([7], Remark at page 161). Let $\|f\|_{\mathcal{M}^p} = 1$ and, for $0 < \epsilon < 1$, write $U = \{t > 0 : m_f(t) < 1 - \epsilon\}$, where $m_f(t)$ denotes the local $p$-mean of $f$. There exist two unbounded sequences of positive numbers $\{a_k\}$, $\{b_k\}$, with $b_k < a_{k+1}$ for every $k$, such that the connected components of $U$ are the open intervals $(a_k, b_k)$. By passing to suitable increasing subsequences, which we still denote by $\{a_k\}$ and $\{b_k\}$, we have $a_k < b_k < a_{k+1}$ for every $k$, and the disjoint union $\bigcup_{k=1}^{\infty} [b_k, a_{k+1}]$ is contained in the complement of $U$.

Proof. Since $t \mapsto m_f(t)$ is continuous, $U$ is open, hence a countable union of open intervals. Because $\|f\|_{\mathcal{M}^p} = 1$, $U$ is not all of the real line. As the union of overlapping open intervals is again an open interval, $U$ is a disjoint countable union of intervals that we write as $U = \bigcup_k (a_k, b_k)$ for suitable $a_k < b_k$. Then the complement is given by $U^c = \bigcup_k [b_k, a_{k+1}]$ (here $b_k < a_{k+1}$). Again because $\|f\|_{\mathcal{M}^p} = 1$, $U^c$ is not compact; therefore, by passing to a subsequence, we may assume that $U^c \supset \bigcup_k [b_k, a_{k+1}]$ with $\{b_k\}$, $\{a_{k+1}\}$ monotonically increasing and unbounded.
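As a worked instance of the nonatomic $L^1(\Omega, \nu)$ claim made in the extreme-points discussion above (using the standard intermediate-value property of finite nonatomic measures to produce a set carrying exactly half the mass):

\[
\|f\|_{L^1(\nu)} = 1, \qquad \text{choose } E \subset \Omega \ \text{with} \ \int_{E} |f|\,d\nu = \tfrac{1}{2},
\]
\[
f = \tfrac{1}{2}\,(2 f \chi_{E}) + \tfrac{1}{2}\,(2 f \chi_{\Omega \setminus E}),
\qquad \|2 f \chi_{E}\|_{1} = \|2 f \chi_{\Omega \setminus E}\|_{1} = 1,
\]

which exhibits $f$ as the mid-point of a nondegenerate chord of the unit ball; in the form of Remark 59, take $v = 2 f \chi_{E} - f \neq 0$.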
2018-12-09T14:25:54.463Z
2014-02-19T00:00:00.000
{ "year": 2014, "sha1": "dc91d1a2b7ebeee31810942c18d2569cf46c2782", "oa_license": "CCBY", "oa_url": "https://projecteuclid.org/journals/abstract-and-applied-analysis/volume-2014/issue-SI25/Function-Spaces-with-Bounded-L-p-Means-and-Their-Continuous/10.1155/2014/609525.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "dc91d1a2b7ebeee31810942c18d2569cf46c2782", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
201676055
pes2o/s2orc
v3-fos-license
Inequities in consistent condom use among sexually experienced undergraduates in mainland China: implications for planning interventions Background Since pre-exposure prophylaxis (PrEP) is mainly prescribed to high-risk uninfected individuals, consistent condom use (CCU) continues to be recommended as an inexpensive, feasible, practical and acceptable way to prevent the general population from acquiring and transmitting HIV through sexual intercourse. The objective of this cross-sectional study was to compare the relative importance of various determinants of CCU among sexually experienced undergraduates in mainland China so as to assess and subsequently to suggest ways to eliminate inequities in its use. Method From September 10, 2018, to January 9, 2019, an anonymous self-administered online questionnaire was voluntarily completed by 12,750 participants distributed across 30 provinces in mainland China (except for Tibet). The present analysis was restricted to 2054 sexually experienced undergraduates. Pearson's chi-square test and logistic regression models were chosen to analyze the factors associated with CCU. Results The overall rate of CCU was 61.3% [95% confidence interval (CI) = 59.2–63.4%]. CCU was inequitably distributed, since enabling factors exerted greater effects than predisposing and need variables. Compared with heterosexual men, heterosexual women [adjusted odds ratio (AOR) = 0.78, 95% confidence interval (CI): 0.64–0.96], non-heterosexual men (AOR = 0.64, 95% CI: 0.45–0.92) and non-heterosexual women (AOR = 0.68, 95% CI: 0.47–0.99) were less prone to using condoms consistently. Those with more resources [i.e., higher levels of self-efficacy for condom use (AOR = 2.86, 95% CI: 2.35–3.49) and being knowledgeable of the national AIDS policy (AOR = 1.50, 95% CI: 1.23–1.82)], and those with lower need for condoms [i.e., late initiation of sexual activity (AOR = 1.34, 95% CI: 1.09–1.64) and a single sexual partner (AOR = 1.68, 95% CI: 1.21–2.33)] were more likely to be consistent condom users. Conclusions In order to increase consistency of condom use and simultaneously reduce the remaining inequities, a comprehensive intervention measure should be taken to target heterosexual women, non-heterosexual men and women, and those with higher need for condoms, improve their condom use self-efficacy and raise their awareness of the national AIDS policy. Background A tolerant attitude toward premarital sex and risky sexual behaviors are increasing the number of annually reported new cases of HIV infection among Chinese young students [1,2]. According to the most recent figures available from the national HIV surveillance system, 527 young students aged 15-24 years were reported to be infected with HIV in 2008, and the number reached 3236 in 2015. During the period between 2011 and 2015, the annual growth rate of new infections stood at a stunning 35%, with 65% of infections occurring in college students between 18 and 22 years of age. Although the number of HIV diagnoses declined in 2016 and 2017, more than 3000 new cases were still identified each year. The predominant mode of HIV transmission among Chinese young students is through sexual intercourse, especially through male-to-male sexual contact, accounting for up to 81.6% of new HIV infections in 2014 [3]. Therefore, undergraduate college students, who are typically between the ages of 18 and 22 years in China, are vulnerable to contracting HIV as soon as they initiate sexual activity [2]. 
Consistent condom use (CCU) continues to be recommended as an inexpensive, feasible, practical and acceptable way to prevent the general population (e.g., college students) from acquiring and transmitting HIV through sexual intercourse, because pre-exposure prophylaxis (PrEP) is mainly prescribed to high-risk uninfected individuals such as commercial sex workers, men who have sex with men (MSM), and drug abusers [4], and also because PrEP alone does not protect against other sexually transmitted diseases (STDs) and unintended pregnancies [5]. In China, previous studies about condom use focused predominantly on most-at-risk populations such as female sex workers and their clients, STD clinic attendees, MSM, drug abusers, people living with HIV (PLHIV), and rural-to-urban migrants. In recent years, due to the vulnerability of unmarried sexually active college students to unintended pregnancy, HIV infection, and other STDs [2], a growing body of research has examined the frequency of condom use and its associated factors to develop appropriate interventions for this vulnerable population. It is noted that the number of studies examining condom use among college students in China is limited to six [6][7][8][9][10][11]; the five quantitative surveys [6][7][8][9][10] suggest condom use is relatively low, ranging between 28% [9] and 68% [6], while the remaining study was a qualitative survey [11]. Furthermore, these studies also indicated that condom use was associated with gender [7,9], education backgrounds of parents [9], attitudes towards condom use [7,8,10,11], condom use self-efficacy [7,8,11], partner numbers [9,10], condom use at sexual debut [9], condoms being unavailable when having sex [10], and lack of sexual education from school and families [7,11]. Currently, the Health Department of the Chinese Government is continuing its free condom initiative, with the goal of increasing the frequency of condom use and ensuring equity of access to condoms. Thus, it remains an open question whether consistent condom use, as one type of preventive service and behavior, is equitably distributed. As described above, previous research has identified a set of variables associated with consistent condom use among college students in China. However, most of these associations have been explored with the chi-square test and/or through non-hierarchical logistic regression models, and few methodologically rigorous studies have systematically assessed inequities in consistent condom use under the guidance of an explicit theoretical framework. Conceptual framework In order to assess the magnitude of inequities and their determinants in consistent condom use and how these inequities might be narrowed or even eliminated, Andersen's behavioral model (ABM), which assumes a sequence of predisposing, enabling and need variables contributing to an individual's health service use and health behavior adoption [12], was chosen as the conceptual framework of this study. The theoretical model, initially developed in the late 1960s and revised several times since, has been successfully used to guide numerous studies on health service use and health behavior adoption, owing to its diversity in the conceptualization and measurement of its components and also because of its capability to measure the presence of inequities and suggest ways to achieve equity of access to health services. 
Factors predisposing individuals to use condoms include social-demographic characteristics {e.g., age [13], gender [7,9,14], sexual orientation [14,15], race [13][14][15], marital status [13], education level [13,15], education backgrounds of parents [9]}, and attitudinal-belief variables such as HIV-related knowledge [8,13] and attitudes towards condom use [7,8,10,11,14,17]. Factors inhibiting and promoting use of condoms (i.e., enabling factors) can be measured with multiple indicators, such as individuals' own personal resources {e.g., income [15] and self-efficacy of condom use [7,8,11,14,16,17]} and the availability of health services in their community of residence, such as condoms being available when having sex [10] and exposure to a community-level HIV prevention intervention [13,16]. Need factors, representing the most immediate cause of condom use, comprise the objective and professional evaluation of need {i.e., risky sexual behaviors [27] such as having more sexual partners [2,9,10,14], having casual sex partners [2], and early initiation of sexual activity [2,21]} and the subjective assessment of need, such as self-perceived risk of HIV infection [16], which is linked to an individual's sexual history, such as more frequent multiple partnerships [18]. Condom use is equitably distributed if it is primarily determined by need factors, while inequity occurs when use depends largely on enabling variables. "Mutability" reflects the extent to which a given factor can be changed. Generally, social-demographic variables and need factors are not easily changed, while enabling variables and attitudinal-belief variables are relatively mutable [12]. Therefore, mainly informed by Andersen's behavioral model and drawing on previous empirical studies on condom use, a conceptual framework was first developed for the present study, illustrated by Fig. 1. Hierarchical logistic regression models were then chosen to estimate the respective contributions of predisposing, enabling and need variables to consistent condom use, so as to assess and subsequently to suggest ways to achieve equity in its use among sexually experienced undergraduates in mainland China. Hypotheses As a typical type of discretionary behavior, consistent condom use is hypothesized to be mainly influenced by enabling factors, and inconsistent condom users are hypothesized to be mainly heterosexual women and non-heterosexuals, and those with "fewer enabling resources" and "more needs". Participants and settings The data used in this paper were taken from a cross-sectional survey on knowledge, attitudes and risk behaviors towards HIV/AIDS among college students, carried out from September 10, 2018 to January 9, 2019. The survey involved a total of 12,750 participants distributed across 30 provinces in mainland China (except for Tibet). All statistical analyses in this study were confined to 2054 sexually experienced undergraduates, who had to meet the following four inclusion criteria: (a) age 18-25 years (here it should be pointed out that the age range in this study was widened to 18 to 25 years, given that some students might transfer from junior colleges or enroll later than the compulsory school attendance age); (b) full-time undergraduates currently registered at one university in mainland China; (c) had ever experienced or were currently engaged in any form of sex; and (d) answered the questionnaire no later than January 9, 2019. 
The protocol was approved by the academic ethics and moral supervision committee of Hubei University of Science and Technology (HUST). Convenience sampling and snowball sampling were used to select the participants. More specifically, on the basis of convenience, diversification and comprehensiveness of majors, college students from HUST were first invited to complete the online questionnaire distributed via the website "www.sojump.com" for credits and even for the honor of being named "outstanding volunteers", and they were also encouraged to recruit future participants from among their acquaintances. In order to get a more geographically diverse sample, WeChat, Sina Weibo, and QQ space were also chosen as platforms to distribute online questionnaires. After signing electronic informed consent voluntarily, participants completed the questionnaire anonymously and were also promised that all the information they provided would be treated confidentially. Design and content of the questionnaire The self-administered questionnaire (see Additional file 1) was developed by the Department of Preventive Medicine of HUST, pilot tested with a convenience sample of 50 respondents drawn from the selected population, and verified as a valid and reliable measurement tool by our previous large-scale study [37]. This study aimed to identify factors predicting consistent condom use {defined as using condoms during every act of sexual intercourse [14,15]}, measured by first asking undergraduates whether they had ever experienced or were currently engaged in any form of sexual intercourse. Those who admitted to having sexual intercourse were then asked to report the frequency of condom use during sexual intercourse in the past 6 months: never, once in a while, sometimes, every time. Students who reported using condoms every time they had sex were categorized as "consistent condom use" and coded as "1". The remaining respondents were classified as "inconsistent condom use" and coded as "0" (Table 1). All the independent variables influencing consistent condom use in this study were classified into predisposing, enabling and need factors based on Andersen's behavioral model and were also treated as dummy variables, because their effects on consistent condom use were found to vary in a non-linear manner (Table 1). Predisposing factors Nine predisposing factors in this study included gender, grade, residential area, major, father's highest educational level, mother's highest educational level, parents' marital status, sexual orientation, and beliefs about condom use. Parents' marital status was treated as a dummy variable (0 = stable, 1 = unstable, including divorced, separated, remarried, and widowed). Beliefs about condom use were measured by asking the question "Can correct and consistent use of condoms reduce the risk of HIV transmission?", and three possible responses were provided: Yes, No, or I do not know. In the analysis, "I do not know" and "No" responses were combined and scored as 0 (incorrect), while "Yes" responses were scored as 1 (correct). Enabling factors Four enabling factors in this study included monthly living expenditures, condom use self-efficacy, knowledge of local volunteer organizations and awareness of the national AIDS policy. 
Condom use self-efficacy [14] was measured using an 8-item scale that assessed participants' ability to negotiate with any sexual partner about the use of condoms in a variety of circumstances (e.g., "Can you use a condom even if your sexual partner does not want to?"). For all items, "unsure" and "no" were combined and scored as "0", while "yes" was scored as "1". Scores for the 8 questions were summed to form a self-efficacy index ranging from 0 to 8, with higher scores indicating higher levels of self-efficacy (Cronbach's alpha = 0.88). In the final analysis, the sum score of this scale was dichotomized at the maximum value of 8 into high (1) or low (0) self-efficacy for condom use. The two variables, knowledge of local volunteer organizations and awareness of the national AIDS policy, were used to measure exposure to HIV preventive interventions [25,26]. Since 2003, China has implemented the "Four Frees and One Care" policy [19], including free anti-retroviral drugs for those who cannot afford to pay, free HIV testing, free prevention of mother-to-child transmission, free schooling for children orphaned by AIDS, and care and economic assistance to the households of people living with HIV. Awareness of the national AIDS policy was measured by asking "Do you know the Four Frees and One Care policy?", with three possible responses (Yes/No/Not sure). In this analysis, "Not sure" and "No" were combined and scored as 0 (unaware), while "Yes" was scored as 1 (aware). In order to control the HIV epidemic on campus, a growing number of universities have recently established student organizations to distribute free condoms and offer free voluntary HIV counselling and testing (VCT) services. In this study, subjects were classified into two groups (without or with knowledge of local volunteer organizations) according to their responses to the Yes/No question "Do you know the student organization in your university that provides free VCT services?" Need factors Three HIV high-risk behaviors, including early sexual initiation, multiple sexual partners and casual sex, were used to reflect the objective and professional evaluation of need for condoms. Multiple and casual sexual partners have been recognized as risk factors for HIV infection among sexually active male college students in China [2]. Age at first sexual debut is also included in this study because early initiation of sexual activity has been escalating among young students [1,2] and also because of its well-established relationship with lifetime multiple sexual partners [2,20,29]. The question used to derive this variable was, "How old were you when you had sexual intercourse for the first time?" In the final analysis, the variable was grouped into two categories: late or early sexual initiation {defined as having first sexual intercourse under 18 years of age [21]}. The number of sexual partners was measured by asking the question "During the past six months, with how many different people have you had sexual intercourse?", and responses were classified into two groups [22,23]: single partner (having only one or no sexual partner) or multiple (i.e., two or more) partners. Partner type was measured by asking the participants to choose their main sexual partners from the four following options: girlfriend/boyfriend, commercial sex worker, one-night stand, others (please specify). Participants who reported having sex only with their girlfriends or boyfriends were categorized as "regular sex" and coded as "1". 
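The scoring rule for the 8-item self-efficacy scale described above (dichotomize each item, sum to a 0-8 index, then split at the maximum score of 8) is mechanical enough that a short sketch may help; the column names se_item_1 ... se_item_8, the toy responses, and the coding below are illustrative assumptions rather than the survey's actual data dictionary.

import pandas as pd

# Hypothetical raw responses: each of the 8 items coded "yes"/"no"/"unsure".
df = pd.DataFrame({f"se_item_{i}": ["yes", "no", "unsure", "yes"]
                   for i in range(1, 9)})
items = [f"se_item_{i}" for i in range(1, 9)]

# "unsure" and "no" are combined and scored 0, while "yes" is scored 1.
scored = df[items].apply(lambda col: (col == "yes").astype(int))

# Sum score ranges 0-8; higher scores indicate higher self-efficacy.
df["se_index"] = scored.sum(axis=1)

# Dichotomize at the maximum value of 8: 1 = high, 0 = low self-efficacy.
df["se_high"] = (df["se_index"] == 8).astype(int)

# Cronbach's alpha for internal consistency (reported as 0.88 in this study).
k = len(items)
alpha = k / (k - 1) * (1 - scored.var(ddof=1).sum()
                       / scored.sum(axis=1).var(ddof=1))
print(df[["se_index", "se_high"]])
print(f"Cronbach's alpha = {alpha:.2f}")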
The remaining respondents were classified as "casual sex" and coded as "0" (Table 1). The subjective assessment of one's need for consistent condom use was indicated by self-perceived risk of HIV infection [16,18,23]. College students were asked to estimate their risk of acquiring HIV infection, with five choices provided: no risk, not sure, small, moderate, or great risk [18]. Since only 77 respondents (3.7%) perceived themselves to be at moderate (1.1%) or great risk (2.6%), "moderate or great risk" were combined, and "no risk" was chosen as the reference group when performing multivariate logistic regression analyses: (1) not sure versus no risk; (2) low risk versus no risk; and (3) moderate/high risk versus no risk. Data analysis The data collected via the website "www.sojump.com" were double-cleaned and analyzed independently by two authors using the Chinese version of SPSS 20.0. The statistical analysis was conducted in the following three stages. Firstly, a description was made of all the variables included in Andersen's behavioral model. Secondly, variables were screened as candidate predictors for the regression model on the basis of the results of bivariable analysis. For categorical variables the chi-squared test was used, and for ordered categorical variables the chi-squared test for trend was implemented. Finally, those variables that were statistically significant in the bivariate analysis were further entered into a multivariate logistic regression model using the backward LR method. To identify the relative importance of factors associated with CCU, each set of factors was entered into the logistic regression models in a hierarchical manner [24], with need factors entered first, followed by enabling factors and predisposing factors. This method not only enabled examination of the effects of enabling factors after ruling out the potential confounding effects of need factors, but also made it possible to assess the effects of predisposing factors after considering both need and enabling factors. The change in the −2 log likelihood associated with the predisposing, enabling or need factors entered determined their relative contributions to consistent condom use and how well the model fitted the data when additional variables were added, with smaller values indicating a better fit. Only variables with P values less than 0.05 were retained in the final model. The adjusted odds ratio (AOR) and 95% confidence interval (CI) were also reported. In terms of need-for-condom factors, nearly one-third (30.1%) of respondents had their first sexual intercourse under 18 years of age, and around one-tenth of sexually experienced college students admitted to having multiple sexual partners in the past 6 months (8.7%) or having sex with casual partners (10.2%). However, only 3.7% perceived themselves to be at moderate/high risk of HIV infection. Bivariable analysis The results of the bivariable analysis are shown in Table 1. Thirteen independent variables, as indicated in Table 1, were found to be significantly associated with consistent condom use. 
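As a minimal sketch of the hierarchical entry strategy just described, the following block fits three nested logistic models with statsmodels and reports the −2 log likelihood of each; the variable names and simulated data are placeholders for the study's actual measures, and the bivariable screening and backward LR elimination steps are omitted for brevity.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2019)
n = 2054  # sexually experienced undergraduates in this study
df = pd.DataFrame({
    "ccu": rng.integers(0, 2, n),           # 1 = consistent condom use
    "late_debut": rng.integers(0, 2, n),    # need factors
    "single_partner": rng.integers(0, 2, n),
    "se_high": rng.integers(0, 2, n),       # enabling factors
    "policy_aware": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),        # predisposing factor
})

need = ["late_debut", "single_partner"]
enabling = ["se_high", "policy_aware"]
predisposing = ["female"]

def fit(cols):
    X = sm.add_constant(df[cols])
    return sm.Logit(df["ccu"], X).fit(disp=0)

m1 = fit(need)                              # Model 1: need factors only
m2 = fit(need + enabling)                   # Model 2: + enabling factors
m3 = fit(need + enabling + predisposing)    # Model 3: + predisposing factors

# The drop in -2 log likelihood as each block enters gauges its contribution.
for label, m in [("need", m1), ("+ enabling", m2), ("+ predisposing", m3)]:
    print(f"{label:15s} -2LL = {-2 * m.llf:.1f}")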
More specifically, those who used condoms consistently tended to be high-grade students and medical students, to have parents in a stable marital relationship, to be heterosexual women or non-heterosexuals, and to report correct knowledge of condom use, lower living expenditures, high levels of self-efficacy, greater knowledge of the national AIDS policy and local volunteer organizations, late sexual initiation, a single sexual partner, regular sex, and lower self-perceived risk of HIV infection. No statistically significant differences were reported between consistent and inconsistent condom users in grade, residential area, father's education, and mother's education. Hierarchical logistic regression The value of −2 log likelihood was significantly reduced when each set of factors (i.e., four need factors, four enabling factors, and five predisposing factors) was introduced into the logistic regression model, with enabling factors producing the largest reduction. These results indicated that consistent condom use (CCU) was inequitably distributed, since it was mainly influenced by the enabling resources rather than the need factors of sexually experienced college students. Model 1 included only the four need factors and indicated that age at first sex, partner number and risk perception were significantly associated with consistent condom use (CCU). When the four enabling factors were added to the model (Model 2), condom use self-efficacy and awareness of the national AIDS policy were statistically significant, while risk perception lost its significance. The final model, Model 3, had the best overall model fit as indicated by the −2 log likelihood. When combining all 13 significant variables (i.e., four need factors, four enabling factors, and five predisposing factors) into the final logistic regression model (Model 3) and adjusting for potential confounding factors, only one predisposing factor (sexual orientation), two enabling factors (condom use self-efficacy and awareness of the national AIDS policy) and two need factors (age at first sex and partner number) were significantly associated with CCU. Compared with heterosexual men, heterosexual women (AOR = 0.78, 95% CI: 0.64-0.96), non-heterosexual men (AOR = 0.64, 95% CI: 0.45-0.92) and non-heterosexual women (AOR = 0.68, 95% CI: 0.47-0.99) were less prone to using condoms consistently. Those with more enabling resources (i.e., higher levels of self-efficacy for condom use and being knowledgeable of the national AIDS policy) and those with lower need for condoms (i.e., late sexual initiation and a single sexual partner) were more likely to be consistent condom users, when compared with their respective counterparts. More specifically, having higher levels of self-efficacy multiplied one's odds of CCU by 2.86 (95% CI: 2.35-3.49). The odds of CCU for those knowledgeable about the national AIDS policy were 1.50 times those of respondents ignorant of the policy (95% CI: 1.23-1.82). Those who initiated their first sexual intercourse at age 18 or older were 1.34 times (95% CI: 1.09-1.64) more likely to use condoms consistently, compared with those who initiated their first sexual intercourse under the age of 18 years. Similarly, compared to those having multiple sexual partners, the odds of using condoms consistently were 1.68 times higher (95% CI: 1.21-2.33) for those with only one single partner (see Model 3 in Table 2). Main findings of this study In our sample, only 61.3% reported using condoms consistently during every sexual encounter. 
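The adjusted odds ratios and 95% confidence intervals quoted above are simply exponentiated logistic regression coefficients; continuing with the hypothetical model m3 from the previous sketch:

import numpy as np

# AOR = exp(beta); 95% CI = exp(beta +/- 1.96 * SE), i.e., the exponentiated
# Wald confidence limits that statsmodels returns via conf_int().
aor = np.exp(m3.params)
ci = np.exp(m3.conf_int())          # DataFrame columns 0 = lower, 1 = upper
table = aor.to_frame("AOR").join(ci.rename(columns={0: "2.5%", 1: "97.5%"}))
print(table.round(2))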
This underscores the urgent need for implementing the "100% condom use program (CUP)" to reduce the vulnerability of unmarried sexually active college students to unwanted pregnancy, HIV infection and other STDs [2,3]. Based on Andersen's behavioral model, hierarchical logistic regression revealed that CCU was inequitably distributed, since its use was primarily influenced by enabling factors. Heterosexual women, non-heterosexual men and women, and those with fewer enabling resources (i.e., lower self-efficacy for condom use and ignorance of the national AIDS policy) and with higher need for condoms (i.e., early sexual initiation and multiple sexual partners) were less likely to be consistent condom users. Comparisons with previous studies The concentration of inconsistent condom use among sexually experienced undergraduates in mainland China with earlier initiation of sexual intercourse and multiple sexual partners is especially notable. The findings from this study support previous research [2,26,29] showing that individuals with early sexual debut were more likely to engage in risky sexual behaviors such as multiple sexual partnerships and inconsistent condom use, thereby contributing to increased risks of unintended pregnancies, HIV infection and other STDs. Self-efficacy was defined as an individual's perception of his or her ability to successfully engage in a selected health behavior. Consistent with previous research [7,8,11,14,16,17,26], perceived self-efficacy of condom use was identified as a significant enabling factor that assists respondents in using condoms consistently. This could be because individuals with higher condom use self-efficacy perceive less difficulty in making condom requests and actually use condoms more consistently, compared with those reporting lower condom use self-efficacy. Our study indicated that individuals who were knowledgeable about the national AIDS policy were more likely to use condoms consistently. This finding is not surprising, as it fits the general pattern of positive intervention effects [25,26]. Those with knowledge about the national AIDS policy often gain a better insight into the symptoms of HIV disease, know more about the availability of free condoms and use this information more effectively to access sexual health services. Consistent with a previous study on the relationship between treatment optimism, safer-sex burnout and consistent condom use [33], sexual orientation was found to be statistically significant, with those identifying themselves as heterosexual women, non-heterosexual men and non-heterosexual women less likely to be consistent condom users than heterosexual men (AOR = 0.78, 0.64 and 0.68, respectively). The increased vulnerability of heterosexual females was mainly attributable to social gender roles and gender differences [26,28], especially in China, where women have less power to negotiate safe sex [9] and male partners have already been identified as a major barrier to condom use for those women who desire to use this male-controlled device [28]. The lower rates of consistent condom use reported among non-heterosexuals in this study might be due to the fact that condoms in China are mainly used to prevent pregnancy rather than STDs [11,32], or because non-heterosexuals, especially MSM, would choose other safer-sex practices such as PrEP, serosorting and strategic positioning (commonly called "seroadaptive behaviors") as alternative strategies to consistent condom use [33][34][35]. 
For example, one study in Taiwan demonstrated that serosorting had emerged as a harm reduction approach for HIV-positive MSM to reduce the risk of HIV transmission through unprotected sex [35]. Limitations Several limitations should be considered when interpreting the findings of this study. First, since this study relied on a cross-sectional design to measure consistent condom use and its influencing factors simultaneously, it was difficult to infer cause-and-effect relationships between these variables. Second, despite considerable efforts to obtain a more geographically diverse sample, we mainly adopted convenience sampling and snowball sampling to select respondents, thus leading to the overrepresentation of respondents from Hubei Province and limiting the generalizability or external validity of the results. Third, this survey relied on participants' self-reported condom use, which may suffer from social desirability bias, thus leading to overestimates of condom use [30]. For example, using prostate-specific antigen as a gold-standard biomarker, Liu and colleagues [31] found that 26-46% of middle-aged female sex workers in China over-reported condom use with all sex partners, including husbands, boyfriends or clients. However, an anonymous and confidential online self-administered survey, rather than a face-to-face interviewer-administered questionnaire, was carried out to minimize potential biases. Implications of the study In spite of the stated limitations, the findings from our study have several implications for the design and implementation of HIV prevention interventions on college campuses. To the best of our knowledge, ours is the first study to employ Andersen's behavioral model as a theoretical framework, combined with hierarchical regression models, to examine the equitable distribution of consistent condom use among sexually experienced undergraduates in mainland China. Our findings suggested that equal use of condoms for equal need for condoms had not been achieved and that inconsistent condom users were mainly those with higher need for condoms (i.e., earlier initiation of sexual intercourse and multiple sexual partners) and those with fewer enabling resources (i.e., lower condom use self-efficacy and ignorance of the national AIDS policy). Furthermore, heterosexual women and non-heterosexual men and women were less likely to use condoms consistently than heterosexual men. Consequently, four main types of intervention aimed at reducing inequities in consistent condom use are recommended. (1) Target students with higher need for condoms: The tendency for those with early sexual initiation to have multiple sexual partners and subsequently to use condoms inconsistently would be labeled "equitable" and "immutable". Therefore, the "100% Condom Use Program" should be immediately promoted among students exhibiting these characteristics, while our long-term goals should be to delay sexual intercourse and decrease the number of sexual partners. In addition, our results also indicated that nearly one-third of respondents had initiated their first sex below the age of 18 years (the minimum age requirement for admission to a university). Therefore, sex education should be started earlier and more sexual and reproductive health services should be provided to help adolescents make wise sexual decisions. Fortunately, the Chinese government has already realized the urgency of this issue and begun to take action to solve it. 
(2) Enhance condom-use self-efficacy: The enabling variable with the strongest effect on CCU is self-efficacy for condom use. Its effects are judged "mutable" and "inequitable". In order to promote consistent use of condoms and also achieve equity in its use, the first and foremost intervention is to improve students' skills for negotiating condom use. Also, interventions should focus on individuals' perceived barriers to negotiating with sexual partners about condom use and develop appropriate strategies to make these individuals feel at ease with the negotiating process [14]. (3) Improve awareness of the national AIDS policy: Awareness of the national AIDS policy was judged "mutable" and "inequitable". Consequently, continued effort should be made to recruit and train student volunteers to publicize the national AIDS policy (i.e., the "Four Frees and One Care" policy) and to expand sex education programs so that free condoms are available to students. (4) Focus on heterosexual women and non-heterosexuals: Heterosexual women and non-heterosexuals, especially non-heterosexual men, with decreased odds of consistent condom use, would be labeled "inequitable" and "immutable". It cannot be denied that consistent and correct condom use remains the goal of preventing unintended pregnancies, HIV infection and other STDs at the population level [14]. However, due to their vulnerability and difficulty in using condoms consistently, future prevention efforts should combine multiple intervention strategies [36] and continue to focus on high-risk groups, especially MSM. For example, interventions targeting MSM for HIV prevention should include, but not be limited to, encouraging abstinence, routine testing and treatment of STDs, and taking PrEP. Conclusions While other studies have described consistent condom use as an infrequently adopted protective behavior, ours is the first to examine the equitable distribution of consistent condom use through the application of hierarchical logistic regression in conjunction with Andersen's behavioral model, both in the general population and among most-at-risk populations. Our findings suggested that the overall level of consistent condom use has remained low among sexually experienced undergraduates in China. Furthermore, consistent condom use was inequitably distributed, since enabling factors exerted greater effects than predisposing and need variables. Inconsistent condom users were mainly heterosexual women and non-heterosexuals, and those with "fewer enabling resources" and "more needs". In order to increase consistency of condom use and simultaneously reduce the remaining inequities, a comprehensive intervention measure should be taken to target heterosexual women and non-heterosexual undergraduates and those with higher need for condoms, improve their condom use self-efficacy and raise their awareness of the national AIDS policy.
2019-08-31T14:19:35.901Z
2019-08-30T00:00:00.000
{ "year": 2019, "sha1": "947245f782ed6b84a892c4ab3045185a6c5087f7", "oa_license": "CCBY", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-019-7435-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "947245f782ed6b84a892c4ab3045185a6c5087f7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219562951
pes2o/s2orc
v3-fos-license
Orexin and Alzheimer's Disease: A New Perspective Orexin's role in human cognition has recently been emphasized, and emerging evidence indicates its close relationship with Alzheimer's disease (AD). This review aims to present recent research on the relationship between orexin and AD. Orexin's role in stress regulation and memory is discussed, including significant findings on sexual disparities in stress response that carry potential clinical implications for AD pathology. There are controversies regarding orexin levels in AD patients, but the role of orexin in the trajectory of AD is still emphasized in the recent literature. Orexin is also discussed in the context of tau pathology, and orexin as a potential therapeutic target for AD is frequently considered. Future directions with regard to the relationship between orexin and AD are suggested: 1) consideration of the AD trajectory in the measurement of orexin levels, 2) the need for objective measures such as polysomnography and actigraphy, 3) the need for close observation of the cognitive profiles of orexin-deficient narcolepsy patients, 4) the need for validation studies using neuroimaging, 5) the need to take into account sexual disparities in orexinergic activation, and 6) consideration of orexin's role as a stress regulator. The aforementioned new perspectives could help unravel the relationship between orexin and AD. INTRODUCTION Orexin (also named hypocretin) is a neuropeptide, and its function is implemented by two G-protein-coupled receptors. Two subtypes exist, orexin A and orexin B, and they are synthesized by neurons located in the lateral hypothalamus and perifornical areas. 1 Orexin has been described in many studies as a key neuropeptide regulating fundamental brain functions, including arousal, appetite, stress and cognition, 2-6 playing a major role in the homeostatic control of our body state. 7 Orexin neurons project to various brain regions, including the prefrontal cortex, hippocampus, thalamus, hypothalamus, locus coeruleus, raphe nuclei, parabrachial nuclei, central gray and nucleus tractus solitarius, and the aforementioned sites have crucial functions in cognitive processes and their consequences. 8 Orexin has been emphasized as an integral research topic not only in sleep research, but also in neuropsychiatry. 4,9 Indeed, abundant research has reported orexin's integral role in human cognition and its relationship with Alzheimer's disease (AD). Therefore, in this review, we will demonstrate what has been reported about the role of orexin in the pathology of AD and provide future research directions pertaining to this field. OREXIN AS A STRESS REGULATOR AND ITS RELATIONSHIP WITH COGNITION With regard to the relationship between orexin and cognition, the recent emphasis on orexin's role as a stress regulator should not be dismissed. [10][11][12] Indeed, a recent study indicated that rats adopted a passive coping behavior after social defeat stress through orexin activation, which induced recognition memory impairments. 13 Orexinergic blockade in the hippocampus reversed stress-induced anxiety behaviors and memory deficits in rats, suggesting the potential of hippocampal orexin as a therapeutic target to minimize stress-induced impairments. 14 
A sexual disparity was noted with regard to orexin's role in stress response, with female rats displaying more vulnerability to repeated stress, demonstrating prominent impairments in habituation and cognitive flexibility along with increased orexin expression, when compared with male rats. 15 In rat models, orexin-A was closely related to the expression of corticotropin-releasing factor (CRF), 16 and infusion of an orexin-B receptor antagonist significantly reduced stress-induced adrenocorticotropin (ACTH). 17 Lower orexin levels were correlated with increased resilience to social stress, 18 while a contrasting result indicated that orexin increased stress resilience by reducing depressive behaviors. 11 An orexin-A antagonist inhibited the stress response via a fronto-hippocampal circuit including the amygdala. 19 Orexin is also involved in reinforcing norepinephrine-mediated long-term potentiation in the dentate gyrus, exerting a critical role in attention and memory. 20 Hippocampal-amygdala interactions were modulated by orexinergic receptors, with orexinergic receptor blockade in the basolateral amygdala inducing long-term potentiation in the dentate gyrus. 21 Therefore, a mediating role of orexin could exert great influence on the process of emotional learning. 21 Indeed, orexin was closely related to fear memory, with negative associations between orexin A levels and fear extinction. 22 Independent of orexin's role as a stress regulator, its role in cognition, memory and learning is frequently discussed. Orexin-deficient mice exhibited decreased spatial-cue-related working memory when compared to control mice. 23 In a rat model, both types of orexin exerted their effects on spatial learning and memory through the orexin 1 receptor, which extends its projections to the hippocampal formation. 24 Orexin might exert a considerable influence on attention as well, with studies of attention-deficit and hyperactivity disorder (ADHD) patients showing decreased serum orexin A levels in human subjects. 25 The relationship between cholinergic transmission and orexin has also been discussed frequently. Activation of the basal forebrain by orexinergic neurons and its critical role in arousal and attention are well known. [26][27][28] Orexinergic neurons and both orexin receptor subtypes are distributed in the basal forebrain, and fine modulation of the cholinergic system is mediated through orexinergic control. 27,28 Such control is suggested to be mediated by gamma oscillations. 29 Orexin A is purported to directly influence cholinergic neurotransmission in the hippocampus, with a lack of response to orexin A noted in aged rats. 30 Differentiating disparate olfactory cues is another important role of orexinergic innervation in the basal forebrain. 31 With regard to its relationship with the medial prefrontal cortex (MPFC), orexinergic receptor signaling was purported to be crucial in potentiating cognitive motivation to eat. 32 A recent report proposed that high-fat-diet-fed, orexin-deficient mice exhibit impaired cognition and increased microglial activation when compared with controls. 33 Moreover, hypocretin in the MPFC reinforces cortical arousal and attention closely related to limbic states. 34 Indeed, orexin is closely associated with dopamine efflux within the prefrontal cortex, specifically through dopaminergic neurons in the ventral tegmental area. 35,36 Orexinergic neurons are lost with aging, 37 and their role in age-related cognitive decline is increasingly emphasized. 
Age-related cognitive decline and orexin are discussed in consideration of orexin's role in appetite. Age-related loss of orexinergic neurons could contribute to impairments in the orexinergic modulation of hippocampal neurochemistry. 38 Despite the important clinical implications of the aforementioned studies, most recent results on the relationship between orexin and cognition have been limited to animal studies. Further human studies are needed to confirm the exact role of orexin in regulating various cognitive pathways. A CRUCIAL LINK BETWEEN OREXIN AND ALZHEIMER'S DISEASE A link between orexin and AD has mostly been discussed in the context of explaining the bidirectional relationship between AD and sleep disturbance. 39 A recent study suggested that amyloid deposition affects memory through its effect on sleep, but how amyloid affects sleep and the consequent memory disruption remains elusive. 40 Orexin may be an important mediating factor that could explain this ambiguity. However, contradictory results exist regarding orexin levels in AD patients. A postmortem study concluded that reduced cerebrospinal fluid (CSF) orexin A levels and orexinergic neuronal cell numbers were found in advanced AD patients. 41 Meanwhile, orexin levels were increased in moderate to severe AD patients when compared with mild AD patients or healthy controls. 42 Hippocampal overexpression of orexin did not have any association with amyloid beta (Aβ) deposition, whereas Aβ aggregation increased when orexinergic neurons were rescued in amyloid precursor protein (APP)/presenilin-1 (PS1) transgenic mice. 43 Moreover, there was no difference in orexin levels between AD patients and healthy controls, and orexin levels did not reach statistical significance with regard to apolipoprotein E4 (ApoE4) genotype. 44 In one study that considered the diurnal variation of orexin levels, orexin levels did not differ between AD patients and the control group, and lower Aβ levels were associated with a higher amplitude of the orexin circadian rhythm. 45 In a recent study investigating the relationship between partial sleep deprivation and CSF parameters pertaining to AD-related neurodegeneration, a significant increase in orexin levels was noted after partial sleep deprivation, while the other parameters remained insignificant. 46 The role of orexin with regard to the importance of the sleep-wake cycle in the trajectory of AD is frequently demonstrated. Orexin has an integral role in promoting wakefulness in mammals, with cessation of its neuronal firing during sleep. 3 Increasing research is directed at understanding the sleep-wake cycle and its control of Aβ levels in brain interstitial fluid. 47 Soluble Aβ has been purported to be a major culprit in inducing neurotoxic effects resulting in synaptic loss and dysfunction, and the sleep-wake cycle is predicted to play a major role in the clearance of soluble Aβ. 48 Orexinergic signaling has been suggested as a major pathway for the hippocampal circadian oscillation that is suspected to exert a considerable influence on the expression of AD-risk genes that play a major role in the production and transport of Aβ. 49 Excessive orexinergic signaling is purported to be a culprit in the sleep-wake cycle disruption that hastens Aβ deposition. 50 In line with this proposition, interstitial fluid Aβ levels increased with orexin infusion and decreased with an orexin antagonist in one study. 51 One study also found that orexin disrupted the degradation of Aβ by microglia. 52 
Orexin is also discussed in the context of tau pathology, 53 with orexin levels proportionally increased with total tau protein (T-tau) levels in AD patients. Moreover, orexin showed a positive relationship with phosphorylated tau (P-tau) levels in cognitively normal elderly subjects, even when controlling for years of education and ApoE4 status. 42 In a mouse model, orexin knockdown resulted in inhibition of T-tau and P-tau levels, with the adenosine A receptor showing expression levels closely paralleling those of orexin. 54 Tau proteins usually represent brain neuronal injury subsequent to AD pathology, and how orexin exerts its influence on tau-mediated neuronal injury remains ambiguous. Many possible mechanisms have been proposed through which orexin exerts pernicious effects that hasten AD pathology. In an AD mouse model, orexin A aggravated mitochondrial impairment. 55 Genetic vulnerability might be another culprit, with one study suggesting that the orexin receptor 2 gene could be a risk factor for AD. 56 With regard to neuronal injury and orexin, a few recent findings present potential candidates for future research directions. Orexin has been suggested as one important neuropeptide in the modulation of metaplasticity in the brain, with evidence for fine modulation of long-term potentiation and long-term depression in animal models. 57 Recently, orexinergic innervations were discovered in the adult rat subventricular zone, which is a site of active neurogenesis. 58 OREXIN AS A POTENTIAL THERAPEUTIC TARGET FOR ALZHEIMER'S DISEASE Given the key previously published findings on its relationship with AD, orexin has been discussed as a potential therapeutic target for AD. Orexin A and B treatment both regulated the hippocampal oscillator, which is suggested to play a major role in the circadian control of Alzheimer's disease-risk genes. 49 Given the emphasis on understanding orexin in consideration of its role in sleep modulation, 43 orexin antagonists are an interesting target of research in patients at risk of or suffering from AD. Suvorexant, an orexin antagonist approved by the FDA for insomnia, can be utilized in future clinical trials to unravel Aβ dynamics. 47 Indeed, almorexant, another orexin antagonist, reduced interstitial Aβ levels during the light period. 51 Methods of delivering the therapeutic effects of orexin are another matter under discussion. Orexin gene delivery through a lentiviral vector was suggested as a study modality to explore the relationship between orexin and AD, and such a method could be applied to the development of new therapeutic agents. 59 Intranasal delivery is another option, with a recent review demonstrating that intranasal administration of orexin effectively increased neuronal activation in an animal model. 60 FUTURE DIRECTIONS IN RESEARCH ON THE RELATIONSHIP BETWEEN ALZHEIMER'S DISEASE AND OREXIN Several important considerations are necessary in future research regarding the link between AD and orexin. First, examining orexin level changes along the AD trajectory will be conducive to understanding the interactive link between orexin and AD pathology. Indeed, a recent study comparing changes in CSF tau, neurofilament light and YKL-40 levels between groups in different phases of the AD trajectory provided valuable insight into potential CSF biomarkers that could be utilized in clinical settings. 61 The same approach could be applied to orexin measured from CSF, thus clarifying how orexin affects AD pathology longitudinally. 
Second, quantification of orexin levels, along with objective sleep studies including polysomnography and actigraphy, should be used to accurately reflect the sleep-wake cycle and sleep patterns in the elderly population. Third, cognitive profiles and longitudinal observation of narcolepsy patients, with consideration of the incidence of AD in this particular group, will be integral to understanding the link between AD and orexin. In sleep research, orexin has remained a central research arena for narcolepsy, with key findings reporting its major relationship with the pathophysiology of narcolepsy. Mutations in the orexin receptor gene were discovered in canine narcolepsy cases, 62 and subsequent reports demonstrated deficient orexin levels in narcolepsy patients. 63 In a recent report, when compared with controls, type 1 narcolepsy patients exhibited reduced amyloid deposition. 64 Moreover, neurodegenerative CSF biomarkers, including Aβ, T-tau and P-tau levels, were reduced in narcolepsy patients when compared with healthy individuals. 65 Whether narcolepsy patients' orexin-deficient status is a protective factor against AD should be explored in further studies. Fourth, orexin level studies should be implemented along with validation studies utilizing neuroimaging modalities. Indeed, a recent article defined cutoff values of CSF biomarkers according to positron emission tomography (PET) status. 66 Orexin levels measured in different phases of the AD trajectory, together with their relationship to PET findings and to neural networks manifested by magnetic resonance imaging (MRI), will provide novel insight into the relationship between AD and orexin. Fifth, the relationship between orexin and female hormones will provide an interesting research arena for exploring the link between AD and orexin. Female sex is a well-known risk factor for AD, and the search for gender-specific predictors of AD is a major research field that deserves more clinical attention. 67 As mentioned earlier in this article, a sexual disparity in orexin expression levels was demonstrated in rats, with females displaying more vulnerability to stress and increased orexin activation. 15 The neuroprotective effects of estrogen, the inherent vulnerability of the female brain to AD, and recent research findings of estrogen receptor localization in neurofibrillary tangles are all intriguing topics under discussion, 68,69 but most previous studies on the effects of estrogen replacement therapy on AD prevention have been disappointing. 70 The relationship between sleep and estrogen has been reported frequently, [71][72][73][74] and in consideration of the integral role of orexin during wake and sleep states, investigations of the mediating role of orexin in AD pathology and sleep in female patients are needed. Inhibitory effects of estrogen on orexin A were reported in one study, 75 and orexin A was reported to be responsible for progesterone secretion in a sheep model. How female hormones, sleep and AD pathology interact could be a new pathway to understanding the AD trajectory. Lastly, orexin plays a critical role as a stress regulator, and the relationship between orexin and cognition is often discussed in the context of stress. A recent review reported accumulated evidence that prolonged wakefulness evokes oxidative stress, increasing neuronal oxidative damage. 76 
Orexin is a wake promoter, and examining the relationship between reactive oxygen species and orexin levels in normal controls without AD pathology, preclinical dementia, mild cognitive impairment, and AD patients will be conducive to unraveling orexin's role as a stress regulator and its impact on cognition. CONCLUSION In this review, we have presented recent, updated evidence on the relationship between orexin and AD pathology. Many previous studies have suggested sleep as a modifiable risk factor for AD, 77-80 with emphasis on early intervention for those on the AD trajectory. However, there is still no clear-cut model for understanding the role of sleep in the course of AD pathology. Orexin, a major neurotransmitter regulating sleep, stress and appetite, may have a direct or indirect effect that could help uncover the relationship between sleep and AD. Further well-designed prospective studies are needed to explicate the link between orexin, sleep and cognition, adopting more diverse research methods and a multifaceted understanding of orexin.
Prognostic survival biomarkers of tumor-fused dendritic cell vaccine therapy in patients with newly diagnosed glioblastoma Dendritic cell (DC)-based immunotherapy has been applied to glioblastoma (GBM); however, biomarkers informing response remain poorly understood. We conducted a phase I/IIa clinical trial investigating tumor-fused DC (TFDC) immunotherapy following temozolomide-based chemoradiotherapy in patients with newly diagnosed GBM and determined prognostic factors in patients receiving TFDC immunotherapy. Twenty-eight adult patients with GBM, isocitrate dehydrogenase (IDH) wild-type (IDH-WT), were enrolled; 127 TFDC vaccine injections (4.5 ± 2.6 times/patient) were administered. Patients with GBM IDH-WT had a respectable 5-year survival rate (24%), verifying the clinical activity of TFDC immunotherapy, particularly against O6-methylguanine-DNA methyltransferase (MGMT) unmethylated GBM (5-year survival rate: 33%). To identify novel factors influencing overall survival (OS) in GBM IDH-WT treated with TFDC immunotherapy, clinical parameters were assessed and comprehensive molecular profiling involving transcriptome and exome analyses was performed. MGMT promoter methylation status, extent of tumor resection, and vaccine parameters (administration frequency, DC and tumor cell numbers, and fusion ratio) were not associated with survival following TFDC immunotherapy. Old age and pre- and post-operative Karnofsky performance status were significantly correlated with OS. Low HLA-A expression and lack of CCDC88A, KRT4, TACC2, and TONSL mutations in tumor cells were correlated with better prognosis. We validated the activity of TFDC immunotherapy against GBM IDH-WT, including chemoresistant, MGMT promoter unmethylated cases. The identification of molecular biomarkers predictive of TFDC immunotherapy efficacy in GBM IDH-WT will facilitate the design of, and patient stratification in, a phase-3 trial to maximize treatment benefits. Supplementary Information: The online version contains supplementary material available at 10.1007/s00262-023-03482-8. Introduction Glioblastoma (GBM) is the most common malignant primary brain tumor, accounting for 56.1% of all gliomas [1]. The median overall survival (OS) of GBM patients following surgical intervention, radiotherapy, and concomitant and adjuvant temozolomide (TMZ) is approximately 14.6 months [2]. Cancer immunotherapy is a broad modality comprising several technologies, including immune checkpoint inhibitors (ICIs), that have emerged as an attractive approach to cancer treatment, particularly in patients with dismal prognosis. Extensive research has been conducted on the application of immunotherapy to treat GBM, with a particular focus on targeting immune checkpoint molecules, cytokines, tumor-associated macrophages (TAMs), and dendritic cells (DCs) [3]. Recently, a phase III immunotherapy clinical trial for GBM treated with DC vaccinations achieved extended patient survival compared with an external control cohort [4]. DCs mediate innate and induce adaptive immune responses [5] and are considered the most potent and versatile antigen-presenting cells, readily initiating T-cell responses [6]. CD8+ cytotoxic T lymphocytes require cross-priming by DCs to initiate a productive T-cell response to tumor cells and viruses [7]. DCs are considered promising tools for cancer immunotherapy. DC-based immunotherapy has been described as a potential therapeutic strategy to improve clinical outcomes for GBM patients.
Various approaches, including pulsing DCs with glioma tissue or peptides, have been attempted [4,8]; previously, we conducted a clinical trial using a unique tumor-fused DC (TFDC) methodology [9][10][11][12]. To create the TFDC vaccine, patient-derived tumor cells were cultured and expanded. The advantage of using cultured tumor cells for DC activation is that the required amount of tumor-associated antigens (TAAs) can be obtained efficiently, even from small tumor biopsy specimens. In TFDCs, the cytoplasm of DCs and whole tumor cells is integrated without nuclear fusion, allowing retention of the functions of both cell types, including co-expression of tumor-derived whole TAAs and DC-derived major histocompatibility complex (MHC) class I/II molecules [13]. Generally, TFDCs process various antigenic peptides from whole tumor cells, which are loaded onto MHC class I molecules on the cell surface, without the need to take up exogenous antigens. Subsequently, the antigenic peptide-MHC class I complexes can stimulate CD8+ T cells [13]. Thus, TFDC immunotherapy is expected to facilitate more effective antigen presentation than other DC-based immunotherapies. We have previously described the safety, feasibility, and mechanisms of TFDC therapy, including cytoplasmic accumulation of tumor antigen following TMZ-based chemoradiotherapy, as well as the immunological and clinical responses in GBM patients [11]. In our previous phase I/IIa study, 22 patients with newly diagnosed and 10 patients with recurrent GBM underwent TFDC immunotherapy combined with TMZ, with a median OS of 30.5 and 18.0 months, respectively [11]. We only observed transient grade 1 toxicity of injection-site reactions [11]. This clinical trial demonstrated that TFDC immunotherapy is a safe and effective treatment method for patients with GBM and highlighted the need to identify biomarkers that predict patient response to the TFDC vaccine. Immunomodulatory factors within the tumor microenvironment (TME), including PD-L1 expression, T-cell infiltration, tumor mutational burden (TMB), and HLA expression, have been widely reported to correlate with immunotherapeutic responses [14][15][16][17][18]. However, the biomarkers predicting response to immunotherapy against GBM may differ from those against other cancers. Gromeier et al. [17] reported that a low TMB is associated with favorable clinical outcomes following ICI or oncolytic virus therapy in patients with malignant gliomas and may therefore represent a novel way to stratify patients for cancer immunotherapy [17]. Additionally, Zhang and colleagues reported the potential of machine learning algorithms to validate the predictive capacity of immune cell-related long non-coding RNAs for prognosis and immunotherapy response in patients with GBM [19]. Therefore, the primary aim of the current study was to explore, via comprehensive molecular profiling analyses, novel factors predicting the response to TFDC-based immunotherapy with TMZ in patients with GBM. Patients Inclusion and exclusion criteria for this study were the same as previously described [11]. In the current study, patients with GBM IDH-WT were eligible. Sixty patients who underwent TFDC vaccination from January 2006 to December 2016 were enrolled; of these, seven with a history of prior TFDC vaccination were excluded.
Subsequently, 53 patients who were newly vaccinated with TFDC vaccines were screened, and three blinded pathologists (NF, KG, and MS) diagnosed all surgical specimens based on the World Health Organization (WHO) 2016 classification. Twenty-five patients were excluded because of the presence of a recurrent tumor (n = 6), a rare tumor type according to pathological re-diagnosis (n = 2), a diagnosis of anaplastic astrocytoma (n = 3), or an IDH mutant tumor (n = 14). Thus, 28 eligible patients, aged 21-74 years (median: 52 years), were enrolled and received a total of 127 vaccine injections (Fig. 1a). Of these, 19 patients were included in a previous study [11]. In the current study, we performed data fixation on December 31, 2020, with a follow-up period of 32.8 ± 21.8 (mean ± standard deviation (SD)) months. TFDC vaccination TFDC vaccines were generated from cultured tumor cells derived from surgical specimens and DCs from peripheral blood, as described previously [11,12]. The vaccine was administered subcutaneously in the cervical region, according to our protocol, as shown in Fig. 1b. To assess PD-1, CD3, CD8, and Foxp3 expression, the stained sections were surveyed under a low-power field (× 40), and five hot spots were selected. Positive cells in these areas were counted in a high-power field (× 400, 0.47 mm²) [31]. For quantitative evaluation of HLA-A, the stained sections were screened in a low-power field (× 40), and the middle-power field (× 200) with the densest spot was assessed. HLA-A-positive areas were determined using Fiji software (version 2.0.0-re-69/1.52p) [32]. Briefly, a brown channel was extracted from the image using the "color deconvolution" and "H DAB" functions. The brown channel image was thresholded from 30 to 150, and the thresholded regions were measured as HLA-A-positive areas. PD-L1 expression was scored as the percentage of tumor cells expressing PD-L1: 3+, ≥ 50%; 2+, ≥ 5% and < 50%; 1+, ≥ 1% and < 5%; and 0, < 1% [31]. This assessment was performed by JT and NF in a blinded manner. Nucleic acid extraction from paraffin-embedded tissues and tumor cells Two 10-μm FFPE sections were used for DNA extraction with the Allprep DNA/RNA FFPE kit (QIAGEN, Venlo, the Netherlands). Cryopreserved tumor cells and patient-derived PBMCs were thawed immediately before use. DNA and RNA were extracted from subconfluently grown tumor cells and PBMCs using the DNeasy Blood & Tissue kit and the RNeasy plus mini kit or Allprep DNA/RNA mini kit (QIAGEN), respectively. Prior to use, the extracted DNA and RNA were stored in -20 °C and -80 °C freezers, respectively. Methylation-specific PCR for detecting MGMT promoter methylation A previously reported protocol [35] was slightly modified to detect DNA in FFPE sections. Briefly, DNA was treated with bisulfite using the EZ DNA Methylation-Gold kit (Zymo Research, Irvine, CA, USA). The primer sequences for MGMT [35] were forward: 5′-GAG AGA TTT GTG TTT TGG GTT TAG TG-3′ and reverse: 5′-CCT TCA ACC AAT ACA AAC CAA ACA A-3′. PCR was performed using 50-100 ng of bisulfite-treated DNA with the AmpliTaq Gold™ 360 master mix (Applied Biosystems) as follows: denaturation at 94 °C for 10 min, followed by 35 cycles of denaturation (94 °C for 30 s), primer annealing (62 °C for 30 s), and extension (72 °C for 30 s), and a final extension at 72 °C for 10 min. CpGenome Universal Methylated DNA (Merck Millipore, Burlington, MA, USA) and CpGenome Universal Unmethylated DNA (Merck Millipore) were used as positive and negative controls.
PCR products (10 μL) were analyzed on 2% agarose gels stained with ethidium bromide at a final concentration of 0.1 μg/mL. If neither band was detected, the result was recorded as "not detected." Whole-transcriptome sequencing and analysis For RNA quality control, the RNA integrity number (RIN) was determined using the Agilent 2100 bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). Samples with RIN values ≥ 8.6 were used for library preparation. Poly(A) RNA was extracted from 2 μg of total RNA using the Dynabeads mRNA DIRECT micro kit (Thermo Fisher Scientific, Waltham, MA, USA). RNA libraries were prepared using the Ion Total RNA-Seq kit v2 (Thermo Fisher Scientific) and sequenced on Ion Chef and Ion Proton systems (Thermo Fisher Scientific). All RNA sequencing data were processed using the CLC Genomics Workbench (QIAGEN), and the reads were mapped to the ENSEMBL reference human genome GRCh37. All samples were divided into two groups based on the median patient OS and analyzed for differences in gene expression; genes with a fold change > 2 and p < 0.05 were considered significantly differentially expressed. Gene Ontology (GO) analysis was performed using the GO Consortium resources (http://geneontology.org/). Genes included in a specific GO term were assigned to two groups based on median expression levels to determine differences in survival between the high- and low-expression groups. Reverse transcription (RT)-quantitative PCR For mRNA expression analysis, total RNA from cultured tumor cells was reverse-transcribed using the PrimeScript RT master mix (TaKaRa Bio, Inc.). Real-time amplification was performed on a QuantStudio 5 RT-PCR system (Applied Biosystems); three biological replicates were used. mRNA expression was analyzed using TaqMan gene expression assays (Applied Biosystems), with Hs02786624_g1 for GAPDH used as an internal control and Hs01058806_g1 for HLA-A as the target gene. PCR was performed under the following conditions: denaturation at 95 °C for 10 min, followed by 40 cycles at 95 °C for 15 s and 60 °C for 1 min. HLA-A mRNA expression was calculated using the 2^−ΔΔCt method (a minimal sketch of this calculation appears below). One case was used as a reference to evaluate relative expression levels. Whole-exome sequencing Tumor DNA and genomic DNA derived from PBMCs were used for whole-exome sequencing. Exome libraries were prepared using the Ion AmpliSeq Exome RDY kit (Thermo Fisher Scientific). Sequencing was performed on Ion Chef and Ion Proton systems (Thermo Fisher Scientific). All exome sequencing data were processed using the CLC Genomics Workbench 21.0.5 (QIAGEN), and the reads were mapped to the ENSEMBL reference human genome GRCh37. The oncoplot was constructed with R 4.1.2, Rstudio v2021.09.01+372, and Maftools 2.10.0 [37] following conversion of variant calling files to mutation annotation format files with vcf2maf v1.6.21 [38], Ensembl variant effect predictor 104.3 [39], and Miniconda. TMB was defined as the number of somatic nonsynonymous mutations per megabase in the target region of the exome panel. Statistical analysis Continuous data are expressed as the mean ± SD, and categorical data are expressed as numbers and percentages. KPS is expressed as the median and interquartile range (IQR). To compare characteristics among patient subgroups with different MGMT promoter methylation status (unmethylated, methylated, and not detected), the Kruskal-Wallis rank test and Fisher's exact test were used, as appropriate.
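The relative-quantification step above is the standard 2^−ΔΔCt calculation: the target Ct is first normalized to the internal control, then to the calibrator case. A minimal sketch of the arithmetic, assuming hypothetical triplicate Ct values (the study itself ran TaqMan assays on a QuantStudio 5; none of the numbers below are from the paper):

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative mRNA expression by the 2^-ddCt method.

    ct_target/ct_ref: mean Ct of the target (e.g., HLA-A) and the internal
    control (e.g., GAPDH) in the sample of interest; ct_target_cal/ct_ref_cal:
    the same quantities in the calibrator case, whose expression is set to 1.0.
    """
    dct_sample = ct_target - ct_ref              # normalize to internal control
    dct_calibrator = ct_target_cal - ct_ref_cal
    ddct = dct_sample - dct_calibrator           # normalize to reference case
    return 2.0 ** -ddct

# Hypothetical triplicate Ct values (illustration only):
hla_a = np.mean([26.1, 26.3, 26.0])   # target gene, sample of interest
gapdh = np.mean([18.2, 18.4, 18.3])   # internal control, same sample
print(relative_expression(hla_a, gapdh, ct_target_cal=27.0, ct_ref_cal=18.0))
# -> 2^-(7.83 - 9.00) ≈ 2.2-fold the calibrator's expression
```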
OS was calculated from the day of the initial surgery until the date of death from any cause or until censoring. The log-rank test was used to compare survival differences for each variable. Univariate and multivariate Cox proportional hazards regression analyses were used to assess the association between OS and other variables. For the evaluation of vaccine parameters, OS was defined from the day of the third vaccination (or the last vaccination, for participants who received fewer than three vaccines) until the date of death from any cause. Multivariate analyses were performed on parameters that were estimated to be relevant by a consensus of the clinical team and statistical experts. Cases with missing data were omitted, and the remaining available data were analyzed. Statistical analyses were performed using STATA 14 (StataCorp LLC, College Station, TX, USA) or GraphPad Prism version 9 (GraphPad Software, Boston, MA, USA). All p-values were two-sided, and the significance level was set at p < 0.05. Association between MGMT promoter methylation status and patient OS The demographic data for all patients are presented in Table 1. The median survival time and 5-year survival rate of GBM IDH-WT were 26.0 months and 23.2%, respectively (Fig. 1c, Table 1). MGMT promoter methylation is an independent favorable prognostic factor in patients with GBM who receive TMZ and radiotherapy [40]. The 28 IDH-WT GBM tumors were stratified into three subgroups based on MGMT promoter methylation status (Fig. 1d). Kaplan-Meier analysis showed that the 5-year survival rates in the subgroups with unmethylated, methylated, and undetected MGMT were 33.3%, 16.7%, and 25.0%, respectively (Fig. 1d). The 2-year survival of patients with GBM with unmethylated MGMT is typically less than 10% [40]; thus, this excellent survival rate demonstrated the effectiveness of TFDC therapy for this typically chemoresistant subgroup of GBM. The patients with unmethylated MGMT in their tumors survived longer than those in the other two subgroups; however, no significant difference was detected compared with the methylated and not-detected groups (p = 0.814 and p = 0.738, respectively; Fig. 1d). The univariate Cox proportional hazards regression model determined that MGMT promoter methylation status was not a prognostic factor for patients with IDH-WT GBM (Table 2), whereas age and pre-/postoperative KPS were. After age and sex adjustments, pre-/postoperative KPS was not a prognostic factor (Table 2). TFDC-based vaccine parameters were not significant prognostic factors (Supplementary Table S1). Low tumor HLA-A expression predicted better OS in GBMs treated with TFDC immunotherapy RNA sequencing was performed and analyzed for 15 of 28 GBM IDH-WT specimens. We divided the specimens into two groups based on median survival time, and Kaplan-Meier analysis revealed a difference in survival between the groups (Supplementary Fig. S1). There was no significant difference in baseline characteristics between the groups (Supplementary Table S2). In total, 473 differentially expressed genes (DEGs) were identified between the groups (Fig. 2a). GO analysis for biological processes revealed 327 enriched GO terms. Among the 15 GO terms with the highest enrichment scores, 5 were associated with the MHC (Fig. 2b). To validate the gene expression data from next-generation sequencing (NGS), we used 14/15 tumor samples (one sample was not available because of the lack of an RNA sample) to perform RT-PCR analysis of HLA-A expression.
We confirmed that HLA-A expression levels detected via NGS and RT-PCR showed a strong positive correlation (p < 0.001; Fig. 2c). Twenty-eight GBM IDH-WT tumors were divided into high- and low-HLA-A groups, as determined by NGS, with mean relative HLA-A expression levels of 1.72 ± 0.40 and 0.96 ± 0.41, respectively (p = 0.008; Fig. 2d). We next investigated the relationship between the MHC and clinical outcomes using the Cox regression model and the Kaplan-Meier log-rank test (Fig. 2e-g). Low tumor expression of HLA-A, but not of HLA-B or HLA-C, was significantly associated with favorable OS in GBM patients treated with TFDC immunotherapy (p = 0.014). This survival impact of low HLA-A was considered specific to TFDC immunotherapy, as analysis of the CGGA GBM dataset revealed no association of HLA-A, HLA-B, or HLA-C with survival (Fig. 2h-j). Additionally, there were no significant associations between OS and HLA-DPA, HLA-DQA, or HLA-DRA levels (Supplementary Fig. S2). Cox regression analysis confirmed that low HLA-A expression was the only prognostic factor for survival in this cohort (Supplementary Table S3). Analysis of expression of HLA-A and TIL markers and OS We next used immunohistochemical staining of GBM IDH-WT surgical specimens (n = 28) to assess HLA-A expression and various TIL and immune markers, comparing the HLA-A high and low groups (Fig. 3a). HLA-A expression levels detected by NGS and IHC showed a positive correlation (p = 0.012; Fig. 3b). We divided the 28-patient GBM cohort into two groups according to the median HLA-A IHC-positive area. Patients in the HLA-A low-staining group tended to live longer than those in the high-staining group, according to Kaplan-Meier analysis (Fig. 3c). The percentage of PD-L1-negative tumors (score 0) was 35.7% and 14.3% in the HLA-A low- and high-staining groups, respectively, revealing no significant difference between the groups (Fig. 3d). Although there was no statistical significance, the survival curve of PD-L1-negative tumors (n = 7) shifted right compared with that of PD-L1-positive tumors (Fig. 3e). The mean ratios of Foxp3/CD3 and Foxp3/CD8 in the HLA-A high- versus low-staining groups were 0.060 ± 0.067 versus 0.027 ± 0.036, and 0.100 ± 0.106 versus 0.046 ± 0.067, respectively. The Foxp3/CD3 and Foxp3/CD8 ratios were approximately two-fold higher in the high HLA-A staining group than in the low HLA-A staining group; however, the differences were not significant (Fig. 3f, g). There were 5.7 ± 9.8 versus 3.7 ± 5.8 tumor-infiltrating PD-1-positive cells in the HLA-A high- versus low-staining groups, respectively, with no significant difference between the groups (Fig. 3h). Whole-exome analysis of GBM tumors identified prognostic gene variants in GBM IDH-WT treated with immunotherapy We performed whole-exome analysis using DNA from 14/28 GBM IDH-WT tumor specimens with matched PBMCs. All exome-seq cases (n = 14) were included among the RNA-seq cases (n = 15); matched PBMCs were not obtained in one case. Figure 4a shows 55 genetic variants, each identified in more than three samples. The median TMB was 3 (Supplementary Table S4). The CCDC88A variants were found in three tumors, all in the short-survival group. TONSL variants were observed in 5/14 (36%) patients, all of whom belonged to the high HLA-A expression group, as determined by NGS; the presence of a TONSL mutation was significantly associated with high HLA-A expression in tumor cells (Fig. 4a; Supplementary Table S5).
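The survival comparisons in this and the following analyses share one pattern: dichotomize the cohort (by median expression, median staining area, or variant status) and compare OS with Kaplan-Meier estimates and a log-rank test. A minimal sketch of that pattern, assuming the Python lifelines library and invented survival data (the study performed these analyses in STATA and GraphPad Prism):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented cohort: OS in months, death indicator, dichotomized group label.
df = pd.DataFrame({
    "os_months": [12, 26, 61, 8, 31, 70, 15, 22, 64, 18],
    "death":     [1,  1,  0,  1, 1,  0,  1,  1,  0,  1],   # 0 = censored
    "group":     ["high", "high", "low", "high", "low",
                  "low", "high", "low", "low", "high"],
})

high = df[df["group"] == "high"]
low = df[df["group"] == "low"]

kmf = KaplanMeierFitter()
for label, sub in [("HLA-A high", high), ("HLA-A low", low)]:
    kmf.fit(sub["os_months"], event_observed=sub["death"], label=label)
    print(label, "median OS:", kmf.median_survival_time_)

# Log-rank test between the two survival curves
result = logrank_test(high["os_months"], low["os_months"],
                      event_observed_A=high["death"],
                      event_observed_B=low["death"])
print("log-rank p =", result.p_value)
```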
We next attempted to stratify survival using the eight genes, dividing the cohort into two groups based on the presence of variants. Kaplan-Meier analyses and the log-rank test revealed that CCDC88A, KRT4, TACC2, and TONSL variants were significantly associated with poor prognosis (Fig. 4b-e; Supplementary Table S6). We examined whether these gene variants were associated with the TMB. CCDC88A mutant tumors had a significantly higher TMB than did CCDC88A wild-type tumors (Fig. 4f). KRT4 and TONSL mutant tumors tended to have an increased TMB compared with their wild-type counterparts (Fig. 4g, i); however, TACC2 mutant tumors did not (Fig. 4h). The mean TMB was approximately two-fold higher in the high HLA-A expression group than in the low HLA-A expression group; however, this difference was not significant (Fig. 4j). All three long-term survivors with GBM IDH-WT who lived over 5 years (B14, B32, and B37) were MGMT unmethylated, had low HLA-A expression, and had no variants in any of the four prognostic genes, CCDC88A, KRT4, TACC2, and TONSL (Figs. 1d, 4a). We therefore analyzed whether MGMT promoter methylation status was associated with HLA-A expression, TME features, the TMB, and the genetic mutations that were considered prognostic factors in this study. The HLA-A-positive area and the Foxp3/CD8 ratio did not differ between the methylated and unmethylated groups. The Foxp3/CD3 ratio was two-fold higher in the methylated group than in the unmethylated group (Supplementary Fig. S4a-c). The number of PD-1-positive cells was ten-fold higher in the unmethylated group; however, this difference was not significant (Supplementary Fig. S4d). The mean TMB was approximately two-fold higher in the methylated group than in the unmethylated group, but this difference was not significant (Fig. 4k). CCDC88A variants were only found in the methylated group (Fig. 4a, Supplementary Table S7). Discussion We have previously described the safety, effectiveness, and mechanisms of TFDC therapy combined with TMZ, demonstrating immunological and clinical responses in patients with newly diagnosed and recurrent GBM in a phase I/IIa trial [11]. One of the rate-limiting factors in TFDC-based immunotherapy was securing consistent production, in terms of both number and viability, of harvested tumor cells. The median number of tumor cells in the TFDC immunotherapy used in this study was 0.6 × 10⁶. We previously reported that the efficiency of creating TFDCs is 61.6% with a 2:1 fusion ratio and only 7.2% at a 10:1 ratio [11]. Dhodapkar et al. found that 1.6 to 4.0 × 10⁶ antigen-pulsed DCs induced an effective immune response in healthy adults [41]. Accordingly, we propose a DC count of 1.2 × 10⁶, a tumor cell count of 0.6 × 10⁶, and a fusion ratio of 2:1 as appropriate conditions for a TFDC vaccine. There are no established biomarkers that can predict the long-term survival of patients with GBM after DC-based immunotherapy. The current study suggested that the level of HLA-A expression in GBM IDH-WT is a significant prognostic factor in patients treated with TFDC immunotherapy. Given that T cells in localized tumor tissue as well as in systemic peripheral blood play a pivotal role in the regulation of anti-tumor immunity, we note that the Foxp3/CD3 and Foxp3/CD8 ratios trended higher in the high HLA-A expression group than in the low HLA-A expression group. These results suggest that immunosuppressive TILs are more prevalent in tumors with high HLA-A expression.
HLA-A expression in the tumor can influence the antitumor immune balance, which may affect tumor response to TFDC immunotherapy. Limited data are available on the impact of HLA expression on the prognosis of GBM. Schaafsma et al. reported favorable outcomes in patients with glioma with low HLA expression who were treated with ICIs; however, this was not observed in those with GBM [18]. In general, tumors with higher levels of HLA-A expression are regarded as immunologically "hot" and good responders to cancer immunotherapy. However, the present study demonstrated that GBMs with lower HLA-A expression were associated with prolonged survival. GBM with downregulated HLA-A expression is considered to be in a particularly immunotherapy-naïve state. Modification of DCs, following stimulation and appropriate antigen presentation, maintains an immunosupportive TME [18,42], and TFDC-based immunotherapy could provide a clinical benefit to immunologically naïve tumors because of the preferable immunological balance in HLA-A low-expression GBM. To the best of our knowledge, this is the first study analyzing genetic mutations in GBM patients treated with TFDC immunotherapy to evaluate the effects of gene variants on clinical outcomes. We identified mutations in CCDC88A, KRT4, TACC2, and TONSL as potential biomarkers for poor prognosis in GBM patients receiving TFDC-based immunotherapy. Patients with CCDC88A, KRT4, and TONSL mutant tumors tended to have a higher TMB than those with wild-type tumors. Gromeier et al. [17] reported that a very low TMB correlated with longer survival of patients with recurrent GBM following treatment with an oncolytic virus or ICI, as evidenced by immunological engagement. Moreover, the Kyoto Encyclopedia of Genes and Genomes antigen processing and presentation score was significantly higher in non-hypermutational samples (< 10 mutations/megabase) in IDH-WT gliomas [43]. Thus, particularly in IDH-WT tumors, a low TMB may be associated with a better immunotherapeutic effect. The current study suggests that the TMB, as well as mutations in CCDC88A, KRT4, TACC2, and TONSL, could represent important prognostic factors in patients with newly diagnosed IDH-WT GBM treated with TFDC immunotherapy. However, the mechanism by which these gene mutations negatively impact prognosis remains unclear. In the current study, the 5-year survival of patients with GBM with unmethylated MGMT was 33.3%; this excellent survival rate demonstrated the effectiveness of TFDC therapy. However, whether MGMT status is a prognostic factor for GBM treated with DC-based immunotherapy remains controversial. Five clinical trials of DC-based immunotherapy demonstrated that MGMT methylation was an indicator of favorable prognosis in GBM patients [4,[20][21][22][23][24]]; however, two trials presented contrasting results [25,26]. Further research is therefore needed. In conclusion, we showed the promising activity of TFDC-based immunotherapy in IDH-WT GBM and the association of low HLA-A expression and the absence of CCDC88A, KRT4, TACC2, and TONSL mutations in tumor cells with better prognosis. These findings can inform the selection of patients who will clinically benefit from TFDC-based immunotherapy, maximizing favorable prognosis and cost-effectiveness. Expanding on additional aspects, the correlation between the (immunologic) TME under immune-supportive or immune-suppressive conditions and the effectiveness of TFDC immunotherapy presents an intriguing subject for inquiry.
We will pursue further research to provide proper immune monitoring, which should encompass not only TILs but also TAMs and myeloid-derived suppressor cells.
Postpartum changes in plasma viral load and CD4 percentage among HIV-infected women from Latin American and Caribbean countries: the NISDI Perinatal Study Financial support: NICHD (HHSN267200800001C/N01-DK-8-0001). Corresponding author: victormelo@terra.com.br. Presented in part at the 48th Annual ICAAC/46th Annual IDSA Meeting (Title: Postpartum Changes in Plasma Viremia and CD4 Percent in HIV-Infected Women from Latin American and Caribbean Countries: the NISDI Perinatal Study, Washington DC, October 25-28, 2008). Received 28 June 2010; accepted 15 October 2010. Women infected with human immunodeficiency virus type 1 (HIV) receive antiretrovirals (ARVs) during pregnancy for prophylaxis (PR) [prevention of mother-to-child transmission (MTCT) of HIV] or for the treatment (TR) of their own HIV infection. Concerns have been raised regarding the discontinuation of ARVs after delivery (among women who received ARVs for PR) in light of the results of studies of structured TR interruption among HIV-infected individuals receiving ARVs for TR (Ananworanich et al. 2006, El-Sadr et al. 2006, Ruiz et al. 2007). For example, in the SMART trial, HIV-infected subjects with CD4+ counts above 350 cells/mm³ were randomly assigned to the continuous or episodic receipt of ARV therapy. The latter group only used ARVs when the CD4+ count decreased to below 250 cells/mm³ and had a significantly higher rate of opportunistic infections and all-cause mortality (El-Sadr et al. 2006). The results of studies evaluating changes in the plasma HIV RNA concentration [viral load (VL)], the CD4+ lymphocyte percentage (CD4%) or absolute CD4+ lymphocyte count, or the HIV clinical disease stage in the postpartum (PP) period among HIV-infected women continuing or discontinuing ARVs after pregnancy have shown conflicting results (Cao et al. 1997, Melvin et al. 1997, Watts et al. 2003, Martin et al. 2006, Tungsiripat et al. 2007, Cavallo et al. 2010). To avoid the effect of a higher volume of distribution on absolute CD4+ lymphocyte counts, the CD4% should be used for monitoring T lymphocytes during pregnancy and PP (Miotti et al. 1992, Ekouevi et al. 2007). We analyzed data from the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) International Site Development Initiative (NISDI) Perinatal Study to determine factors associated with a VL increase or a CD4% decrease among HIV-infected women who received ARVs during pregnancy, according to whether or not ARVs were discontinued after delivery.
The goal of this study was to evaluate changes in plasma human immunodeficiency virus (HIV) RNA concentration [viral load (VL)] and CD4+ percentage (CD4%) during 6-12 weeks postpartum (PP) among HIV-infected women and to assess differences according to the reason for receipt of antiretrovirals (ARVs) during pregnancy [prophylaxis (PR) vs. treatment (TR)]. Data from a prospective cohort of HIV-infected pregnant women (National Institute of Child Health and Human Development International Site Development Initiative Perinatal Study) were analyzed. Women experiencing their first pregnancy who received ARVs for PR (started during pregnancy, stopped PP) or for TR (initiated prior to pregnancy and/or continued PP) were included and were followed PP. Increases in plasma VL (≥ 0.5 log10) and decreases in CD4% (≥ 20% relative decrease in CD4%) between hospital discharge (HD) and PP were assessed. Of the 1,229 women enrolled, 1,119 met the inclusion criteria (PR: 601; TR: 518). At enrollment, 87% were asymptomatic. The median CD4% values were: HD [34% (PR); 25% (TR)] and PP [29% (PR); 24% (TR)]. The VL increases were 60% (PR) and 19% (TR) (p < 0.0001). The CD4% decreases were 36% (PR) and 18% (TR) (p < 0.0001). Women receiving PR were more likely to exhibit an increase in VL [adjusted odds ratio (AOR) 7.7 (95% CI: 5.5-10.9)] and a CD4% decrease (AOR 2.3; 95% CI: 1.6-3.2). Women receiving PR are more likely to have VL increases and CD4% decreases compared to those receiving TR. The clinical implications of these VL and CD4% changes remain to be explored. The NISDI Perinatal Study is a prospective cohort study of HIV-infected pregnant women and their infants, conducted at clinical sites in Latin America and in the Caribbean (Read et al. 2007). ARVs for HIV-infected women and children had to be available at each participating site, along with alternatives to breastfeeding. The primary objectives of this observational study included assessing the use of ARVs for the prevention of MTCT and for the woman's own health. Enrollment began in 2002 and was completed in 2007. Women were eligible for enrollment if their pregnancy was confirmed, their HIV infection documented, they intended to deliver at a participating clinical site and to be followed up (along with their children) after delivery or birth, and they were willing and able to provide informed consent. Enrollment had to occur before delivery. Signed informed consent was obtained from all subjects before enrollment into the study. The protocol was approved by the ethical review board at each clinical site, as well as by institutional review boards at the sponsoring institution (NICHD) and at the data management center (Westat). Maternal study visits were conducted during pregnancy, at delivery, at hospital discharge (HD) after delivery and at 6-12 weeks and six months PP. During each of these visits, a medical history was obtained, a physical examination was performed and laboratory samples were obtained (except at the delivery and 6-month PP visits). The virological, immunological and clinical characteristics of the women were assessed during pregnancy, at the time of HD after delivery and at the 6-12 week PP visit. Maternal clinical disease staging was performed at each visit using the US Centers for Disease Control and Prevention classification system (CDC 1992).
Study population and definitions for this analysis - The study population for this analysis was restricted to women enrolled in the NISDI Perinatal Study as of October 2007, with their first pregnancy during the study, who were followed until at least the 6-12 week PP visit, who received ARVs during pregnancy and for whom the reason for receipt of ARVs was known. The receipt of ARVs during pregnancy was categorized as either PR or TR. Women were classified as having received PR if they were not receiving ARVs when they became pregnant but initiated one or more ARVs during pregnancy and discontinued these drugs at or before the 6-12 week PP visit. Otherwise, women were classified as having received TR if they were receiving ARVs when they became pregnant or continued ARVs after the 6-12 week PP visit. The outcome variables of interest were changes between the HD visit and the 6-12 week PP visit, defined as an increase in plasma VL (≥ 0.5 log10 increase) and a decline in CD4% (≥ 20% relative decrease). Other covariates were defined as follows: homemakers, unemployed individuals and students were classified as not gainfully employed outside of the home and all others were classified as gainfully employed outside of the home. A maternal history of substance use during the index pregnancy was ascertained through maternal interview at enrollment. Data collected from a small part of the current cohort (4% of the study population) have been published elsewhere (Cavallo et al. 2010). Statistical analysis - The associations of categorical variables with VL increase and CD4% decline at the 6-12 week visit were evaluated using the Fisher-Freeman-Halton exact test. Variables at least marginally associated with VL increase or CD4% decline (p ≤ 0.20) were considered candidates for multivariable logistic regression modelling. Both stepwise selection and backward elimination strategies were applied to determine whether both selection procedures arrived at the same parsimonious model (using a 5% significance level). Ethics - The protocol was approved by the ethical review board at each clinical site, as well as by institutional review boards at the sponsoring institution (NICHD) and at the data management center (Westat). RESULTS Size and characteristics of the study population - As of October 2007, there were 1,229 women enrolled in the NISDI Perinatal Study. Of these, there were 1,174 first pregnancies in the study. Of the 1,174 pregnant women, three were lost to follow-up between enrollment and the 6-12 week visit. Of the remaining 1,171 women, 1,163 received one or more ARVs during pregnancy. The reason for the receipt of ARVs during pregnancy could not be determined for 44 women. Thus, the study population comprised 1,119 women (601 received ARVs as PR and 518 received ARVs as TR). The characteristics of the study population of 1,119 women, overall and according to the receipt of ARVs for PR or TR, are shown in Table I. Several characteristics varied significantly (p < 0.05) according to the receipt of PR vs. TR, including country of residence, age, education and tobacco use during pregnancy.
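As defined in the Statistical analysis section above, the two outcomes are a ≥ 0.5 log10 rise in VL and a ≥ 20% relative fall in CD4% between HD and the 6-12 week PP visit. A minimal pandas sketch of how these flags could be derived, with hypothetical column names and values (not the NISDI database):

```python
import numpy as np
import pandas as pd

# Hypothetical records; column names and values are illustrative only.
df = pd.DataFrame({
    "vl_hd": [200, 400, 50_000],      # VL (copies/mL) at hospital discharge
    "vl_pp": [7_910, 450, 61_000],    # VL at the 6-12 week PP visit
    "cd4pct_hd": [34.0, 25.0, 14.0],  # CD4% at hospital discharge
    "cd4pct_pp": [26.0, 24.0, 13.5],  # CD4% at 6-12 weeks PP
})

# Outcome 1 - VL increase: a rise of >= 0.5 log10 between HD and PP
df["vl_increase"] = (np.log10(df["vl_pp"]) - np.log10(df["vl_hd"])) >= 0.5

# Outcome 2 - CD4% decline: a relative decrease of >= 20% from the HD value
df["cd4_decline"] = (df["cd4pct_hd"] - df["cd4pct_pp"]) / df["cd4pct_hd"] >= 0.20

print(df[["vl_increase", "cd4_decline"]])
# Row 0 meets both definitions; rows 1 and 2 meet neither.
```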
At enrollment and at HD, a greater proportion of women in the PR group were asymptomatic or only mildly symptomatic (clinical stage A) than women in the TR group. Women in the PR group were more likely than women in the TR group to have received 1-2 nucleoside reverse transcriptase inhibitors (NRTIs) as their most complex ARV regimen of ≥ 28 days during pregnancy, or to have received ARVs for < 28 days during pregnancy. Finally, at 6-12 weeks PP, women in the PR group were more likely to be at clinical stage A than women in the TR group (Table I). A greater proportion of women in the PR group had a CD4% ≥ 29% at enrollment (63.6%) and at HD (74.2%). At 6-12 weeks PP, they were more likely to have a CD4% ≥ 29% (52.7%) than women in the TR group. The percentage of women with VL < 1,000 copies/mL at enrollment and at HD did not differ between the two groups. However, women in the TR group were more likely to have a VL < 1,000 copies/mL at 6-12 weeks PP (67.4%; p < 0.0001) (Table II). The median CD4% at HD was 35% (range 6-67%) in the PR group and 26% (1-62%) in the TR group. At 6-12 weeks PP, the median CD4% was 30% (3-69%) in the PR group and 25% (1-53%) in the TR group. The median VL at HD was 200 copies/mL for both groups. However, at 6-12 weeks PP, the median VL was 200 copies/mL for the TR group, compared to 7,910 copies/mL for the PR group. VL increase and CD4% decline - Univariate analyses assessing associations with an increase in VL between HD and the 6-12 week PP visit were performed. Several factors were marginally associated (p ≤ 0.20) with a VL increase and were included in logistic regression analyses, including the receipt of ARVs for PR (p < 0.0001), country of residence (p < 0.0001), age (years) (p = 0.004), education (years) (p = 0.03) and alcohol (p = 0.12) and cocaine (p = 0.08) use during pregnancy. Other variables associated with a VL increase were CD4% (p < 0.0001), VL (copies/mL) (p < 0.0001) and HIV clinical disease stage (p < 0.0001) before the 6-12 week PP visit. Specifically, those with a lower CD4% (at enrollment and at HD) were less likely to have a VL increase, those with higher VL values (at enrollment and at HD) were less likely to have a VL increase and those with a more advanced HIV clinical disease stage (at enrollment and at HD) were less likely to have a VL increase. Multivariable modelling incorporating all of the above variables resulted in a final model including four variables (Table III): reason for the receipt of ARVs during pregnancy, country of residence, VL at HD and clinical disease stage at HD. Country of residence was retained in the model to control for site variability. Those who received ARV PR during pregnancy had an almost eight-fold higher risk of a VL increase at 6-12 weeks PP compared to those who received ARVs for TR [adjusted odds ratio (AOR) = 7.7; 95% CI 5.5-10.9]. Women with VLs ≥ 1,000 copies/mL at HD were significantly less likely to experience a VL increase at 6-12 weeks compared to those with VL < 1,000 copies/mL. Finally, women with more advanced clinical HIV disease at HD were significantly less likely to have a VL increase than those who were asymptomatic (Table III).
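The AORs in Table III are the exponentiated coefficients of a multivariable logistic model. A minimal sketch of how such an AOR and its 95% CI could be computed, assuming the Python statsmodels package and simulated data (the study itself applied stepwise and backward selection, not shown here):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1119  # same size as the analytic cohort; the data here are simulated
df = pd.DataFrame({
    "vl_increase":   rng.integers(0, 2, n),           # outcome (0/1)
    "arv_reason":    rng.choice(["TR", "PR"], n),     # PR vs. TR in pregnancy
    "vl_hd_ge_1000": rng.integers(0, 2, n),           # VL >= 1,000 copies/mL at HD
    "country":       rng.choice(["A", "B", "C"], n),  # retained for site control
})

# Logistic model; 'TR' is the reference level, so exp(coef) for PR is the AOR
model = smf.logit(
    "vl_increase ~ C(arv_reason, Treatment('TR')) + vl_hd_ge_1000 + C(country)",
    data=df,
).fit(disp=False)

summary = pd.concat(
    [np.exp(model.params).rename("AOR"), np.exp(model.conf_int())], axis=1
)
summary.columns = ["AOR", "2.5%", "97.5%"]
print(summary)
```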
Univariate analyses assessing associations with CD4% decline between HD and the 6-12 week PP visit were also performed. Variables marginally associated (p ≤ 0.20) with a CD4% decline that were incorporated into logistic regression analyses included crowding (number of persons living in the household) (p = 0.11), alcohol (p = 0.12) and marijuana (p = 0.12) use during pregnancy and the receipt of ARVs during pregnancy for PR (p < 0.0001). Those with CD4% < 14% at enrollment (p = 0.04) and at HD (p < 0.0001) were less likely to have a CD4% decline. Similarly, those with more advanced HIV clinical disease at enrollment (p = 0.0046) and at HD (p = 0.0049) were less likely to have a CD4% decline. Multivariable modelling incorporating all of the above variables resulted in a final model including the reason for the receipt of ARVs during pregnancy, country of residence and CD4% at HD (Table IV). As before, country of residence was retained in the model to control for site variability. Those who received ARV PR during pregnancy had a higher risk of a CD4% decline at 6-12 weeks PP compared to those who received ARVs for TR (AOR = 2.3; 95% CI 1.6-3.2). Those with a CD4% of 14-28% at HD were less likely to have a CD4% decline at 6-12 weeks PP compared to those with a CD4% ≥ 29% [AOR = 0.60 (95% CI 0.4-0.8)] (Table IV). DISCUSSION In this study of a relatively healthy population of HIV-infected women [at enrollment, approximately half had a CD4% ≥ 29%, almost 60% had a VL below 1,000 copies/mL and approximately 87% were asymptomatic (clinical stage A)], there was little clinical progression over time (by 6-12 weeks PP, 86% of women remained at clinical stage A). Although there were changes in CD4% and VL values over time (60% of women who received ARVs during pregnancy for PR had a VL increase and 40% had a CD4% decline during the first few weeks PP vs. 19% and 18%, respectively, of women who received ARVs during pregnancy for TR), most women who received ARVs for PR maintained a CD4% ≥ 29% (74% at HD and 53% at 6-12 weeks PP). Eighty-five percent of women who received ARVs for PR during pregnancy had a VL < 1,000 copies/mL at HD. However, only 24% of these women had a VL < 1,000 copies/mL at 6-12 weeks PP. The interruption of ARV PR after delivery is not completely analogous to structured ARV interruption (Ananworanich et al. 2006, Danel et al. 2006, El-Sadr et al. 2006, Pai et al. 2006, Ruiz et al. 2007). With the use of ARVs for the prevention of the MTCT of HIV, assuming the woman does not meet criteria for ARV TR, maternal ARVs are discontinued after the delivery of the infant. With the interruption of ARV TR, structured according to VL and CD4+ count results, patients whose immunological status suggests a low risk of disease progression either continued therapy or interrupted it until the CD4+ count reached a certain threshold. Studies generally have demonstrated worse outcomes for those with structured TR interruptions. Previous studies have evaluated changes in the absolute CD4+ count (Read et al. 2007) or percentage (Watts et al. 2003) among HIV-infected women who did or did not discontinue ARVs after delivery. As noted previously, the CD4% should be used to monitor T lymphocytes during pregnancy and PP (Miotti et al. 1992, Ekouevi et al. 2007). In a secondary analysis of the PACTG 185 clinical trial database (Watts et al. 2003), women who continued ARV therapy after delivery were compared to those who discontinued therapy.
All enrolled women had CD4+ lymphocyte counts below 500 cells/mm³ at enrollment. Most (86%) received zidovudine (ZDV) alone, while 14% received two NRTIs during pregnancy. Changes in CD4+ percentages between delivery and 18 months PP did not differ significantly according to whether ZDV monotherapy was continued or discontinued following delivery. Similarly, other studies have assessed VL changes among HIV-infected women according to the continuation or discontinuation of ARVs after delivery (Cao et al. 1997, Melvin et al. 1997, Watts et al. 2003, Martin et al. 2006, Tungsiripat et al. 2007). Some studies have suggested no change in VL PP, although others indicated VL increases. First, in a retrospective study of HIV-infected women who discontinued ARVs at delivery (Tungsiripat et al. 2007), median VLs at 12-96 weeks PP were similar to values obtained during pregnancy. In another study of 44 HIV-infected pregnant women, 23 women initiated ZDV therapy during pregnancy, 17 women did not use ARVs and four women used ZDV before and during pregnancy (Melvin et al. 1997). Overall, VLs remained stable until six weeks PP, but there was a trend toward an increase in VL values PP among those women who received ZDV therapy only during pregnancy. However, in the Ariel Project (Cao et al. 1997), in which most women (85%) used ZDV during pregnancy for the prevention of MTCT and only a minority of women received it before pregnancy and continued after delivery (TR), a significant VL increase was noted at two and six months PP. Among women enrolled in the PACTG 185 clinical trial (Watts et al. 2003), increases in VLs from delivery to 12 weeks PP were observed, but VL changes were similar among women continuing or discontinuing therapy after delivery. In a study of HIV-infected pregnant women who received ZDV alone, combination ARV therapy or combination ARV PR (Martin et al. 2006), follow-up continued for an average of 33 months after delivery. At the end of follow-up, the median VL was higher in the ARV PR group (3.5 log copies/mL) compared to the TR group (1.7 log copies/mL). However, at the last follow-up, the proportion of women on combination ARV therapy with VLs < 50 copies/mL did not differ significantly according to whether ARV TR or PR was used during pregnancy (78% and 79%, respectively). Finally, in a recent study comprising 112 HIV-infected pregnant women treated with potent ARVs (60 taking ARVs for PR and 52 for TR), VL rebound affected women much more often in the PR than in the TR group (84.7% vs. 15.3%; p < 0.001) six months after delivery and was associated with ARV discontinuation. In addition, there was a greater decline in CD4 cell percentage among women in the PR group at this time (Cavallo et al. 2010). We believe that these apparently conflicting results in the literature regarding the interruption of ARVs after delivery and VL rebound and/or CD4% decline may be related to the increased anti-HIV activity of current ARV combinations, predominantly triple regimens, which were offered to most of our pregnant women, in comparison to the mono or dual therapy used by women in older studies, which was not as efficient at reducing VL and improving the CD4 count.
PP increases in VLs have been attributed to physiological changes occurring during pregnancy, such as blood volume expansion and elevated levels of estrogen and progesterone, that do not persist after delivery. However, in our study, we did observe a difference in VLs PP according to whether ARVs were continued or discontinued following delivery (60% of women who received ARVs during pregnancy for PR had a VL increase). The clinical implications of a VL increase in the PP period could include an increased risk of the development of viral resistance, possibly related to decreased adherence to ARVs in the PP period (Bardeguez et al. 2008). A recent analysis of data from the NISDI Perinatal Study included a predominantly asymptomatic population of HIV-infected pregnant women who were diagnosed with HIV infection during pregnancy and, therefore, only initiated ARVs during the index pregnancy (Duran et al. 2007). In this analysis, 14% of women had ARV resistance mutations at 6-12 weeks PP. The occurrence of resistance mutations was not associated with clinical, immunological or virological disease stage at the time points of comparison (enrollment and 6-12 weeks PP), nor with the most complex ARV regimens received during pregnancy. No significant association was found between VL and resistance mutations in this population of pregnant women. A major strength of this study is the large size of the cohort, with enrollment at multiple sites. Also, enrollment occurred at clinical sites in six Latin American and Caribbean countries, and the "country of residence" variable was retained in the multivariable model to account for heterogeneity attributable to differences in the study populations and practices. The period of PP follow-up was relatively short, which prevented the analysis of HIV disease progression after 6-12 weeks PP. Adherence to ARVs was not assessed as part of the protocol. However, the protocol has been modified to incorporate a longer PP follow-up (up to 5 years) and assessments of adherence. The clinical implications of the observed VL and CD4% changes remain to be explored. However, it is important to emphasize that clinical disease stage remained quite stable throughout the duration of follow-up. Similarly, most women in the PR group maintained a CD4% ≥ 29% throughout the duration of follow-up. Whether the observed early increase in VL among those who used ARVs for the prevention of MTCT affects the risk of developing ARV-resistant viral mutations is an important question to assess, particularly in terms of the response to future ARV TR regimens. TABLE I Characteristics of the study population, overall and according to reason for receipt of antiretrovirals (ARV) during pregnancy. NRTI: nucleoside reverse transcriptase inhibitor; PI: protease inhibitor. TABLE III Risk of viral load (VL) increase [unadjusted (OR) and adjusted odds ratios (AOR)]. a: p value for unadjusted OR using the Fisher-Freeman-Halton exact test for assessment of the association of each characteristic with the outcome; CI: confidence interval.