id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
247530336 | pes2o/s2orc | v3-fos-license | Rs868058 in the Homeobox Gene HLX Contributes to Early-Onset Fetal Growth Restriction
Simple Summary: Fetuses with hypotrophy (FGR, fetal growth restriction) are too small for their gestational age and may be prone to various diseases and loss of life. This study aimed to determine the role of single nucleotide polymorphisms (SNPs), located in two homeotic and two angiogenesis-related genes, in the occurrence of FGR, by analyzing blood samples from 380 women with singleton pregnancies. We found that the AT heterozygotes in HLX rs868058 were significantly associated with an approximately two-fold increased risk of FGR diagnosed before 32 weeks of gestation (early-onset FGR). AT heterozygotes were significantly more frequent in women with early-onset FGR than in those with late-onset FGR (diagnosed from 32 weeks of gestation) and compared with healthy subjects. In conclusion, the AT genotype in HLX rs868058 may be a significant risk factor for the development of early-onset FGR. So far, the only therapeutic strategy for the management of early-onset FGR is to monitor and terminate pregnancy when the risk of fetal immaturity is lower than the risk of intrauterine death. Therefore, disclosing the mechanisms of action of the heterozygous AT state in HLX rs868058 would be important to identify plausible targets for new therapeutic approaches to treat the condition.

Abstract: Fetal growth restriction (FGR) is a condition that characterizes fetuses as too small for their gestational age, with an estimated fetal weight (EFW) below the 10th percentile and abnormal Doppler parameters, and/or with EFW below the 3rd percentile. We designed our study to demonstrate the contribution of single nucleotide polymorphisms (SNPs) from the DLX3 (rs11656951, rs2278163, and rs10459948), HLX (rs2184658 and rs868058), ANGPT2 (−35 G > C), and ITGAV (rs3911238 and rs3768777) genes in maternal blood to FGR. A cohort of 380 women with singleton pregnancies consisted of 190 pregnancies with FGR and 190 healthy full-term controls. A comparison of the pregnancies with early-onset FGR and healthy subjects showed that the AT heterozygotes in HLX rs868058 were significantly associated with an approximately two-fold increase in disease risk (p ≤ 0.050). The AT heterozygotes in rs868058 were significantly more frequent in the cases with early-onset FGR than in late-onset FGR in the overdominant model (OR 2.08, 95% CI 1.11–3.89, p = 0.022) and, after adjustment for anemia, in the codominant model (OR 2.45, 95% CI 1.23–4.90, p = 0.034). In conclusion, the heterozygous AT genotype in HLX rs868058 can be considered a significant risk factor for the development of early-onset FGR, regardless of adverse pregnancy outcomes in women.
Introduction
Fetal growth restriction (FGR, fetal hypotrophy) is a condition that characterizes fetuses as too small for their gestational age, with an estimated fetal weight (EFW) below the 10th percentile and abnormal Doppler parameters and/or with EFW below the 3rd percentile; it is diagnosed in approximately 3-10% of all pregnancies [1,2]. FGR results from impaired genetic growth potential due to a pathological process of various etiologies, which leads to hypoxia and malnutrition of the fetus, imposing a serious threat to its health and life [3]. An FGR-affected newborn may be unable to maintain normal body temperature and may present respiratory distress, hypo- or hyperglycemia, susceptibility to infections, as well as cognitive delays and neurological and psychiatric disorders in childhood [4][5][6].
Homeotic (homeobox) genes are the most important transcription factors that play a fundamental role in body structure pattern formation [7]. Several homeotic genes have been reported to be involved in placenta and embryo development [2,7]. In mouse models, the disrupted functions of distal-less homeobox 3 (Dlx3), extraembryonic, spermatogenesis, homeobox 1 (Esx1) and TGFB-induced factor homeobox 1 (Tgif1) genes resulted in an abnormal placenta development and embryonic hypotrophy [8,9]. In women with idiopathic FGR, significant differences were noted in the expression of the DLX3 and H2.0-like homeobox (HLX) genes, as well as in HLX downstream targets, compared to healthy controls without fetal hypotrophy [10][11][12]. Reduced expressions of HLX and ESX homeobox 1 (ESX1L) genes were found in a placenta with idiopathic FGR compared to those from control pregnancies [11]. On the other hand, in case of the DLX3, DLX4 and TGIF1 genes, an increased expression was shown in the FGR-affected placenta [7,13]. It is worth mentioning that TGIF1 has been shown to participate in the regulation of the expression of several genes, including angiopoietin 2 (ANGPT2) and the integrin subunit alpha V (ITGAV) involved in angiogenesis [14]. In the case of the DLX3 gene, its contribution has also been confirmed in the regulation of the expression of the peroxisome proliferator activated receptor gamma (PPARγ) transcription factor, involved in placenta development, trophoblast differentiation, and the occurrence of FGR [3,15].
Considering the genetic changes localized in homeotic genes previously related to FGR, today's literature indicates single nucleotide polymorphisms (SNPs), both from DLX3 and HLX genes [16][17][18]. For the DLX3 gene, a certain association was demonstrated between the rs2278163 polymorphism and the occurrence of dental caries in children with higher loads of Streptococcus mutans and Streptococcus sobrinus [18]. Additionally, a weak correlation was found between the incidence of alleles in DLX3 rs10459948, localized near rs2278163, and a susceptibility to dental caries in children with higher loads of Streptococcus mutans [18]. In the case of rs2278163, some involvement was also demonstrated in the susceptibility to molar-incisor hypomineralization [19]. With regard to the HLX gene, several polymorphisms were associated with the clinical course of Graves' disease (GD), the expression and secretion of type 1/2 T helper (Th1/Th2) cell line cytokines in neonates after birth, the onset of asthma in children, and the development of treatment-dependent acute myeloid leukemia [16,17,20,21]. In the case of HLX rs3806325 and rs2184648 polymorphisms, the presence of allelic variants −1407 T and 2742 G, respectively, was significantly associated with a reduced HLX promoter transactivation, which was accompanied by an almost complete decrease in the binding of the specificity protein-transcription factors to that region [21]. Regarding the angiogenesis-related genes, the ANGPT2 −35 G > C polymorphism was recently shown to be significantly correlated with high C-reactive protein levels and severity scores in patients with sepsis [22]. Among the ITGAV polymorphisms, both rs3911238 and rs3768777 were associated with rheumatoid arthritis, while rs3768777 was also correlated with a severe progression of primary biliary cirrhosis [23][24][25].
No studies have previously been performed to determine the possible role of SNPs from the DLX3 and HLX homeotic genes, as well as from the angiogenesis-related ANGPT2 and ITGAV genes, in the pathogenesis of FGR. Therefore, we designed a case-control genetic association study to demonstrate the contribution of DLX3 (rs11656951, rs2278163, rs10459948), HLX (rs2184658 and rs868058), ANGPT2 (−35 G > C), and ITGAV (rs3911238 and rs3768777) polymorphisms in the occurrence of FGR.
Characteristics of Pregnant Women
A cohort of 380 women with singleton pregnancies included in our study consisted of 190 individuals with FGR and 190 healthy full-term (from 37 to 42 weeks) controls (see Table 1), being inpatients of the Department of Perinatology, Obstetrics and Gynecology, as well as of the Department of Obstetrics and Gynecology of the Polish Mother's Memorial Hospital-Research Institute (PMMH-RI), in Lodz, Poland. Maternal blood samples were prospectively collected from all the pregnant women on admission during the period from August 2016 to March 2021. Among the diagnosed FGR cases, 58 (30.5%) women were characterized as early-onset FGRs, with the diagnosis obtained between the 18th and the 32nd week of gestation, and 129 (67.9%) were late-onset FGRs, where the diagnosis was obtained from the 32nd week to the 40th week of pregnancy (see Table S1). The women with FGR and early-onset FGR were 15 to 43 years old, while the control group was 19 to 43 years old. The patients with late-onset FGR ranged from 16 to 43 years of age. Early-onset preeclampsia (PE) was diagnosed in 7 (12.1%) women with early-onset FGR. The control group included women after 37 weeks of pregnancy and without FGR, admitted to the department for delivery. The exclusion criteria from the study included: multiple pregnancy, congenital anomalies, genetic syndrome, structural uterine defects, endometriosis, two-vessel umbilical cord, and fetal abnormalities. FGR was diagnosed by ultrasound when EFW was below the 10th percentile in relation to the gestational age and Doppler abnormalities were found, and/or EFW was below the 3rd percentile. The Fetal Medicine Barcelona (FMB) calculator [26] was used to assess percentiles based on ultrasound EFW, determined from biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), and humerus length (HL), as well as umbilical artery (UA) and middle cerebral artery (MCA) pulsation indices, UA diastolic flow, and uterine artery (Ut.A) flows, estimated by Doppler ultrasound.
See Table 1 and Table S1 for detailed data on the number of pregnancies and the occurrence of certain pregnancy disorders among the studied pregnant women, including anemia, asthma and respiratory infections, bleeding, diabetes mellitus (DM), hypothyroidism, threatened miscarriage, thrombocytopenia and urogenital infections. The activated partial thromboplastin time (APTT) and platelet (PLT) parameters, including PLT count, platelet distribution width (PDW), mean platelet volume (MPV) and plateletcrit (PCT), are also presented for the women enrolled into the study. EFW was below the 10th percentile in FGR cases, while among the controls, it was between the 11th and the 100th percentiles, estimated by the FMB calculator. The study was approved by the Research Ethics Committee at the PMMH-RI (approval numbers 31/2018 and 13/2019). Informed consent forms were signed by all the invited pregnant women, as recommended by the Research Ethics Committee.
Blood Sample Collection and Analysis
Peripheral venous blood samples were collected on admission from each pregnant woman enrolled into the study, both for diagnostic and research purposes, being anonymized in the latter instance. 9NC/1.4 mL coagulation tubes were used to evaluate APTT using the HemosIL APTT-SP reagent on an ACL TOP 550 CTS automated system (Instrumentation Laboratory, Werfen Company, Bedford, MA, USA). The APTT reference range was 23 to 36.9 s, according to the manufacturer. EDTA KE/1.2 mL tubes were used for complete blood count (CBC) and DNA extraction. PLT parameters were estimated as a part of the CBC complex using the Fluorocell PLT reagent on a Sysmex XN-2000 Automated Hematology System (Sysmex, Kobe, Japan). The PLT count was referenced between 150 × 10⁹/L and 400 × 10⁹/L, and the MPV was normal from 8.0 to 10.0 fL, as reported by the manufacturer (Sysmex, Kobe, Japan).
Total DNA was purified from 200 µL of whole-blood samples using a Syngen Blood/Cell DNA Mini Kit (Syngen Biotech, Wroclaw, Poland). The obtained DNA was eluted from a mini spin column in 100 µL of DE buffer and stored at −20 °C until further analysis.

PCR-RFLP Assays for ANGPT2, HLX, and ITGAV Polymorphisms

ANGPT2 −35 G > C, HLX rs2184658 and rs868058, as well as ITGAV rs3911238 and rs3768777 polymorphisms, were assayed by PCR-RFLP, as previously described [17,24,[27][28][29]. The European minor allele frequencies (MAFs) for SNPs, localized on the HLX and ITGAV genes, were >10.0%, according to the NCBI Allele Frequency Aggregator (ALFA) project. The primer sequences and PCR-RFLP assay parameters are presented in Table S2. Briefly, PCR mixtures contained up to 0.5 µg of purified DNA, 0.2 mM dNTPs mix, 0.4 µM of each SNP-specific primer, 1 × polymerase B buffer, and 0.5 U of Perpetual Taq DNA Polymerase (EURx, Gdańsk, Poland). The PCR program included initial denaturation at 95 °C for 3 min, 40 cycles of denaturation at 95 °C for 30 s, annealing at 52.7-61.5 °C, depending on polymorphism, for 40 s, an extension at 72 °C for 1 min and a final extension at 72 °C for 7 min. The PCR products were digested with 10 U of the appropriate endonuclease at defined temperatures for 16 h. PCR and restriction digestions were performed on a T100 Thermal Cycler (Bio-Rad, Singapore). The PCR and RFLP products were separated in 1.0-3.4% agarose gels (see Figure 1), prepared in 1 × TAE buffer, depending on the length of analyzed DNA fragments, and visualized in a ChemiDoc XRS+ imaging system (Bio-Rad, Hercules, CA, USA).
Sanger Sequencing for the SNPs Localized on DLX3 Gene
The primer sequences used for genotyping of the three SNPs, localized on DLX3 gene, i.e., rs11656951, rs2278163 and rs10459948, were self-designed using the PerlPrimer v1.1.21 software, and their specificity was checked by the Primer-BLAST tool [30]. The European MAFs for the tested DLX3 SNPs were >5.0%, according to the NCBI ALFA project. For rs11656951, PCR products of 331 bp length were obtained using the following primers: For: 5′-CCCACCTTAGACCATCTCTTTCC-3′ and Rev: 5′-CTCTCGCTCCTATGCTCTCC-3′ (primer concentration: 0.4 µM, annealing at 58 °C, 40 cycles). For rs2278163 and rs10459948, 337 bp PCR products were obtained with the primers: For: 5′-CATTTGATTGTGGCTTGGGAC-3′ and Rev: 5′-GTGACAGAAGACTCGGGCAG-3′ (primer concentration: 0.3 µM, annealing at 64 °C, 36 cycles). PCR was performed using Perpetual Taq DNA Polymerase (EURx, Gdańsk, Poland), similarly to the PCR-RFLP assays described in this study. PCR products were verified in 1.0% agarose gels and purified using the ExoSAP-IT™ PCR Product Cleanup Reagent (Thermo Fisher Scientific, Waltham, MA, USA). Sanger sequencing was performed with forward primers, using the BigDye™ Terminator v3.1 Cycle Sequencing Kit (Thermo Fisher Scientific), on a 3500 Genetic Analyzer (Thermo Fisher Scientific). Chromatograms (see Figures 2 and 3) were analyzed using the Sequence Scanner 1.0 software (Applied Biosystems, Waltham, MA, USA).
Statistical Analysis
Descriptive statistics of the pregnant women were performed using NCSS 2004 software. Pearson's chi-square test was used to determine differences between the studied groups of pregnant women in the number of pregnancies, the occurrence of pregnancy disorders, the methods of delivery, and the fetal sex. In order to compare the women in terms of age, APTT, PLT parameters, gestational age at delivery, as well as birth weight and Apgar scores of their offspring, Student's t-test or Mann-Whitney U test were performed, depending on the normality level of the examined variables. The Hardy-Weinberg equilibrium and the frequencies of alleles and genotypes in the ANGPT2, DLX3, HLX, and ITGAV gene polymorphisms were determined using SNPStats software [31]. Relationships between the genotypes in the tested SNPs and the occurrence of FGR, as well as the models of inheritance, were estimated using logistic regression analyses. The distribution of alleles from the studied polymorphisms between the pregnant groups was determined using Pearson's chi-square test. The results were considered statistically significant at the significance level of p ≤ 0.050.
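To make the association workflow described above concrete, the following is a minimal, illustrative Python sketch (not the authors' NCSS/SNPStats code) of two core steps: a Pearson chi-square test on allele counts and a logistic regression estimating an odds ratio under an overdominant (heterozygote vs. both homozygotes) model. All counts in the example are hypothetical.

```python
# Illustrative genetic-association sketch; counts are hypothetical, not study data.
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Hypothetical allele counts: rows = cases/controls, columns = allele A/T
allele_table = np.array([[210, 170],
                         [230, 150]])
chi2, p_allele, _, _ = chi2_contingency(allele_table, correction=False)
print(f"Allele distribution: chi2={chi2:.2f}, p={p_allele:.3f}")

# Hypothetical per-subject data for an overdominant model (heterozygote = 1)
het = np.array([1] * 30 + [0] * 28 + [1] * 60 + [0] * 130)   # AT carriers
case = np.array([1] * 58 + [0] * 190)                         # cases vs. controls
X = sm.add_constant(het)                                      # intercept + genotype term
fit = sm.Logit(case, X).fit(disp=False)
or_ = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"Overdominant OR={or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```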
Females and Offspring with FGR and Healthy Controls
The pregnant women in the study groups were of a similar age (see Table 1 and Table S1). Among FGR cases, multiparity occurred much more frequently compared to healthy controls (p = 0.021). Considering pregnancy disorders, anemia, threatened miscarriage, and thrombocytopenia were significantly more frequent in women with FGR than in healthy subjects (p ≤ 0.050). Similarly, anemia was more common in the women with early-onset FGR compared to those with late-onset disease (p = 0.001). The APTT and PLT parameters reached similar values in the studied groups. A lower gestational age at delivery was observed among the women with FGR compared to healthy ones, and in the women with early-onset FGR than in those with late-onset FGR (p ≤ 0.001). Among the FGR cases, caesarean section was significantly more frequent, relative to vaginal delivery, than among the healthy controls (p ≤ 0.050). The fetal sex of the offspring had a similar distribution between the studied groups of women. Neonatal birth weight and Apgar scores at 1 and 5 min were significantly lower in the FGR cases than in healthy controls, and in early-onset FGR compared to the late-onset disease (p ≤ 0.001).
Hardy-Weinberg Equilibrium
The Hardy-Weinberg (H-W) equilibrium was maintained for the genotypes from the DLX3, HLX and ITGAV polymorphisms in all the groups of pregnant women (p > 0.050). For the ANGPT2 −35 G > C SNP, H-W equilibrium was maintained in the FGR cases (p > 0.050) but not in the healthy controls (p = 0.027).
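As an illustration of the Hardy-Weinberg check reported above, the short Python sketch below computes expected genotype counts from the observed allele frequency and applies a 1-degree-of-freedom chi-square test; the genotype counts and the helper function name are ours, not part of SNPStats.

```python
# Hardy-Weinberg equilibrium check for one biallelic SNP (hypothetical counts).
from scipy.stats import chi2

def hwe_chi2(obs_aa, obs_ab, obs_bb):
    """Return the chi-square statistic and p-value (1 df) for HWE."""
    n = obs_aa + obs_ab + obs_bb
    p = (2 * obs_aa + obs_ab) / (2 * n)          # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [obs_aa, obs_ab, obs_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

stat, pval = hwe_chi2(70, 90, 30)  # hypothetical AA/AB/BB counts
print(f"HWE chi2={stat:.2f}, p={pval:.3f}")
```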
Genetic Alterations from ANGPT2, DLX3, HLX, and ITGAV Polymorphisms
The genotypes, defined in ANGPT2 −35 G > C, DLX3 rs11656951, rs2278163, rs10459948, HLX rs2184658 and rs868058, as well as in ITGAV rs3911238 and rs3768777, were similarly distributed among the women with FGR and the healthy controls (see Table S3). A comparison of the women with early-onset FGR and the healthy subjects showed that the AT heterozygotes in HLX rs868058 were significantly associated with an approximately two-fold increase in disease risk in the codominant (OR 2.18, 95% CI 1.16-4.09, p = 0.045, see Table 2) and overdominant models (OR 2.11, 95% CI 1.16-3.83, p = 0.014). The relationship was also significant in the overdominant model (OR 1.99, 95% CI 1.07-3.70, p = 0.030) after the cases large for their gestational age (54/190, 28.4%) were excluded from the control group. The association remained significant in both the codominant (OR 2.21, 95% CI 1.08-4.53, p = 0.028) and overdominant models (OR 2.42, 95% CI 1.21-4.82, p = 0.011) after adjustment for anemia. Similarly, the AT heterozygotes in rs868058 were significantly more frequent in the women with early-onset FGR than in those with the late-onset disease in the overdominant model (OR 2.08, 95% CI 1.11-3.89, p = 0.022, see Table 3). The results also remained significant in the codominant and/or overdominant models when corrected for pregnancy disorders including anemia, asthma and respiratory infections, bleeding, DM, hypothyroidism, miscarriage risk, thrombocytopenia, and urogenital infections (see Table 4 and Table S4). After adjustment for anemia, the AT heterozygotes in HLX rs868058 were found significantly more often in the women with early-onset FGR, compared to those with late-onset disease, also in the codominant model (OR 2.45, 95% CI 1.23-4.90, p = 0.034). The alleles, localized in all the analyzed SNPs, had a similar distribution pattern among the studied groups of pregnant women (see Tables S5 and S6).
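For readers who want to see how an odds ratio of this kind arises from raw genotype counts, the following is a small illustrative Python calculation (Woolf logit method for the confidence interval); the counts are hypothetical and are not the study data from Tables 2-4.

```python
# Odds ratio and 95% CI under an overdominant model (AT vs. AA+TT), hypothetical counts.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=30, b=28, c=60, d=130)  # hypothetical AT carriers vs. non-carriers
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```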
Study Size Calculation
Based on the allele prevalence rates determined for all the polymorphisms analyzed in our populations of women with FGR and healthy controls, a minimum necessary sample size should have been 184 subjects, with a 95% confidence level and a 5% margin of error. Considering the analyses performed for the women with early-onset FGR and healthy controls, the number of enrolled individuals should have been at least 147, while for comparisons of early-onset FGR and late-onset disease, the minimum sample size should have been 123 women. All those results were obtained with respect to the allele frequencies, found for ITGAV rs3768777.
Taking into account the European MAFs, provided for SNPs from the DLX3, HLX, and ITGAV genes, as part of the NCBI ALFA project, we calculated that at least 338 pregnant women should have been included in our study. That minimum sample size was obtained for both ITGAV rs3768777 and HLX rs868058.
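The sample-size figures above follow the usual formula for estimating a proportion (here, an allele frequency) with a given confidence level and margin of error; a minimal Python sketch of that calculation is shown below, using an illustrative MAF rather than the exact rs3768777 or rs868058 frequencies reported in the study.

```python
# Minimum sample size for estimating an allele frequency (Cochran's formula).
import math

def min_sample_size(maf, confidence_z=1.96, margin=0.05):
    """n = z^2 * p * (1 - p) / e^2, for a 95% confidence level and 5% margin by default."""
    return math.ceil(confidence_z ** 2 * maf * (1 - maf) / margin ** 2)

print(min_sample_size(0.30))  # an illustrative MAF of 0.30 gives n = 323
```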
Discussion
The reported study showed that the AT heterozygotes for HLX rs868058 had contributed to an approximately two-fold increase in the risk of early-onset FGR in Caucasian pregnant women. It was previously determined that the homeobox gene HLX was mainly expressed in proliferating cytotrophoblastic cells during early placenta development [32]. It was then suggested that decreased HLX levels were necessary for cytotrophoblast differentiation, while altered gene expression could be associated with placental pathologies [32]. In term placental explants and the BeWo trophoblast cell line, lowered HLX expression was correlated with a knockdown of the insulin-like growth factor 2 receptor (IGF2R) involved in the regulation of villous trophoblast survival and apoptosis [33]. Noteworthy was the significantly reduced expression of both HLX and ESX1L, noted from the 27th week of pregnancy, which could be associated with a declined growth rate of the fetus, observed in the third trimester [34,35]. A decreased expression of HLX mRNA and protein was found in the placentas from the pregnancies with idiopathic FGR [11,12]. Similarly, a significantly lowered expression of the HLX gene at both mRNA and protein levels was determined in the placenta of FGR twins compared to the normal control cases [36]. HLX was suggested to have been involved in abnormal placenta development, found in discordant twin pregnancies [36].
Regarding the rs868058 polymorphism of the HLX gene, it was only investigated in relation to the development of asthma in children, but no relationship was found [21]. Our results, as reported here, were the first outcomes to suggest a possible clinical relevance of the presented SNP. Rs868058 is located in the third intron of the gene and is included in the non-coding transcript exon variant HLX-203 (transcript ID: ENST0000054919.2), according to the Ensembl genome browser. Moreover, rs868058 is located in the regulatory region ENSR00000020421, typed as a promoter, which-among others-contributes to trophoblast suppression and stable placenta. Regarding the ENSR00000020421 regulatory function, the T allele in rs868058 is indicated as a regulatory region variant, in line with the Ensembl browser. Based on the results of our study, AT heterozygotes appear to be involved in a deregulation process of the trophoblast function, resulting in placental imbalance and followed by early-onset FGR. For other SNPs localized in HLX, the association of some polymorphisms with GD, childhood asthma, and therapy-related acute myeloid leukemia (t-AML) was previously confirmed [17,20,21,37]. The G allele in rs2184658, associated with decreased HLX expression was more frequently identified in patients with intractable GD, compared to those with GD in remission [17]. We found no relationship in our study between this SNP and FGR. An association with two tagging SNPs, rs3806325 and rs12141189, representing seven HLX polymorphisms, was reported in childhood asthma [21]. In turn, an over three-fold increased risk of t-AML was found in carriers of the CT genotype or at least one polymorphic T allele in the rs2738756 polymorphism [20].
The genetic results of our study also remained significant when corrected for pregnancy disorders, including anemia, asthma and respiratory infections, bleeding, DM, hypothyroidism, threatened miscarriage, thrombocytopenia, and urogenital infections. Previously, maternal anemia was correlated with, among other things, preterm birth (PTB), low birth weight, perinatal mortality, and maternal death [38]. The incidence of gestational iron deficiency anemia was reported to be significantly higher in women with FGR, as well as in PTB [39]. Similarly, in our study, maternal anemia was significantly more common in the FGR cases, and especially in those with early-onset disease, compared to the healthy controls.
Considering pregnant women with moderate or severe asthma, a higher incidence of spontaneous abortion, fetal structural anomalies, PTB, PE, FGR, oligohydramnios, gestational diabetes, and intrauterine fetal death was recently confirmed [40]. In pregnant women with symptomatic asthma, maternal hypoxia was suggested as a possible mechanism of FGR [41]. In our study, asthma and respiratory infections had a similar distribution pattern among the study groups, although the disorder was more frequent in cases of early-onset FGR compared to healthy controls. Conversely, decidual hemorrhage observed in the first trimester was previously associated with later adverse pregnancy outcomes, including fetal loss, PE, abruption, FGR, and PTB [42]. During the second half of pregnancy, independent risk factors for bleeding were reported, including oligo-and polyhydramnios, FGR, previous abortions, and advanced maternal age [43]. Bleeding was in our study more frequent among FGR cases, when compared to healthy controls, but the difference was statistically insignificant. It was also suggested from the first trimester that adverse pregnancy conditions, possibly involved in fetal nutrient restriction, such as smoking, cocaine use, chronic hypertension, anemia, and chronic DM, lead to symmetrical FGR [44]. However, in our study, DM was similarly distributed between the women with FGR and healthy controls.
Regarding subclinical hypothyroidism (SCH), it has recently been linked to FGR, although no effects on the risk of the disease were found for SCH, thyroid peroxidase antibody and isolated hypothyroxinemia [45]. In Bangladesh, overt hypothyroidism predisposed women to pregnancy-induced hypertension, FGR, and gestational diabetes, compared to the subclinical disease [46]. In line with our outcomes, hypothyroidism had a similar distribution among the groups of enrolled pregnant women, although a slightly higher prevalence was observed for FGR, as compared to control cases. Previously, no differences were observed in the incidence of FGR between the groups of women without bleeding and threatened abortion [47]. In our study, threatened miscarriage was found only in the women with FGR, as cases with that diagnosis were excluded from the healthy control group. Considering the mean platelet count, it was observed that the values decreased during gestation in women with pregnancy-related complications, as well as in healthy subjects, starting from the first trimester [48]. However, gestational thrombocytopenia at delivery was more common in the women with adverse pregnancy outcomes compared to healthy controls [48]. In the current study, gestational thrombocytopenia was also more prevalent among the women with FGR, particularly those with the early-onset disease, compared to the healthy controls. Regarding urogenital infections, an increased risk of FGR was previously determined in cases of vaginal and cervical infections with Bacteroides, Prevotella, Porphyromonas spp., Mycoplasma hominis, Ureaplasma urealyticum, and Trichomonas vaginalis [49]. In another study, a colonization of the maternal genital tract with Chlamydia trachomatis and Candida albicans was also associated with FGR [50]. Currently, we observed a similar distribution of urogenital infections in the studied groups of pregnant women.
In conclusion, the heterozygous AT genotype in HLX rs868058 can be considered a significant risk factor for the development of early-onset FGR, regardless of adverse pregnancy outcomes in women. Although early-onset FGR is now diagnosed without difficulty, it is still an incurable disorder of pregnancy. The only available therapeutic strategy for the management of early-onset FGR is to monitor and terminate pregnancy when the risk associated with fetal immaturity is lower than the risk of intrauterine death. It would be extremely important to understand the signaling pathways involved in the heterozygous AT state in HLX rs868058 in order to identify targets for new therapeutic approaches that are still needed to treat early-onset FGR.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biology11030447/s1. Table S1: Characteristics of women with early and late-onset fetal growth restriction. Table S2: Selected PCR-RFLP parameters for genotyping of single nucleotide polymorphisms, localized in HLX, ITGAV, and ANGPT2 genes. Table S3: Distribution of the genotypes from ANGPT2, DLX3, HLX, and ITGAV polymorphisms between women with FGR and healthy controls. Table S4: Distribution of HLX rs868058 genotypes between women with early-onset and late-onset FGR, adjusted by adverse pregnancy outcomes. Table S5: Distribution of the alleles from the ANGPT2, DLX3, HLX, and ITGAV SNPs between women with fetal growth restriction and healthy controls. Table S6: Distribution of the alleles from the ANGPT2, DLX3, HLX, and ITGAV polymorphisms between women with early and late-onset FGR. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Author
Data Availability Statement: All data and materials as well as software application support the published claims and comply with field standards. | 2022-03-19T15:15:13.841Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "a007251e5603e61cc7bbfba40a60cd73d20dba86",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-7737/11/3/447/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "744f0412b82ed3d6be9dea0be9e31838a22a951c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
20402226 | pes2o/s2orc | v3-fos-license | Elizabethkingia anophelis: Clinical Experience of an Academic Health System in Southeastern Wisconsin
Abstract In late 2015 and early 2016, 11 patients were identified with cultures positive for Elizabethkingia anophelis in our health system. All patients had positive blood cultures upon admission. Chart review showed that all had major comorbidities and recent health care exposure. The attributable mortality rate was 18.2%.
This was a retrospective case series of all consecutive patients admitted to Froedtert Health System hospitals with positive cultures for Elizabethkingia, Flavobacterium, and Chryseobacterium from November 2015 to June 2016. Froedtert Health is located in Southeastern Wisconsin and is comprised of Froedtert Memorial Lutheran Hospital in Milwaukee (total staffed beds, 518; total patient-days per year, 140 279), Community Memorial Hospital in Menomonee Falls (total staffed beds, 202; total patient-days per year, 30 227) and St. Joseph's Hospital in West Bend (total staffed beds, 70; total patient-days per year, 15 765). The microbiology records were retrospectively searched for cultures positive for Elizabethkingia spp., Flavobacterium spp., and Chryseobacterium spp. [1] from specimens obtained from any body site, with specimen collection dates since January 1, 2011, as requested by the Bureau of Communicable Diseases, Wisconsin Division of Public Health. Medical records of identified cases were summarized after chart review. Patient information collected for the study included demographic data (age, sex, county of residence), past medical history, clinical presentation, culture source, antibiotic therapy, previous contact with hospitals within our health care system 30 days prior to detection of Elizabethkingia, and mortality data. Blood cultures not identified by the Verigene system (Luminex Madison, WI) underwent cultivation on routine media, with preliminary identification and susceptibility testing utilizing matrix-assisted laser desorption/ionization time-of-flight (MALDI-ToF MS, Bruker Daltonics Inc., Billerica, MA) and susceptibilities performed by BD Phoenix (
RESULTS
During the 8 months of surveillance, a total of 13 patients were identified. Eleven (84.6%) were identified upon admission to our system (Table 1), and 2 (15.4%) were positive at transferring facilities prior to admission to our system. Of the 11 cases identified in our system, 6 were initially identified as F. meningosepticum and 4 as E. meningoseptica. Of the 11 patients recognized in our system, 54.6% were female and the mean age was 70.8 years (range, 49-84 years).
CONCLUSIONS
From November 2015 to June 2016, 11 patients were identified in our health system with cultures positive for E. anophelis. All patients had positive blood cultures at the time of hospital admission. E. anophelis was identified in both sterile and nonsterile body fluids. All patients had at least 1 major comorbidity, including but not limited to cancer, chronic obstructive pulmonary disease, diabetes, end-stage renal disease requiring hemodialysis, and alcohol abuse. The mortality rate was high (18.2% of patients admitted to our hospital system), and 1 patient died 4 months after initial isolation. These findings seem to reflect the information released by state and federal authorities [7,8].
Outbreaks by Elizabethkingia have been previously described in the literature [2][3][4]6]. Most of these reports are attributed to E. meningoseptica, but misidentification of E. anophelis as E. meningoseptica could underestimate the number of cases attributed to this species. This phenomenon has been attributed to MALDI-TOF without up-to-date reference spectrum databases [9]; therefore, the increased number of cases could be related to better detection rather than emergence of a new pathogen. There have been descriptions of neonatal transmission causing neonatal meningitis, with molecular evidence suggesting vertical transmission from a mother with chorioamnionitis. The mechanism of colonization of the mother could not be identified. Environmental contamination was not found [6]. In contrast, another outbreak report described contaminated taps and aerators with this organism in an intensive care unit [3]. In this report, Elizabethkingia infection was associated with systemic inflammatory response syndrome in more than half of patients with isolation of E. meningoseptica. Five patients died (mortality rate, 17%) during this outbreak. Acquisition was associated with contaminated water sources in the critical care unit.
Our study is limited to patients admitted to hospital facilities within a narrow geographical area experiencing an outbreak by a single strain [9]. This might not be representative of the complete scope of infections with E. anophelis. During the outbreak investigation, the Wisconsin Division of Public Health cultured sinks and faucets in our facilities and failed to isolate Elizabethkingia. This suggests that environmental contamination was not a salient feature of this outbreak. The mortality rate noted in our health care system seems to be within the range of previous reports. The isolation of E. anophelis in blood cultures in all cases is striking, which could suggest either dissemination from nonsterile sites or direct contamination into the bloodstream. We cannot rule out a long incubation period prior to development of symptoms based on our findings, but the initial presentation of bacteremia in all patients makes this hypothesis less likely.
Our observations suggest that E. anophelis is a true pathogen, especially in patients with multiple comorbidities. Isolation of Elizabethkingia in sterile fluid should never be considered a contaminant. Combination antibiotic therapy pending susceptibility data is recommended, given the presence of multiple mechanisms of antimicrobial resistance [2,6,9]. Empirical treatment should include piperacillin/tazobactam plus quinolone, rifampin, or minocycline. Vancomycin has been used in severe infections, especially in meningitis [2]. Duration of therapy has not been evaluated in clinical trials. It should be dictated by the clinical situation, with prolonged courses for deep-seated infection. Treatment can be complicated by the ability of E. anophelis to develop biofilms [3,4]. Ruling out meningitis in a patient with altered mental status is crucial, given the ability of this organism to invade the central nervous system [6]. From an infection prevention standpoint, active surveillance including environmental sampling of hospital water system components, infusion containers, and parenteral and antiseptic solutions, along with enhanced terminal cleaning and the establishment of contact isolation precautions to contain dissemination, have been advocated in outbreaks involving hospital areas caring for patients with complex medical issues [2,3,7]. The clinical community ought to increase their awareness of Elizabethkingia as an emerging pathogen, and further studies should be performed to better determine its pathogenicity, modes of transmission, and optimal treatment. | 2018-04-03T00:48:17.934Z | 2016-10-28T00:00:00.000 | {
"year": 2017,
"sha1": "8a0b793c747875e1c04457efc9d0c418d4d89138",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/ofid/article-pdf/4/4/ofx251/23168502/ofx251.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a0b793c747875e1c04457efc9d0c418d4d89138",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
73622192 | pes2o/s2orc | v3-fos-license | Current Treatment for Carpal Tunnel Syndrome
The combination of a surgical procedure (open or endoscopic techniques), rehabilitation and antioxidant therapy (alpha lipoic acid, curcumin) is superior to monotherapies in the prognosis and recovery of patients with this pathology. Given their mechanisms of action, these medications should be started prior to decompression surgery, and patients should continue receiving them during the rehabilitation period. Clinical and electrophysiological follow-up is required to verify improvement.
Definition
The American Academy of Orthopedic Surgeons (AAOS) defines carpal tunnel syndrome (CTS) as the most common form of entrapment neuropathy of the median nerve. The syndrome affects 3.8% of the general population [1], with a combined incidence across both genders of 376 per 100,000 US inhabitants [2], and its prevalence usually varies with the risk factors of a specific population; a study among poultry processing employees reported an estimated prevalence of 42%. CTS is one of the most common clinical problems encountered by hand surgeons. Although this syndrome is widely recognized, its etiology remains largely unclear [3].
Anatomy
The median nerve (MN) derives from the brachial plexus as a terminal branch of the medial and lateral cord. The fibers from the lateral cord (C6-7) provide sensitivity to the thumb, the index and the middle finger, as well as the motor fibers of the proximal muscles innervated by the median nerve (palmar muscles, pronator teres muscle). The medial cord (C8-T1) supplies most of the motor fibers to the distal muscles of the forearm and the hand, as well as the sensitivity to the external part of the ring finger. The MN descends through the arm without creating any branches until it reaches the forearm, just beneath the head of the pronator teres muscle, where its most important branch originates, the anterior interosseous nerve of the forearm. This nerve supplies the flexor pollicis longus, flexor digitorum profundus and pronator quadratus. Multiple muscular branches arise from the MN during its path, which supply the pronator teres, flexor carpi radialis, palmaris longus and flexor digitorum superficialis muscles. Proximal to the wrist and the carpal tunnel (CT), the palmar branch of the median nerve emerges and innervates the skin of the thenar eminence [4] (Figure 1).
In the palm of the hand, the MN sends a motor branch to the lumbricals of the index and the middle finger as well as a recurrent branch to innervate the muscles of the thenar eminence (abductor pollicis brevis, flexor pollicis brevis and opponens pollicis). The proper palmar digital nerves are sensory fibers that supply the skin of the index and the middle fingers as well as the medial part of the ring finger and medial surface of the thumb [4] (Figure 1). There are multiple anatomical variants in the path and distribution of the MN, which could be present in up to 11% of the population. The most common variants are the presence of a medial residual artery, which is an embryological remnant that usually suffers a regression in the second trimester but could persist in 5% of the population, and it is also related to the presence of the CTS, and the anastomosis of Martin Gruber, which is a motor communication between the median and ulnar nerve at the forearm, which could have its origin from the union of the principal fibers of these nerves or in the anastomosis of the anterior interosseous nerve with the ulnar nerve and it is present in 5-10% of the population [5].
The carpal tunnel connects the anterior compartment of the forearm to the palm of the hand. It is delimited medially by the pisiform bone, laterally by the hamulus of the unciform bone, posteriorly by the scaphoid bone and the trapezoid bone and its roof by the transverse carpal ligament. It can be divided into three portions:
Physiopathology of carpal tunnel syndrome
CTS is generally regarded as a disorder caused by a mismatch between the size of the contents of the carpal tunnel and the space delimited by its fibrous and osseous structures. This mismatch produces compression of the median nerve and alters its blood supply. The compression of the components within the carpal tunnel induces venous congestion and epineural edema, consequently inducing fibroblast invasion in the affected tissue and causing constriction and fibrosis of the endoneural compartment of the median nerve. The edema and the epineural and endoneural compression interrupt the axoplasmic flow of nutrients and ions and cause the median nerve to become enlarged [6].
Furthermore, the most common diagnosis is idiopathic CTS; nevertheless, recent studies that used magnetic resonance imaging (MRI), histological and biomechanical techniques have strongly suggested that abnormalities of the synovial tissue within the carpal tunnel are closely related to the development of idiopathic CTS, which means that subsynovial connective tissue may be predisposed to shear injury from activity done in 60° of wrist flexion [7].
Effect of ischemia/reperfusion in the progression of the nervous injury in the carpal tunnel syndrome
The symptoms of CTS are caused by increased pressure within the carpal tunnel, and therefore, a decreased function of the median nerve. Nerve damage is attributed to restriction of blood flow in the endoneural capillary system, leading to alterations in the blood-nerve barrier structure and resulting in endoneural edema, venous congestion, ischemia and subsequent metabolic abnormalities. The ischemia-reperfusion injury of the median nerve results in oxidative stress and inflammation of the subsynovial connective tissue, and it has been proposed that this could be a major contributor to the evolution of idiopathic CTS.
The intermittent compression of the vascular-nervous plexus due to a reduction of the lumen of the carpal tunnel is one of the pathophysiological processes suspected to cause the development of CTS [1][2][3][4][5][6]. Nervous tissue has a very small capacity to tolerate ischemia (<20 min), which makes it very vulnerable to damage [2][3][4][6][7][8][9][10]; the narrowing of the carpal tunnel is intermittent but persistent, which means that the injury does not present acutely but rather progresses chronically [8][9][10].
The ischemia/reperfusion (I/R) phenomenon begins with an occlusion of arterial or venous blood flow to a tissue or an organ (ischemia); this interruption of perfusion produces a direct injury in a limited area, which occurs within a specific time window that depends on the affected organ (musculoskeletal, cardiac, renal, neuronal, adipose, tendinous, etc.). When blood flow is restored (reperfusion), multiple local and systemic mechanisms are activated in the affected area, which amplify the damage; this is known as I/R injury. Its extent depends on the perfusion territory of the affected vessel, the duration of ischemia and the number of repeated I/R events. Initially there is an acute lesion proper to the phenomenon, followed by a greater extension of the damage secondary to the repetition of I/R events, since in CTS they occur in an intermittent and prolonged way [7,[10][11][12].
Components of the I/R injury in the carpal tunnel syndrome
There are multiple components in the I/R phenomenon [10,12]; however, the most important components in the pathological development of the carpal tunnel are as follows: (1) increase in the cytosolic cation concentration (change in membrane permeability), (2) mitochondrial lesion (alteration of ATP production and oxidative stress), (3) oxidative stress (production of reactive oxygen and nitrogen species coupled with disruption of redox reactions), (4) immunity-mediated lesion and (5) transcriptional reprogramming [11,12] (Figure 3).
Alteration of the cellular membrane permeability
The activation of multiple transporters in the cell membrane during the I/R phenomenon leads to major changes (related to the cellular microenvironment) in the calcium (Ca²⁺) and sodium (Na⁺) concentrations. The movement of these cations through the membrane is accompanied by water molecules that migrate from the extracellular space into the intracellular space and vice versa, changing the cellular volume and leading to cellular and interstitial edema [9,13,14]. This edema, expressed exclusively by the affected tissue, impacts all the adjacent cellular groups, such as the nervous, adipose, muscular and endothelial cells of the carpal tunnel structures, and it can be accompanied by nervous or endothelial injury that manifests as changes in the volumes of water in the different compartments. These changes produce a secondary interstitial compartment syndrome followed by edema, extended ischemia and local necrosis [9,[13][14][15].
Mitochondrial lesion
The mitochondria participate in multiple cellular activities such as ATP production and modulation of the redox state of the cell. During the injury process, ATP production is interrupted by blockade of respiratory chain complexes (complexes III and IV), depletion of metabolic substrates (ADP, Pi, pyruvate, etc.) and high production of nitric oxide (NO•) [15,16].
The mitochondrial injury translates as a failure to adapt to the deprivation of oxygen and an OS overload to the enzymatic scavenger of the mitochondria in the affected cells by I/R phenomenon [16,17]. If the lesion is substantial, it will be followed by fission processes of the mitochondria (fragmentation), loss of function and loss of the membrane potential [16,17].
Oxidative stress and signaling redox
The blockage of the respiratory chain in the mitochondria increases the amount of oxidative stress (OS), leading to the production of reactive oxygen species (ROS) such as the superoxide anion (O₂•⁻) and, mainly, hydrogen peroxide (H₂O₂). The increased production of NO•, combined with the overproduction of ROS (in the reperfusion phase) and the overload of the mitochondrial scavenger systems, magnifies the production of reactive nitrogen species (RNS), primarily peroxynitrite (ONOO⁻) [18][19][20].
In cells that present mitochondrial injury coupled with loss of the mitochondrial membrane potential, the "point of safe return" (PSR), the mitochondrial permeability transition pores (mPTP) open. The opening of these pores liberates the mitochondrial ROS and RNS gathered in the matrix, which subsequently interact with cellular components (mostly lipids, proteins and nucleic acids), corrupting their function and triggering mechanisms of cellular apoptosis and autophagy [19][20][21].
The production of ROS and RNS (OS) alters the function of the cell and changes the redox signaling of the adjacent surviving cells [22,23], as some free radical species (e.g., ONOO⁻) have the capacity to travel up to 400 nm, disturbing the function and configuration of membrane components and organelles in other healthy cells. Through modified redox signaling, the affected cells that survive the initial lesion can trigger a transcriptional reprogramming that leads them to express genes of cellular injury, such as pro-inflammatory cytokine receptors, making them susceptible to apoptosis induced by immune cells [23].
Immunity-mediated lesion
The I/R lesion activates three types of inflammatory responses: sterile, adaptive and innate [11,24]. The three kinds of immunity are activated by the OS generated in the mitochondria of the cells affected by the initial lesion and by the liberation of cellular material from necrotic cells (DNA, RNA, lysosomes, proteases, glucosidases, ATP, ADP, etc.), which secondarily results in the activation and differentiation of inflammatory cells, producing an adaptive response. In the I/R lesion, toll-like receptors (TLRs), principally TLR2 and TLR4, are expressed in the cell membrane [25]. Sterile immunity is characterized by the recruitment of neutrophils and macrophages and also by the production of pro-inflammatory and anti-inflammatory cytokines such as IL-1β, IL-6, TNF-α and IL-10 liberated by the damaged cells. The expression of pro-inflammatory cytokines induces local activation of the immune system, causing necrosis of the previously injured cells and increasing the extension of the lesion [26].
The activation of multiple inflammatory systems due to the I/R phenomenon will involve a continuous inflammation that will persist while compression exists, increasing the damage; therefore, the recovery of this process will also depend on the chronicity of the injury. The transcriptional reprogramming of injured tissues will produce a change in some cellular groups, causing the expansion not only of acute but also of chronic and permanent lesion by the presence of fibrosis in the damaged area [27].
Transcriptional reprogramming
The process of increased OS, disrupted redox signaling and inflammation results in transcriptional reprogramming, in which specific injured cell groups undergo a change in their structure and function called the epithelial-mesenchymal transition. This converts the cells into a pro-fibrotic phenotype, promoting permanent tissue lesion and dysfunction of the limb [28,29].
Clinical evaluation in CTS
In an individual with classical carpal tunnel syndrome, the most common symptom is pain accompanied by weakness and numbness of the hand in the median nerve distribution. The pain in CTS is characterized by two main pathophysiological processes: (1) acute ischemic pain due to compression and (2) chronic pain due to inflammation; nervous tissue is the tissue most susceptible to changes in oxygen and metabolic substrates [30]. The secondary lesion or cell death manifests as sensory alterations and dysfunction, and the chronic lesion leads to the formation of fibrosis and permanent injury [31].
Two clinical provocation tests are useful to demonstrate severity and monitor progression. Tinel's test is applied by tapping over the median nerve as it passes through the carpal tunnel; a positive response is defined as a sensation of tingling in the distribution of the median nerve in the hand. Phalen's test is performed by holding the wrist in full flexion for 60 s; a positive response is defined as a sensation of tingling in the distribution of the median nerve in the hand [32].
The Boston Carpal Tunnel Questionnaire (BCTQ) is an easy, brief, self-administered questionnaire developed by Levine et al. for the assessment of symptom severity and functional status of patients with CTS. A validated version of the 11-item Boston Questionnaire for CTS (score range 11-55) is an evaluation instrument that was recognized as reproducible, valid, with internal consistency and able to respond to clinical changes, accepted in many countries for the assessment of severity of symptoms and functional status of patients, evaluating how the syndrome affects daily life [33] and the follow-up of progression ( Table 1).
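Because the questionnaire described above is scored by simply summing its 11 items, a minimal Python sketch of that tallying is shown below; the answers and the helper name are hypothetical, and this is not an official BCTQ implementation.

```python
# Illustrative tally of the 11-item BCTQ version described above (items scored 1-5, total 11-55).
def bctq_score(responses):
    if len(responses) != 11 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 11 item responses, each scored 1-5")
    total = sum(responses)
    return total, total / 11  # total score and per-item mean

total, mean = bctq_score([3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2])  # hypothetical answers
print(f"BCTQ total={total} (range 11-55), item mean={mean:.2f}")
```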
The severity of CTS is divided into three stages, as follows: 1. The symptoms presented during the first stage are: waking up with a sensation of stiffness, numbness and weakness of the hand; perceiving the hand as swollen even though an increase in volume is not visible; and pain of variable intensity that radiates to the shoulder, also called brachialgia paresthetica nocturna. The pain is mitigated by shaking or flicking the hand.
2. In the second stage, the symptoms progress to being constant all day. Repeated hand or wrist motion and immobility of the hand for long periods of time may exacerbate the symptoms. At the moment of gripping objects, patients may also feel clumsiness or awkwardness.

3. The third stage is characterized by hypotrophy or atrophy of the thenar eminence, with a variable loss in the sensibility [31].
Electrophysiology
Nerve conduction studies (NCS) should be performed immediately before conservative treatment to follow progression and, in case surgery is required, before surgery and during recovery for at least 3 months afterwards. Electrophysiology recordings from the median nerve can be analyzed in the context of Dumitru's reference values: distal sensory latency 3.0 ± 0.3 ms, distal sensory amplitude 15-50 μV, distal motor latency 4.2 ms and distal motor amplitude 13.2 ± 5 mV [34].
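As a simple illustration of how such recordings can be screened against the reference values quoted above, the following Python sketch flags measurements outside those limits; treating a "mean ± SD" figure as a mean ± 2 SD cutoff is our illustrative assumption, not a criterion taken from Dumitru or from this article.

```python
# Flag median-nerve conduction values against the Dumitru reference figures quoted above.
# The 2*SD cutoffs for the "mean ± SD" values are an illustrative assumption.
DUMITRU = {
    "distal_sensory_latency_ms":   {"upper": 3.0 + 2 * 0.3},   # prolonged if above
    "distal_sensory_amplitude_uV": {"lower": 15.0},             # reduced if below
    "distal_motor_latency_ms":     {"upper": 4.2},              # prolonged if above
    "distal_motor_amplitude_mV":   {"lower": 13.2 - 2 * 5.0},   # reduced if below
}

def flag_abnormal(measurements):
    flags = {}
    for name, value in measurements.items():
        ref = DUMITRU[name]
        high = "upper" in ref and value > ref["upper"]
        low = "lower" in ref and value < ref["lower"]
        flags[name] = "abnormal" if (high or low) else "within reference"
    return flags

# Hypothetical patient recording
print(flag_abnormal({"distal_sensory_latency_ms": 4.1,
                     "distal_motor_latency_ms": 4.8,
                     "distal_motor_amplitude_mV": 6.0}))
```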
Treatments of CTS
Patients with mild or moderate CTS may first be offered conservative treatment. Options include splinting [35], corticosteroid therapy [36], physical therapy and therapeutic ultrasound [37,38]. Patients with severe CTS and those whose symptoms have not improved after 4-6 months of conservative therapy should undergo surgical decompression. Endoscopic or open techniques are equally effective [39]. Clinical and neurophysiological improvements can be observed within the first 3 months of surgery, but up to 20% of patients may experience persistent postoperative sensory symptoms [40,41].
Surgical procedure
The standard technique of open carpal tunnel release has proven to be effective and safe [42,43]. The classical technique consists of a 7-cm curved incision just ulnar to the thenar crease [43,44]. According to various studies, there is no major advantage of a short incision in the palm; its main advantage is the size of the scar, which is shorter than with the classical incision. The time of return to work does not show any significant differences between the endoscopic, short-incision and classical techniques according to some studies [43,44].
Endoscopic techniques to release the carpal tunnel, either single- [44] or double-portal [45], reduce morbidity and have a faster recovery period. Even though it offers theoretical advantages of reduced postoperative pain, quicker recovery of grip strength, fewer complications and faster return to everyday activities, endoscopic carpal tunnel release has not been as widely adopted as the open techniques. It is worth pointing out that the risk of nerve injury increases with these types of techniques [46].
From pathophysiology to therapeutics
The chronic lesion process depends on the repetitive induction of the ischemia/reperfusion (I/R) phenomenon; accordingly, the main pharmacological therapeutic targets for reversing or halting the progression of the lesion are the modulation of oxidative stress (OS) and of inflammation.
The increase in OS production by the mitochondria during the I/R phenomenon is a key point to explore in the therapeutic approach to CTS, since the overproduction of ROS and RNS is involved in the activation of the inflammatory response and in injury to the nerve cell [10,16,23]. The stimulation of inflammation secondarily increases the production of OS; consequently, therapy with antioxidants is indicated in both the acute and the chronic phases of CTS.
The use of different antioxidants has been widely explored in the I/R phenomenon; however, only a few investigations have been carried out, and these have had ambiguous results. From the point of view of evidence-based medicine, the use of antioxidants in CTS treatment can produce a beneficial effect when it accompanies decompression surgery: surgery removes the intermittent ischemic compression exerted by the adjacent tissues of the tunnel walls, while the antioxidants reduce and modulate the OS generated by the I/R phenomenon.
Nowadays, the phytodrugs most frequently used in CTS management that have shown positive results and act on OS are as follows: 1. Alpha lipoic acid
Alpha lipoic acid
It is an essential substance for the function of different enzymatic components of the cells. It acts as a metal chelator, reduces free radicals and inflammation, and modulates redox signaling.
What stands out about this agent are its antioxidant, neuroprotective and neurotrophic properties. It exists in two isomeric forms, R and S, of which the R form plays a significant role in pyruvate metabolism and is used in the mitochondria for ATP generation. Its quality as a metal chelator is based on its interaction with divalent transition metals (Mn2+, Cu2+, Pb2+ and Zn2+), while the reduced form of ALA, dihydrolipoic acid (DHLA) (Figure 4), has the capacity to interact with Hg2+ and Fe2+. Its modulation of inflammation occurs through the nuclear factor kappa-B (NF-kB) pathway: it has been reported that degradation of the IkB inhibitor can be suppressed by multiple mechanisms, supporting a reduction in the production of pro-inflammatory cytokines. The therapeutic doses that have been used for different pathologies range between 100 and 1200 mg/day; at least 2 weeks of treatment are needed for positive results, and ALA has been used safely for up to 4 years in clinical trials [47] (Figure 4).
Curcumin
It is an herbal polyphenol compound with potent anti-inflammatory and antioxidant properties, extracted from Curcuma longa, which has multiple therapeutic effects in different pathologies (cancer, autoimmunity, inflammation, metabolism, etc.). It has a wide distribution and a favorable therapeutic range; however, its absorption is limited and its plasma half-life is short.
A new formulation has given this phytodrug the capacity to achieve higher plasma concentrations, absorption and distribution, but more clinical studies need to be performed to confirm this [48] (Figure 5).
Antioxidant mechanism of ALA and curcumin
Alpha lipoic acid and curcumin act as OS scavengers through four mechanisms: (1) radical adduct formation (RAF), (2) hydrogen transfer (HT), (3) single electron transfer (SET) and (4) activity in environments of different polarity. All of these activities contribute to an exogenous reduction of ROS and RNS, modulating redox signaling, decreasing inflammation and mitochondrial damage, and modulating the epithelial-mesenchymal transition, which plays a key role in the establishment of permanent lesions and progressive pain in the patient [49].
Clinical studies using antioxidants in CTS
Several investigators have reported favorable results with ALA as monotherapy or in combination with other antioxidants in CTS. The clinical trial conducted by Pajardi et al. [50] reported satisfactory clinical recovery from CTS with a combination of ALA, curcumin and vitamin B complex. Di Geronimo et al. [51] showed that clinical symptoms and neurophysiological outcomes were superior in a group that took a combination of ALA and gamma-linolenic acid compared with a group that took vitamin B complex. Notarnicola et al. [52] compared the efficacy of shock wave therapy with that of a nutraceutical therapy composed of ALA, linoleic acid, quercetin and Echinacea in CTS; both groups showed clinical and electrodiagnostic improvement. Boriani et al. [53] recently published the effect of ALA in carpal tunnel syndrome: they used ALA as monotherapy for 40 days after surgery and showed a good electrophysiological and clinical response. However, longer treatment periods are necessary to observe recovery of the nerve, because during the first 30 days after surgery the healing process induces an immature and intense fibrosis and recovery of the nerve does not yet occur.
Disclosure statement
The authors report no conflict of interests. | 2018-12-30T12:34:29.785Z | 2017-12-20T00:00:00.000 | {
"year": 2018,
"sha1": "f587d6cddf5f191656897a76595af1850ed9dba6",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/58645",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "1fa62a3801aff9375ddc3f38fc5c8fb2207f3074",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252978181 | pes2o/s2orc | v3-fos-license | A Practical Framework for Academics to Implement Public Engagement Interventions and Measure Their Impact
Academic institutions have shown an increased interest in the so-called third mission to offer an impactful contribution to society. Indeed, public engagement programs ensure knowledge transfer and help to inspire positive public discourse. We aimed to propose a comprehensive framework for academic institutions planning to implement a public engagement intervention and to suggest potential indicators to measure its impact. To inform the framework development, we searched the literature on public engagement, the third mission, and design theory in electronic databases and additional sources (e.g., academic recommendations) and partnered with a communication agency offering non-academic advice. In line with this framework, we designed a public engagement intervention to foster scientific literacy in Italian youth, actively involving them in the development of the intervention. Our framework is composed of four phases (planning/design, implementation, immediate impact assessment, and medium- and long-term assessment). Impact indicators were subdivided into outcome variables that were immediately describable (e.g., changed understanding and awareness of the target population) and measurable only in the medium or long run (e.g., adoption of the intervention by other institutions). The framework is expected to maximize the impact of public engagement interventions and ultimately lead to better reciprocal listening and mutual understanding between academia and the public.
Introduction
Aside from teaching and research activities, academic institutions have recently shown an increased interest in the so-called third mission-a term used especially in the European academic context-to offer an impactful contribution to society [1][2][3][4] by transmitting knowledge and technological achievements and innovations to the industrial sector and the public [2]. Molas-Gallart and colleagues [5] (pp. iii-iv) provided a rather broad definition of the concept of a third mission, stating that it is "concerned with the generation, use, application and exploitation of knowledge and other university capabilities outside academic environments. In other words, [the third mission] is about the interaction between universities and the rest of society." The more than 200 land grant colleges and universities in the United States follow a very similar concept, albeit named differently [6,7]. Fulfilling their so-called "extension mission", these land grant institutions do not only teach and conduct research but "extend" their resources, disseminating results to the public, offering research-based programs and courses on various relevant topics, and providing practical support with the aim of improving the lives of individual persons, families, and communities within their state [6,7].
Public engagement and involvement programs, which are an important part of third mission activities, ensure knowledge transfer, help to inspire positive public discourse, and encourage interaction between researchers and citizens. These interventions can help to instill curiosity about science in citizens and foster scientific literacy, critical thinking, and scientific confidence. On the other hand, the public can help to drive new research questions by sharing precious insights and experiences (see Figure 1). Concrete examples of public engagement activities include online and in-person lectures for the general public [8], academic podcasting [9], digital engagement activities [10], science festivals (e.g., the European Researchers' Night [11]), and book exhibitions [12], as well as activities for children and adolescents, such as STEM programs for girls or marginalized youth [13,14], hands-on art-making activities [15], kids' universities [16], and family science workshops [17].
Similarly, patient engagement has recently become a key principle of patient-centered healthcare, fostering patient independence, and likely resulting in better health outcomes (e.g., patient compliance) [18][19][20][21]. Actively listening to patient experiences and insights can also help to drive new research directions, encourage innovation, and improve patient safety [22,23]. Further, building community-academic partnerships, as is done for instance at the Johns Hopkins Hospital in the United States, can help to establish health equity in marginalized communities [24,25].
To be effective and tailored to the respective target population, third mission activities should follow systematic approaches. Although several general methods for strategic project planning and management exist, up to now, too little attention has been paid to the design of comprehensive techniques for the development of public engagement interventions. Therefore, we aimed to propose a detailed, structured framework for academic institutions planning to implement public engagement interventions; to suggest potential indicators to measure its immediate-, medium-, and long-term impact; and to outline methods for dissemination.
We then partnered with a Verona-based communication agency (Forest Hand), which offered non-academic advice and valuable insight into the field of communication studies and science communication during several meetings. In particular, the agency provided practical recommendations for the development of the framework and shared their experience in the application of the canvas model in different settings. A set of meetings was organized in which our research team was invited to complete the canvas model under the supervision of the communication agency. In these meetings, questions, critical issues regarding the canvas model, and the timeline and structure of the framework were discussed and resolved.
Finally, we determined the critical variables discussed in the retrieved material, reflected on their relationships, and used the agency's practical suggestions to construct the framework [22,52,53].
Framework for Public Engagement Interventions
The framework is subdivided into four phases (i.e., planning and design; implementation; immediate impact assessment; and medium- and long-term impact assessment), each with different steps (Figure 2). It focuses on the engagement of stakeholders, the target population, and the wider public. It should be noted that the framework is only indicative and may be updated according to demand. In the following, the different phases of the framework are described.
Planning and Design Phase
The research group should allocate an appropriate amount of time to this first phase, as many aspects must be discussed and critical decisions made.
Conducting a situational analysis at the very beginning is critical when assessing whether the public engagement intervention is warranted [31]. In most cases, it includes the SWOT analysis, and if the long-term effects of the program shall be examined, PEST analysis is undertaken to ensure the success of the project and its sustainability. These two techniques complement each other and are often used in conjunction [32].
Indeed, a SWOT analysis is a strategic planning method that is used to detect and evaluate certain factors (i.e., internal strengths and weaknesses, external opportunities and threats) linked to an organization or project that might influence current tasks and future activities [33]. In contrast to the SWOT analysis, the PEST analysis looks at a bigger picture, namely, the political, economic, socio-cultural, and technological changes the organization or project might be exposed to. This allows for the detection of opportunities and the assessment of current and future risks and threats [32].
Both SWOT and PEST analyses provide a rich background for the selection of the main topic and the objectives of the intervention. In this early phase, the research group should remain open to exploring different ideas for topics and reviewing the related research. Indeed, if there are several preliminary topics that seem to be equally interesting and choosing one is difficult, it is advisable to continue with the other steps in the planning and design phase and circle back to the question later.
To define the objectives of the public engagement intervention, the researchers need to ask themselves what they want to achieve and what kind of difference they want to make. Do they want, for instance, to increase knowledge, expand awareness, receive feedback on their own research, change points of view, or promote critical thinking? It is also recommended to define SMART aims (i.e., specific, measurable, achievable, relevant, and time-based) [54].
Defining the aim(s) is a particularly important step, as it is closely linked to another step in the planning phase, namely, the identification of the impact. Indeed, when discussing the potential aims of the intervention, the research group might consider whether they want to achieve a broad impact (i.e., the activity/intervention reaches a large group of people), a deep impact (i.e., the activity/intervention addresses a small number of people), or both types of impact. In the latter case, for instance, a combination of activities addressed to both small and large groups is possible [55].
When defining the objectives of the intervention, the research group should be aware that the same goal can often be achieved in different manners [34]. Thus, the use of the so-called analysis of alternatives (AOA) is suggested throughout the planning and design phase. This specific technique evaluates the various options that are available for the achievement of a specific objective. It can help to identify the most resource-oriented option that has the highest probability to reach a certain desired impact. Methods such as life-cycle costing and cost-benefit analyses are often applied in the course of the AOA [35].
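Purely as a toy illustration of the cost-benefit side of an AOA, the sketch below ranks candidate options by a simple benefit/cost ratio; the option names, benefit scores and costs are invented placeholders, and this scoring scheme is a simplification rather than a method prescribed by the cited sources.

```python
# Hypothetical analysis-of-alternatives sketch: rank options by benefit/cost.
# Options, scores and costs are invented placeholders for illustration.
options = [
    {"name": "school workshops", "expected_benefit": 70, "cost_eur": 4000},
    {"name": "video series",     "expected_benefit": 55, "cost_eur": 2500},
    {"name": "science festival", "expected_benefit": 90, "cost_eur": 12000},
]

for opt in options:
    opt["benefit_per_eur"] = opt["expected_benefit"] / opt["cost_eur"]

# Highest benefit per euro first
for opt in sorted(options, key=lambda o: o["benefit_per_eur"], reverse=True):
    print(f'{opt["name"]}: {opt["benefit_per_eur"]:.4f} benefit points per euro')
```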
The identification of the target group (i.e., the audience that will be actively involved in the public engagement intervention and benefit from it), and if necessary, of certain subgroups, depends on the chosen topic and the public engagement aims. The research group should try to be as specific as possible in doing this since a detailed description of the specific target will also help later to identify the related needs and the stakeholders that must be involved.
When defining the target group, the research group should ensure that they are inclusive and accessible, foresee potential barriers to attending and participating for some groups (e.g., physical barriers, financial barriers, concerns among LGBTQ+ people regarding anonymity, learning difficulties), and offer support and encouragement [38,48,56]. Consulting the involved stakeholders throughout the project and gathering feedback from the participants can help the research group to navigate this at times challenging path [56]. Reed [57] also underlined how important it is to be humble when engaging with the stakeholders and the target population to remove barriers and hierarchies and acquire trust.
Indeed, stakeholders (i.e., people or organizations who have an interest in the public engagement activity or who affect or are affected by its outcomes [58]) can and should serve as consultants. The research group might even want to establish a stakeholder advisory panel for this purpose [57]. While stakeholders do not participate in the activity like the target group does, they are largely involved in its making and play an important role in the success of the project. The stakeholders often act as "facilitators", building links between the research group and the target population and helping to define the needs of the target group. Therefore, stakeholders should be chosen carefully. Correct identification of the target group beforehand can give a useful indication of the potential stakeholders. In some cases, it might be useful to involve not only the stakeholders whose interests are directly linked to the topic of the intervention but also other people, such as investors, collaborators, and volunteers (e.g., funders, charities, professional bodies) who can provide precious resources, such as time, expertise, or funds, as well as original thoughts and an outside point of view [59].
To better understand the problems and actual needs of the target group, which is crucial for the success of the project, the research group should directly address the target group and stakeholders when gathering both qualitative and quantitative information while of course adhering to ethical guidelines and ensuring confidentiality and anonymity [60].
A preliminary investigation into the background knowledge, attitudes, preferences, and experiences of the target audience can help in this endeavor. Further, any doubts that might have emerged during the first steps of the planning and design phase are likely to be resolved with such an investigation. The type of investigation (e.g., (semi)structured interviews, focus groups, surveys, online polls) depends also on whether it is suitable for the target group [48].
The type of event or activity should be informed by the specific characteristics of the target group (e.g., age and social background). Moreover, the results of the preliminary investigation can also be useful for choosing the activity. There are various ways to engage with the participants actively and productively, such as through games, writing, science exhibitions and performances, digital platforms, podcasts, and videos.
Then, the activity needs to be carefully planned, considering logistical aspects (e.g., location, equipment, budget, and internal and external funding) and ethical issues (e.g., safety, anonymity, confidentiality, and maintenance of professional integrity) [60]. Moreover, the content that will be conveyed must be determined. In all of this, the researchers should strive toward designing an intervention that is accessible, enjoyable, and enriching [61]. To ensure that the intervention is attractive, it might be useful to collect feedback from some people that are part of the target audience [62].
Another issue that needs to be thought through clearly is the budget (i.e., expenses and any budget that is already allocated and therefore available). Creating an itemized budget is useful for gaining a clear picture of the implementation cost items of the projects (e.g., reimbursements, acquisition of equipment, production of promotional material, and costs of the venue) [63]. While funding is often difficult to secure, it is worth watching out for options for internal (e.g., funding by the university) and external funding (e.g., funding by external agencies and non-profit organizations).
Finally, the identification of specific indicators that will describe its immediate-, medium- and long-term impact throughout and after the intervention should also already take place in this early phase. It is pivotal that the identified impact indicators, in particular, the medium- and long-term impact indicators, are objective and SMART (i.e., specific, measurable, achievable, relevant, and time-based). Figure 3 provides an overview of such indicators. As illustrated, indicators that are immediately describable encompass, for example, the number of main beneficiaries reached (e.g., members of the target population, individuals inside and outside of the academic setting), social media data (e.g., number of shares, likes, mentions, followers, hashtag usage, and URL clicks) [64], the efficacy of the intervention rated by the target population, and the learning outcomes. Regarding the latter, the learning outcomes for the target group include different aspects, namely, enhanced knowledge and understanding, skills (e.g., intellectual skills, social skills, communication skills, and physical skills), attitudes and values (e.g., feelings, increased capacity for tolerance, and increased motivation), enjoyment, inspiration, creativity, behavior, and progression (e.g., reported actions and changes in lifestyles) [65].
While an assessment of the medium-and long-term impact might not be needed for or be applicable to every type of public engagement intervention, it can in many cases be very useful and informative.
Medium-term impact indicators include the dissemination of findings to the public (e.g., number of dissemination events and webinars and number of participants, newspaper articles, and radio shows) and the academic community (e.g., number of publications in national and international peer-reviewed journals, number of citations, and number of downloads of the publications). Finally, there are indicators that are usually only describable in the long run (e.g., the adoption of the public engagement intervention by other universities, educational institutions, public and/or charity organizations, and the influence of new public policies). Obviously, the different categories (i.e., immediate-, medium-, and long-term) cannot always be strictly separated and smooth transitions are possible. For instance, in some cases, the overall economic value of external financing might be determinable only in the medium-or long-term and not immediately after the completion of the intervention.
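For internal documentation, the indicators described above could be kept in a simple structure grouped by the time horizon in which they become measurable; the sketch below paraphrases the examples given in the text, and the field names and grouping are illustrative rather than prescriptive.

```python
# Impact indicators grouped by the time horizon in which they become measurable,
# paraphrasing the examples given in the text; names and grouping are illustrative.
impact_indicators = {
    "immediate": [
        "number of main beneficiaries reached",
        "social media data (shares, likes, mentions, followers, hashtag usage, URL clicks)",
        "efficacy of the intervention as rated by the target population",
        "learning outcomes (knowledge, skills, attitudes and values, enjoyment, behaviour)",
    ],
    "medium_term": [
        "dissemination to the public (events, webinars, newspaper articles, radio shows)",
        "dissemination to academia (publications, citations, downloads)",
    ],
    "long_term": [
        "adoption of the intervention by other institutions",
        "influence on new public policies",
    ],
}

for horizon, indicators in impact_indicators.items():
    print(horizon, "->", len(indicators), "indicators")
```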
The impact of the public engagement intervention can be described not only in terms of time but also in terms of affected areas. As defined by the Higher Education Funding Council for England, an impact is "an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life beyond academia" [57] (p. 28). Figure 4 portrays these areas of positive impacts of public engagement interventions and offers examples of each area. While economic impacts, impacts on health and wellbeing, impacts on policy, and environmental impacts are rather self-explanatory, cultural impacts and social impacts are more difficult to describe. Although seemingly like cultural impacts (i.e., changes in predominant values, opinions, and behavior patterns benefitting organizations, social groups, or society in general [57]), social impacts regard any changes that improve social injustices and overcome related barriers [66,67].
Public Engagement Model Canvas
The main elements of the proposed intervention that are discussed in the planning and design phase (i.e., topic, target population, stakeholders, needs of the target population, objectives of the public engagement intervention, type of event, preliminary investigation, budget, and measurement of impacts) can be gradually inserted into the so-called Public Engagement Canvas. Subdivided into several quadrants, the canvas helps in the creative development and realization of public engagement activities by providing a comprehensive and visual overview (Figure 5).
The canvas can be used as helpful guidance and a reference point not only in the initial planning and design phase but also in the following phases. Namely, after a careful rereading of the inserted elements and a check for congruency, the research group can share the canvas, along with a synthesized report, with the involved stakeholders and external bodies, or archive it internally. It can, if needed, always be modified and updated with any new, relevant information. In some cases (e.g., when there are two different target groups), it is preferable to work on two canvasses in parallel.
The canvas is based on service design and design thinking theories, which promote "outside the box" thinking and foster co-creation processes [28][29][30]. Design thinking, which is composed of different phases (i.e., inspiration, ideation, implementation), aims to tackle problems in novel and inventive ways by applying the principles of empathizing with human needs, defining the problem, ideating, prototyping, testing possible solutions [26,27,29], and to offer immediately available and easily applicable tools for problem-solving.
Aside from the Public Engagement Model Canvas, it is also recommended to prepare a Gantt chart to track the progress of the project. Finally, a dissemination plan should already be drafted in this first phase that identifies dissemination targets (e.g., target group(s) of the intervention, involved stakeholders, and demographic subgroups) and channels (e.g., mailing lists, a dedicated website, and social media). The plan will then be refined and deepened and/or expanded in the immediate impact assessment phase.
Implementation Phase
From this phase on, the research group should repeatedly use the Public Engagement Model Canvas as a reference point. If any uncertainties regarding the intervention remain, an additional preliminary investigation, like the one conducted in the planning and design phase, can be performed.
Before launching the event, the research group must thoroughly organize it following the plan established in the planning and design phase. Depending on the activity, materials and tools might have to be prepared, equipment sorted, and venues booked.
The involved stakeholders, as well as the wider public, should be regularly briefed and updated about the work progress through the previously defined dissemination channels [67]. Sharing your work with the public at large early on can have several advantages, as it can, for instance, enhance awareness of the project, and thus the chance of additional funding, and provide learning opportunities and inspiration for other academics [67].
Immediate Impact Assessment Phase
To collect evidence of the immediate impact, including the learning outcomes and satisfaction measures, as described above, the research group should design and conduct a post-intervention evaluation with the target population using, for instance, surveys and/or semi-structured interviews. Specific questions addressed to the involved stakeholders regarding, for instance, their experiences, satisfaction with the public engagement intervention, and potential barriers and challenges, can also yield interesting additional results and new, diverse perspectives.
Additionally, certain impact indicators that are not directly collected from the target population and its stakeholders might need to be assessed. The subsequent synthesis of all immediate impact indicators serves different purposes, namely, for internal documentation and dissemination activities, and as a basis for a scientific publication. Then, following the refined dissemination plan, dissemination activities and events need to be organized. Reaching a broad audience through such events as well as through social media is likely to result in a greater medium-and long-term impact of the intervention [37]. Dissemination events are also great opportunities to make those involved in the public engagement intervention feel valued and show that their efforts have been appreciated [39]. The research group should also ensure that the achievements of the target population are sufficiently acknowledged during these events.
Medium-and Long-Term Impact Assessment Phase
Between the third and this fourth and last phase, a certain time interval will pass, since the respective impact indicators (e.g., recommendation of the intervention by public and/or private agencies and influence on public policy) are, as outlined above, only measurable in the medium and long run. In addition to the post-intervention evaluation during the preceding phase, a final meeting with stakeholders and members of the target population can prove valuable to discuss the significance and reach of the generated impact [68] and to identify areas for improvement. Moreover, it is a good opportunity to appreciate the value of the work of all involved partners [54].
In the last step, the research group might decide to write a final report that is addressed to stakeholders, the target group, university bodies, and/or the public that discusses the achieved medium-and long-term impacts and share the results of the report on traditional and/or digital channels. These insights can inspire others to implement this proposed public engagement intervention or can help colleagues who are planning other types of public engagement activities to maximize their impact [54,55]. The research group should avoid breaking off relations with the target population and the stakeholders at the end of the project. Staying connected not only prevents the involved persons from feeling suddenly let down by the project organizers but also encourages ongoing knowledge exchange [57].
Critical Issues and Limitations
During project development, researchers should also always keep in mind several critical issues that they might encounter along the way. Indeed, a survey by the University of Reading, UK, detected several main challenges and barriers to public engagement activities [40]. The authors listed as potential challenges, inter alia, the availability and continuity of funding, public engagement (e.g., difficulty of maintaining long-term public engagement and identification of target audiences), allocation of time to the project, lack of recognition of public engagement work by the academic community and government authorities, poor communication between stakeholders, lack of best practices and examples of similar projects, and management of expectations of involved researchers and stakeholders [40].
We are aware that the proposed framework and the Public Engagement Model Canvas may have certain weaknesses that researchers should be aware of in the project design. One concern regarding the framework might be that it is only indicative and overall too general. However, given the variety of public engagement activities, we wanted to ensure that it is flexibly applicable to different types of events. Regarding the Public Engagement Model Canvas, it might appear too static since it does not reflect changes in strategy and the evolvement of the intervention [69]. To address this issue, the research group should modify the canvas according to demand, which is a procedural step that certainly requires an additional time investment. Finally, as highlighted in the literature [57,70], authors run the risk of overestimating the impact and reach of their intervention and of not clearly articulating the actual benefits and identifying the beneficiaries.
Real-World Example: Debunker-A Public Engagement Intervention to Foster Critical Thinking and Scientific Literacy in Italian Youth
In the following, we present Debunker, a newly developed public engagement intervention for young people that serves as a real-world example, and briefly describe the completed planning and design phase and outline the upcoming phases of implementation, impact assessment, and dissemination.
Planning and Design Phase
After brainstorming ideas for creative and valuable public engagement interventions and narrowing them down to a few options, our research team agreed on the topic, aim, and type of the intervention; defined the target population and the stakeholders; created an itemized budget; and identified several impact indicators. Along with these considerations, we performed SWOT and PEST analyses, analyzing the various smaller and bigger factors that could influence our public engagement intervention. Subsequently, we started to carefully review the relevant literature and again consulted our partner, the Verona-based communication agency Forest Hand, which offered an outside perspective and gave us critical feedback on our proposal.
As a next step, by administering an ad hoc survey via appropriate communication channels (e.g., WhatsApp, mailing lists, and social media), we will identify the scienceand medicine-related topics (e.g., verification of scientific soundness of newspaper articles, vaccine hesitancy, climate change, and body image on social media) that are of particular interest for our target population, which was recruited from high schools and sports clubs. Before proceeding to outline the next operational steps, the research underpinning Debunker and our main aim are briefly described.
Underpinning Research and Main Aim of Debunker
The COVID-19 pandemic and its accompanying "infodemic" has shed a bright light on matters of pressing concern of our time [71][72][73][74][75], that is, a significant and increasing difficulty by large parts of the population to correctly distinguish facts, misinformation, and disinformation, which, for some people, can culminate in openly embracing science denialism [76][77][78][79][80]. This phenomenon is amplified by a gradual loss of trust in official information sources in favor of a "tailor-made" selection of informal sources of information that tend to only confirm one's own beliefs and suspicions [75,[81][82][83]. Academic institutions play a crucial role in addressing this issue. Indeed, universities following the third mission can help to ensure knowledge transfer and to encourage dialogue between academia and the public [2,4]. Considering the constant exposure of young people to digital media with its rapidly changing online news cycles and potentially harmful social media algorithms, as recently highlighted by the Wall Street Journal Investigation "The Facebook Files" [83][84][85][86][87], it is particularly important to provide young people with appropriate instruments to better face the complexity of digital content and to foster, by appealing to their intrinsic curiosity for the world [88], an open-minded and critical approach to science and knowledge. Therefore, we aimed to design and implement a public engagement intervention to foster critical thinking and scientific literacy in Italian youth, actively engaging our target group in the development of the intervention.
Public Engagement Model Canvas
Following the proposed framework, we prepared a Public Engagement Model Canvas to easily visualize the main elements of Debunker (see Figure 6). However, it represents only a first, preliminary version since, as stated above, we have not yet assessed the needs of the target group. After designing the canvas and critically revising its content, we shared it, together with a short report, with university bodies and all stakeholders involved in Debunker (i.e., university researchers, the communication agency "Forest Hand", and contact persons from schools and sports clubs). After the needs assessment of the target group, we will update the canvas accordingly. Further, we prepared the first drafts of the Gantt chart and the dissemination plan that might be revised at a later stage of the project.
Implementation Phase
Based on the outcomes of the preliminary investigation, and thus closely tailored to young people's priorities and needs, we will create four videos that will be shown to different age groups (ages 11-14, 15-17, and 18-21).
During this phase, we will brief the stakeholders on a regular basis and start sharing updates of our work on traditional (e.g., press releases) and/or digital channels (e.g., dedicated website of our Debunker project, Twitter, Instagram, Facebook, and LinkedIn).
Immediate Impact Assessment Phase
In this phase, we will employ a post-survey to assess the intervention impact and its different dimensions.
To measure the nature and extent of the impact of the intervention, several impact indicators identified in the planning and design phase (e.g., participants' satisfaction, acquired knowledge, improved awareness, and the degree to which participants' needs were considered) will be assessed by the post-survey. The post-survey data will then be synthesized by the research team for dissemination activities and events.
To ensure a broad reach of the intervention and the related findings, we will refine and expand the dissemination plan drafted in the planning and design phase while also following the previously identified impact indicators. Dissemination activities will target young people, stakeholders, the public, and the scientific and healthcare community. A dissemination event addressing the public and, in particular, adolescents, young adults, school representatives, teachers, coaches, and journalists, will be organized to present Debunker and to discuss the related findings and implications. Brochures providing an overview of the intervention will be distributed during this event. Further, all members of the target population will receive a certificate as a symbol of their active participation, commitment, and achievements in scientific literacy.
Further, we aim to publish articles in local newspapers describing our intervention and its outcomes. Academics and healthcare providers will be reached through scientific publications in national and international peer-reviewed journals, as well as oral presentations and posters at national and international scientific conferences. A dedicated website and social media posts (Twitter, Facebook, Instagram, etc.) will address both the scientific community and the public.
Medium-and Long-Term Impact Assessment Phase
We also expect positive intervention effects in the medium-to-long term that cannot be measured immediately after the intervention but only in successive phases. Examples of such impact indicators include public dissemination of the findings, citations in the scientific literature, likes and shares on social media, adoption of the intervention by other universities and educational institutions, and the launch of new projects and interventions inspired by Debunker. Finally, a future paper will provide a detailed description of our public engagement intervention and report the results of the preliminary investigation and the post-survey, as well as the assessment of the immediate-, medium-, and long-term impacts.
Expected Challenges during the Implementation of Debunker
As mentioned above, one of the challenges in the field of public engagement is the appropriate allocation of time to the project. Thus, we will have to be mindful of the fact that the preparation of videos requires a lot of time and focus. Our team will also have to be aware of the potential risks that might arise when using the Public Engagement Model Canvas, as mentioned above. For instance, because of the canvas's relatively static structure [69], it will be important to frequently check whether it is still up to date and, if not, modify it and brief the involved stakeholders accordingly. Moreover, as videos may promote less interaction with the audience than other types of public engagement interventions [89], we will have to make sure to regularly collect feedback from stakeholders and participants and organize in-person events to connect and share insights. This will also help with maintaining the long-term engagement of our target group, which was one of the challenges pointed out by the University of Reading [40].
Conclusions
We believe that the proposed framework and canvas serve as a beneficial reference for academic centers being committed to proactively engaging with society. Indeed, research groups aiming to implement different kinds of public engagement interventions are guided through all different process stages, from planning/design to the actual implementation to the assessment of the impact and dissemination of the outcomes.
The presented systematic method, which stands out due to its comprehensiveness, is expected to maximize the significance and reach of the impact in different areas and ultimately lead to improved reciprocal listening and mutual understanding between academia and the wider public. | 2022-10-19T15:52:34.338Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "9880fcc1b8bdb7f2e2bade838c55e90488bc222e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/20/13357/pdf?version=1665913826",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c782c4ab1b25862b6aa6a5e268f9bf206a2b9fa2",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
145463416 | pes2o/s2orc | v3-fos-license | THE BIG FIVE PERSONALITY DIMENSIONS AND JOB PERFORMANCE
The objective of this research was to determine the relationship between personality dimensions and job performance. A cross-sectional survey design was used. The study population consisted of 159 employees of a pharmaceutical company. The NEO-Personality Inventory – Revised and Performance Appraisal Questionnaire were used as measuring instruments. The results showed that Emotional Stability, Extraversion, Openness to Experience and Conscientiousness were related to task performance and creativity. Three personality dimensions, namely Emotional Stability, Openness to Experience and Agreeableness, explained 28% of the variance in participants’ management performance.
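The 28% figure reported above corresponds to the squared multiple correlation (R²) from regressing management performance on three personality dimensions. The sketch below shows, on purely synthetic data, how such an R² is computed; it does not use or reproduce the study's data, and the variable names are placeholders.

```python
# Synthetic illustration of how a multiple-regression R-squared (e.g., the 28%
# reported for three personality dimensions) is obtained. Data are random, not
# the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 159                                        # sample size matching the study's cohort
X = rng.normal(size=(n, 3))                    # stand-ins for three personality dimensions
beta = np.array([0.4, 0.3, 0.2])
y = X @ beta + rng.normal(scale=1.0, size=n)   # stand-in for management performance

X1 = np.column_stack([np.ones(n), X])          # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)  # ordinary least squares fit
y_hat = X1 @ coef
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r_squared, 2))                     # proportion of variance explained
```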
The relationship between personality and job performance has been a frequently studied topic in industrial psychology in the past century (Barrick, Mount & Judge, 2001). Job performance is a multi-dimensional construct which indicates how well employees perform their tasks, the initiative they take and the resourcefulness they show in solving problems. Furthermore, it indicates the extent to which they complete tasks, the way they utilise their available resources and the time and energy they spend on their tasks (Boshoff & Arnolds, 1995; Schepers, 1994).
Job performance could be affected by situational factors, such as the characteristics of the job, the organisation and co-workers (Hackman & Oldham, 1980; Strümpfer, Danana, Gouws & Viviers, 1998), and by dispositional factors. Dispositional variables can be described as personality characteristics, needs, attitudes, preferences and motives that result in a tendency to react to situations in a predetermined (predisposed) manner (House, Shane & Herrold, 1996). Job performance is influenced by aptitude, need for achievement, self-regard, locus of control, affective temperament and the interaction between these constructs (Boshoff & Arnolds, 1995; Wright, Kacmar, McMahan & DeLeeuw, 1995).
Traditionally industrial psychologists have questioned the usefulness of personality measures in predicting job-related criteria (such as job performance), because of pessimistic conclusions of early reviews of the topic (e.g. Guion & Gottier, 1965) and concerns that most personality measures are faked (Reilly & Warech, 1993). However, evidence has suggested that personality measures are valid predictors of diverse job-related criteria (Goldberg, 1993). Unlike many measures of cognitive ability, personality measures typically do not have an adverse impact on disadvantaged employees (Hogan, Hogan & Roberts, 1996) and thus can enhance fairness in personnel decisions. Recent research showed that personality dimensions are related to job performance (Rosse, Stecher, Miller & Levin, 1998; Wright et al., 1995).
In this research the relationship between personality dispositions and job performance is studied from a trait perspective, and more specifically the five-factor model of personality dimensions as conceptualised by Costa and McCrae (1992). The five-factor model of personality represents a structure of traits, developed and elaborated over the last five decades. Factors are defined by groups of intercorrelated traits, which are referred to as facets (McCrae & Costa, 1997). The five-factor model of personality as measured by the Neo-Personality Inventory Revised (NEO-PI-R) includes Neuroticism, Extraversion, Openness, Agreeableness and Conscientiousness (McCrae & Costa, 1997). The reason for deciding on this conceptualisation is that the validity of broad personality dimensions is superior to that of narrowly defined dimensions (Ashton, 1998).
The results of various studies and meta-analyses (Barrick & Mount, 1991; Hough, Eaton, Dunnette, Kamp & McCloy, 1990; Salgado, 1997; Tett, Jackson & Rothstein, 1991; Vinchur, Schippmann, Sweizer & Roth, 1998) showed that various big five personality dimensions are related to job performance. Barrick and Mount (1991) and Salgado (1997) found that Conscientiousness is one of the best predictors of job performance in the United States of America and Europe. De Fruyt and Mervielde (1999), Tokar and Subich (1997), Schneider (1999) and Vinchur et al. (1998) concluded that Extraversion and Conscientiousness predict job performance in various occupations. However, these studies have all been carried out elsewhere in the world and in other contexts. In South Africa, the use of psychometric tests in studies of job performance is still a controversial issue. Research regarding the relationship between personality dimensions and job performance is therefore necessary. If relationships between personality dimensions and job performance are found, the results could be used for recruitment, selection and career development purposes.
The objective of this research was to determine the relationship between personality dimensions and job performance of employees in a pharmaceutical group.
The role of personality dimensions in job performance
Researchers agree that almost all personality measures could be categorised according to the five-factor model of personality (also referred to as the "big five" personality dimensions) (Goldberg, 1990; Hogan et al., 1996). The five personality dimensions seem to be relevant to different cultures (McCrae & Costa, 1997) and have been recovered consistently in factor analyses of peer- and self-ratings of trait descriptors involving diverse conditions, samples, and factor extraction and rotation methods (Costa & McCrae, 1988). Research also showed that the five personality factors have a genetic basis (Digman, 1989) and that they are probably inherited (Jang, Livesley & Vernon, 1996). The five dimensions of the five-factor model of personality are Neuroticism, Extraversion, Openness to Experience, Agreeableness and Conscientiousness.
Neuroticism. Neuroticism is a dimension of normal personality indicating the general tendency to experience negative affects such as fear, sadness, embarrassment, anger, guilt and disgust. High scorers may be at risk of some kinds of psychiatric problems. A high Neuroticism score indicates that a person is prone to having irrational ideas, being less able to control impulses, and coping poorly with stress. A low Neuroticism score is indicative of emotional stability. These people are usually calm, even-tempered, relaxed and able to face stressful situations without becoming upset (Hough et al., 1990). Hörmann and Maschke (1996) found that Neuroticism is a predictor of performance in various occupations. Dunn, Mount, Barrick and Ones (1995) showed that emotional stability (the opposite of Neuroticism) is the second most important characteristic that affects the employability of candidates. In a recent study Judge, Higgins, Thoresen and Barrick (1999) found that Neuroticism is inversely related to job performance. However, according to Salgado (1997), Neuroticism predicts job performance in certain circumstances.
Extraversion. Extraversion includes traits such as sociability, assertiveness, activity and talkativeness. Extraverts are energetic and optimistic. Introverts are reserved rather than unfriendly, independent rather than followers, even-paced rather than sluggish. Extraversion is characterised by positive feelings and experiences and is therefore seen as a positive affect (Clark & Watson, 1991). It was found that Extraversion is a valid predictor of performance in jobs characterised by social interaction, such as sales personnel and managers (Barrick & Mount, 1991; Bing & Lounsbury, 2000; Lowery & Krilowicz, 1994; Vinchur et al., 1998). Johnson (1997) found a positive relationship between Extraversion and job performance of police personnel, and explained this relationship in terms of the high level of interaction in the police service.
Openness to Experience. Openness to Experience includes active imagination, aesthetic sensitivity, attentiveness to inner feelings, a preference for variety, intellectual curiosity and independence of judgement. People scoring low on Openness tend to be conventional in behaviour and conservative in outlook. They prefer the familiar to the novel, and their emotional responses are somewhat muted. People scoring high on Openness tend to be unconventional, willing to question authority and prepared to entertain new ethical, social and political ideas. Open individuals are curious about both inner and outer worlds, and their lives are experientially richer. They are willing to entertain novel ideas and unconventional values, and they experience both positive and negative emotions more keenly than do closed individuals. Research has shown that Openness to Experience is related to success in consulting (Hamilton, 1988), training (Barrick & Mount, 1991; Vinchur et al., 1998) and adapting to change (Horton, 1992; Raudsepp, 1990). In contrast, Johnson (1997) and Hayes, Roehm and Castellano (1994) found that successful employees (compared with unsuccessful employees) obtained significantly lower scores on Openness. Tett et al. (1991) reported that Openness to Experience is not a valid predictor of job performance. A possible explanation for the contradictory results regarding the relationship between Openness to Experience and job performance is that different jobs have different requirements.
Agreeableness. An agreeable person is fundamentally altruistic, sympathetic to others and eager to help them, and in return believes that others will be equally helpful. The disagreeable/antagonistic person is egocentric, sceptical of others' intentions, and competitive rather than co-operative.
According to Tett et al. (1991), Agreeableness is a significant predictor of job performance. Salgado (1997) found that Agreeableness is related to training success. The co-operative nature of agreeable individuals may lead to success in occupations where teamwork and customer service are relevant (Judge et al., 1999). Conscientiousness. Conscientiousness refers to self-control and the active process of planning, organising and carrying out tasks (Barrick & Mount, 1993). The conscientious person is purposeful, strong-willed and determined. Conscientiousness is manifested in achievement orientation (hardworking and persistent), dependability (responsible and careful) and orderliness (planful and organised). On the negative side, high Conscientiousness may lead to annoying fastidiousness, compulsive neatness or workaholic behaviour. Low scorers may not necessarily lack moral principles, but they are less exacting in applying them. Borman, White, Pulakos and Oppler (1991) and Hough et al. (1990) found a correlation of 0,80 between reliability (an aspect of Conscientiousness) and job performance. Various researchers (Barrick & Mount, 1991; Barrick, Mount & Strauss, 1993; Frink & Ferris, 1999; Ones & Viswesvaran, 1997; Sackett & Wannek, 1996) reported significant correlations between Conscientiousness and job performance. According to Sackett and Wannek (1996), the relationship between Conscientiousness and job performance could be attributed to the conceptual relationship between Conscientiousness and integrity. Furthermore, autonomy and goal setting influence the relationship between Conscientiousness and job performance (Barrick & Mount, 1993; Barrick et al., 1993).
To the lay person it is a self-evident fact that personality factors play an important part in job performance. Yet the psychological literature in this regard is equivocal. Schmitt, Gooding, Noe and Kirsch (1984) found in a meta-analysis of validation studies of personality measures an average validity coefficient of r = 0,21. However, Barrick and Mount (1991) concluded that there are grounds for optimism concerning the use of standard personality tests to predict performance of employees. Hayes et al. (1994) found that supervisor ratings of specific performance criteria and overall job effectiveness were related positively to Conscientiousness and inversely to Openness and Extraversion in a sample of automobile machine operators. In a sample of sewing machine operators, Krilowicz and Lowery (1996) found significant positive relations between operator productivity and traits corresponding closely with Conscientiousness and Extraversion. Hörmann and Maschke (1996) found that personality variables, especially those reflecting Neuroticism, predicted variance in pilot performance beyond that explained by flying experience, age and grade in a simulator check flight. Substandard pilots were more neurotic than successful pilots. In a sample of nursing service employees, Day and Bedeian (1995) found that the more similar in Agreeableness employees were to their co-workers, the more positive supervisors' ratings of performance were. Salgado (1997) conducted a meta-analysis of the five-factor personality dimensions in relation to performance for three criteria (i.e., supervisory ratings, training ratings and personnel data) and for five occupational groups using 36 validity studies conducted in Europe. Results indicated that Conscientiousness and Emotional Stability were valid predictors for all performance criteria and for most occupational groups. Extraversion predicted manager and police performance, and Openness to Experience predicted police and skilled labour performance.
Because items on many personality inventories are transparent, and thus easily faked, researchers are often concerned about the potential effect of response distortion on the prediction of performance from personality measures. However, Ones, Viswesvaran and Reiss (1996) found that social desirability had no effect on the predictive validity of the big five personality dimensions. Furthermore, Barrick and Mount (1996) reported that Conscientiousness and Emotional Stability (i.e. low Neuroticism) positively predicted supervisor performance ratings for truck drivers and that, when adjusted for social desirability, the validity coefficients were not attenuated significantly.
Several studies reported research evidence suggesting that personality is related differently to different dimensions of job performance. Using a sample of hotel workers, Stewart and Carson (1995) related Conscientiousness, Extraversion and Agreeableness to three different performance variables (i.e. citizenship, dependability and work output) and found significant validity coefficients for Conscientiousness and Extraversion, but for different sets of criteria. Conscientiousness positively predicted dependability and work output, and Extraversion inversely predicted citizenship and dependability.
METHOD
Research design
A survey design was used to achieve the research objectives. The specific design was the cross-sectional design, by means of which a sample is drawn from a population at a particular point in time (Shaughnessy & Zechmeister, 1997).
Sample
The sample includes employees of a corporate pharmacy group with 14 retail and 16 hospital pharmacies in the North West Province, Free State, Mpumalanga and Gauteng, as well as a head office (N = 159). The total population of pharmacists (n = 59) and non-pharmacists (n = 100) was included in the empirical study. All pharmacists had a B.Pharm. degree or a Diploma in Pharmacy, while the qualifications of non-pharmacists varied from Grade 10 to a master's degree. About 57% of the sample had some form of post-school education. The total population of employees participated in the research. Approximately 83% of the sample consisted of females. The ages of the participants varied between 18 and 58 years, with 53% in the age group between 21 and 30. A total of 57,2% of the participants were married.
Measuring instruments
The NEO Personality Inventory Revised (NEO-PI-R) (Costa & McCrae, 1992) was used to measure the personality of individuals, based on the five-factor model of personality, which includes the dimensions of Extraversion, Neuroticism, Agreeableness, Openness to Experience and Conscientiousness. The five personality dimensions are each divided into six facets. The NEO-PI-R has 240 items (Costa & McCrae, 1992). The Cronbach alpha coefficients of the personality dimensions vary from 0,86 (Openness) to 0,92 (Neuroticism), and those of the personality facets from 0,56 (Tender-minded) to 0,81 (Depression). Costa and McCrae (1992) report test-retest reliability coefficients (over six years) for Extraversion, Neuroticism and Openness varying from 0,68 to 0,83 and for Agreeableness and Conscientiousness (over three years) of 0,63 and 0,79 respectively. Costa and McCrae (1992) showed construct validity for the NEO-PI-R for different gender, race and age groups.
The Performance Appraisal Questionnaire (PAQ) (Schepers, 1994) was used to measure pharmacists' job performance. The PAQ consists of 30 items which measure three scales, namely Performance, Creativity and Management skills. Acceptable Cronbach alpha coefficients were found for the questionnaire. Supervisor ratings (on a 9-point scale) of the performance of employees were used. All supervisors had undergone a half-day intensive rater-training course to ensure that they were aware of and able to avoid common pitfalls. The scales of the PAQ have acceptable alpha coefficients (Schepers, 1994). Construct validity of the PAQ is demonstrated by the fact that factor loadings between 0,41 and 0,98 were obtained (Schepers, 1994).
Statistical analysis
The statistical analysis was carried out by means of the SAS program (SAS Institute, 1996). Descriptive statistics (means, standard deviations, skewness and kurtosis) were used to analyse the results. Cronbach alpha coefficients and inter-item correlations were used to assess the internal consistency of the measuring instruments (Clark & Watson, 1995). Coefficient alpha conveys important information regarding the proportion of error variance contained in a scale. According to Clark and Watson (1995), the average inter-item correlation coefficient (which is a straightforward measure of internal consistency) is a useful index to supplement information supplied by coefficient alpha. However, unidimensionality of a scale cannot be ensured simply by focusing on the mean inter-item correlation - it is necessary to examine the range and distribution of these correlations as well.
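As a concrete illustration of these internal-consistency checks, the sketch below (Python, with hypothetical item scores rather than the study data) computes coefficient alpha and the mean and range of the inter-item correlations for one scale; it is only an illustrative sketch, not the SAS code used in the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def inter_item_summary(items: np.ndarray):
    """Mean, minimum and maximum of the off-diagonal inter-item correlations."""
    r = np.corrcoef(items, rowvar=False)
    off_diag = r[np.triu_indices_from(r, k=1)]
    return off_diag.mean(), off_diag.min(), off_diag.max()

# Hypothetical data: 159 respondents answering 8 items of one scale.
rng = np.random.default_rng(0)
scores = rng.integers(0, 5, size=(159, 8)).astype(float)
print(cronbach_alpha(scores))
print(inter_item_summary(scores))
```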
Pearson product-moment correlation coefficients were used to specify the relationships between the variables. Because a nonprobability sample was used in this research, effect sizes (rather than inferential statistics) were used to decide on the significance of the findings. A cut-off point of 0,30 (medium effect, Cohen, 1988) was set for the practical significance of correlation coefficients. Canonical correlation was used to determine the relationships between the personality dimensions and the dimensions of job performance. The goal of canonical correlation is to analyse the relationship between two sets of variables (Tabachnick & Fidell, 2001). Canonical correlation is considered a descriptive technique rather than a hypothesis-testing procedure.
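For readers who want to reproduce the canonical-correlation step outside SAS CANCORR, a minimal sketch with scikit-learn is shown below; the variable names and the random input data are placeholders, not the study data, and the canonical correlations are obtained as the correlations between the paired canonical variates.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
personality = rng.normal(size=(159, 5))   # Neuroticism ... Conscientiousness (hypothetical)
performance = rng.normal(size=(159, 2))   # Task Performance, Creativity (hypothetical)

cca = CCA(n_components=2)
cca.fit(personality, performance)
U, V = cca.transform(personality, performance)

# Canonical correlations: correlation between each pair of canonical variates.
canonical_r = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(2)]
print(canonical_r)
```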
A stepwise multiple regression analysis was conducted to determine the proportion of variance in Management Performance that is predicted by personality dimensions. The effect size (which indicates practical significance) in the case of multiple regression is represented by the formula f² = R²/(1 − R²) (Steyn, 1999). A cut-off point of 0,35 (large effect, Steyn, 1999) was set for the practical significance of f².
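The formula itself did not survive extraction, so the expression above assumes the standard Cohen (1988) definition of this effect size; that assumption is consistent with the values reported later (R² = 0,28 for the regression on Management, reported f² = 0,38):

f^2 = R^2 / (1 - R^2) = 0,28 / 0,72 ≈ 0,39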
RESULTS
The descriptive statistics of the PAQ for the sample are given in Table 1. Table 1 shows that above average scores were obtained on the three dimensions of the PAQ. Regarding skewness and kurtosis, it is clear that the results were somewhat skewed regarding Task Performance. This skewness may be attributed to the fact that poor performers on this dimension probably left the organisation. Scores on the other dimensions seem to be normally distributed. Table 1 shows that high Cronbach alpha coefficients were obtained for all the factors (Nunnally & Bernstein, 1994). The correlation coefficients between the items of scales (0,48 ≤ r ≤ 0,70) indicate that the items correlate too highly (Clark & Watson, 1995). However, this should be seen in the context of the specificity of the constructs that are measured.
Table 2 shows the descriptive statistics, Cronbach alpha coefficients and inter-item correlation coefficients of the NEO-PI-R. Table 2 shows that the participants (compared with American norms) measured average on the five personality dimensions.
Regarding skewness and kurtosis, the values in Table 2 show minor deviations from 0, an indication that the scores are relatively normally distributed. The Cronbach alpha coefficients for the five personality dimensions vary from 0,76 (Agreeableness) to 0,86 (Neuroticism). These alpha coefficients could be regarded as acceptable when they are compared with the cut-off point of 0,80 recommended by Nunnally and Bernstein (1994). The mean inter-item correlation coefficients of the personality dimensions vary from 0,36 to 0,49, which compare favourably with the range of 0,15 to 0,50 recommended by Clark and Watson (1995).
Table 3 shows the product-moment correlation coefficients between the NEO-PI-R and job performance. Table 3 shows practically significant correlation coefficients (of medium effect) between Management Performance on the one hand and Neuroticism (negative correlation), Openness to Experience and Agreeableness (both positive correlations). No practically significant correlation coefficients were found between personality dimensions on the one hand and Task Performance and Creativity on the other hand.
A canonical correlation was performed between a set of personality dimensions and two aspects of job performance, Task Performance and Creativity, using SAS CANCORR. Shown in the tables are correlations between the variables and canonical variates, standardised canonical variate coefficients, within-set variance accounted for by the canonical variate (percent of variance), redundancies and the canonical correlations. The results of the canonical analysis are shown in Table 4. The set of personality traits included Neuroticism, Extraversion, Openness to Experience, Agreeableness and Conscientiousness. The performance set included Task Performance and Creativity. The first canonical correlation was 0,38 (15% overlapping variance), and the second was 0,13 (2% overlapping variance). With both canonical correlations included, F(10, 298) = 2,76, p < 0,01. Subsequent F-tests were not statistically significant. The first pair of canonical variates, therefore, accounted for the significant relationships between the two sets of variables. Data on the first pair of canonical variates appear in Table 4. Total percentage of variance and total redundancy indicate that this pair of canonical variates was moderately related.
With a cut-off correlation of 0,30, the variables in the personality dimensions set that were correlated with the first canonical variate were Neuroticism, Extraversion, Openness to Experience and Conscientiousness. Among the performance variables, Task Performance and Creativity correlated with the first canonical variate. This pair of canonical variates indicates that emotional stability (low Neuroticism) (-0,65), Extraversion (0,51), Openness to Experience (0,75) and Conscientiousness (0,35) are associated with Task Performance (0,42) and Creativity (0,89).
The results of a stepwise regression analysis with the Big Five personality dimensions as independent variables and Management (as measured by the PAQ) are shown in Table 5.
DISCUSSION
Analysis of the product-moment correlations between personality dimensions, task performance and creativity showed that no practically significant relationships existed. However, the results of the canonical analysis showed that a combination of emotional stability (i.e. low Neuroticism), Extraversion, Openness to Experience and Conscientiousness explained about 15% of the variance in task performance and creativity.
It seems that employees who tend towards Neuroticism (i.e. who are prone to having irrational ideas, being less able to control impulses, and coping poorly with stress) perform more poorly and are less creative than those who are emotionally stable. This result confirms the findings of Hörmann and Maschke (1996), Dunn et al. (1995) and Judge et al. (1999). Furthermore, Extraversion was associated with task performance and creativity, probably because of the fact that extraverts tend to experience positive affect (Clark & Watson, 1991).
The results of the canonical analysis confirmed that Openness to Experience is related to task performance and creativity.
Employees who are open to experiences show an active imagination, aesthetic sensitivity, attentiveness to inner feelings and a preference for variety, all of which explain why they are rated higher on their performance and creativity at work. This result confirms the findings of researchers such as Horton (1992) and Raudsepp (1990). Conscientiousness was also associated with task performance and creativity, although the loading of Conscientiousness was relatively lower in the personality set. However, it makes sense that conscientious employees perform better compared to less conscientious employees (Barrick & Mount, 1991; Barrick et al., 1993; Borman et al., 1991; Hough et al., 1990).
Furthermore, personality dimensions were related to management performance. Emotional Stability, Openness to Experience and Agreeableness were practically significantly related to management performance. Managers who are emotionally stable, open to experience and agreeable tend to perform better than those who measured lower on these dimensions. The negative relationship between Neuroticism and managerial performance may be explained by the fact that managers who score high on Neuroticism are prone to having irrational ideas, are less able to control their impulses, and cope poorly with stress. The significant relationship between Openness to Experience and managerial performance could be explained by the fact that managers in the pharmaceutical company continuously have to adapt to changes (see Horton, 1992; Raudsepp, 1990) because the company is relatively young and has grown fast since it was established. The results show that personality dimensions predict 28% of the variance in managerial performance.
A possible explanation for the lack of relationships between personality dimensions and task performance is that the tasks of employees in the pharmaceutical organisation are well-defined, with relatively low autonomy allowed. According to Barrick (2001), personality dimensions are most likely to affect job performance in situations where autonomy is high.
This study had various limitations. Firstly, a predictive validity design was not used, which could have affected the magnitude of the correlation coefficients obtained. A disadvantage of this design is that poor performers have probably already resigned from the company. Secondly, the sample consisted largely of females, which implies that the results could not be generalised to males. Thirdly, the results cannot be generalised to other settings. Lastly, the research design does not allow one to determine the direction of the relationships obtained.
RECOMMENDATIONS
The results of this study confirm that the pharmaceutical company should consider the personality dimensions of their employees when predicting creativity and managerial performance during selection and career development.
However, more research is needed before these results are used to predict job performance because they were not obtained in a selection context. Furthermore, because relatively poor associations between personality dimensions and job performance were obtained, future research efforts should be directed at the effects of personality on performance through motivation.
The relationship between personality dimensions and job performance should be studied with larger samples and by using predictive validity designs in various South African organisations. The effects of cultural differences and language on the relationship between personality dimensions and job performance should also be studied.
Table 5 shows that personality dimensions predict 28% of the variance in Management (as measured by the PAQ). The multiple correlation of 0,48 is practically significant (large effect) (f² = 0,38). Table 6 shows that Openness to Experience and Agreeableness are the best predictors of performance in Management. | 2019-05-06T14:09:20.808Z | 2003-10-24T00:00:00.000 | {
"year": 2003,
"sha1": "3b9cd92ea8153ed6d733a3cf852b78c0bdc1a6cc",
"oa_license": "CCBY",
"oa_url": "https://sajip.co.za/index.php/sajip/article/download/88/84",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "88d7fbba1b98a474f4e07f35b91dd97cf95a7399",
"s2fieldsofstudy": [
"Psychology",
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
213067284 | pes2o/s2orc | v3-fos-license | The non-stationary case of the Maxwell-Garnett theory: growth of nanomaterials (2D gold flakes) in solution
The solution-based growth mechanism is a common process for nanomaterials. The Maxwell-Garnett theory (for light–matter interactions) describes the solution growth in an effective medium, homogenized by a mean electromagnetic field, which applies when materials are in a stationary phase. However, the charge transitions (inter- and intra-transitions) during the growth of nanomaterials lead to a non-stationary phase and are associated with time-dependent permittivity constant transitions (for nanomaterials). Therefore, time-independence in the standard Maxwell-Garnett theory is lost, resulting in time dependence, εi(t). This becomes important when the optical spectrum of a solution needs to be deconvoluted at different reaction times since each peak represents a specific charge/energy transfer with a specific permittivity constant. Based on this, we developed a time-resolved deconvolution approach, f(t) ∝ εi(t), which led us to identify the transitions (inter- and intra-transitions) with their dominated growth regimes. Two gold ion peaks were precisely measured (322 nm and 367 nm) for the inter-transition, and three different polyaniline oxidation states (PAOS) for the intra-transition, including A (372 nm), B (680 nm), and C (530 nm). In the initial reaction time regime (0–90 min), the permittivity constant of gold was found to be highly dependent on time, i.e. fE ∝ εi(t), since charge transfer takes place from the PAOS to gold ions (i.e. inter-transition leads to a reduction reaction). In the second time regime (90–180 min), the permittivity constant of gold changes as the material deforms from 3D to 2D (fS ∝ ε3D–2D), i.e. intra-transition (combined with thermal reduction). Our approach provides a new framework for the time-dependent modelling of (an)isotropic solutions of other nanomaterials and their syntheses.
SI-1: Metal-ion-ethylene glycol interaction:
To obtain more information regarding the 367 nm peak, we subtracted the Au-Cl spectra with ethylene glycol from the polymer solution (with Au-Cl & EG) and followed the change at 367 nm along the reaction. We note that a negligible change appears only after 120 min, while the 322 nm peak was decreasing rapidly [Ö. Dag, O. Samarskaya, N. Coombs, G. A. Ozin, J. Mater. Chem. 2003, 13, 328-334]. Thus, the major contributor to the 367 nm peak is the interaction of gold ions with ethylene glycol. The Au(III) species exists in the form of [AuCl4]-. The 322 nm peak is related to the Au-Cl ligand. In addition, when the oxidation of PANI finishes (t > 90 min), the thermal reduction occurring between the [AuCl4]- complex and ethylene glycol becomes the controlling mechanism.
As can be seen from SI-2, the obtained gold flakes show perfect crystallinity with [111] facets.
SI-3: The Mechanism
A summary of the mechanism is shown in figure SI-3. The two regimes and their relevant features are illustrated: (i) the two main regimes, with the first regime (0-90 min) divided into two phases (0-60 min and 60-90 min), and (ii) the kinetic rates and the PAOS along the different regimes.
SI-4. Error bars Calculations:
The error in the absorbance [as an example] was calculated by equation 1.
Where X1 and X2 represent the real value and the fitted value, respectively. The error between the fitted and real value for each time point separately gives the error percentage shown in Table S1. As shown, the errors are negligible and may indicate that our subtraction method can be used to properly deconvolute the UV-Vis spectra (Table SI2). This was applied to all the spectra along the reaction time. As an example, we show the error in the absorbance from 0 to 180 min. As shown, the error bars change with time; this is due to the change in the solution composition. With time, different shapes of gold appear, leading to increased light scattering inside the solution, and thus the error in the measurement increases. With the same principle, we checked the full width at half-maximum (fwhm) for all the PAOS and Au. We found that the fwhm was identical (±5%) for the B and C states of the PAOS over the reaction course (66 ± 3 nm), while the A form exhibited a smaller fwhm due to polymer homogeneity (48 ± 1 nm). Mass conservation: As shown in figure SI-3, the total polymer mass starts to decrease after 60 min due to the decomposition of polymer B, as discussed in the paper. The total mass was calculated by summing all the PAOS, since all the oxidized states have almost the same structure and molecular weight, which makes this approximation reasonable. Figure SI-5a shows separated seeds, which start to agglomerate in the second phase (SI-5b). Interestingly, in this phase we can see the PAOS (B and C) around the agglomerated nanoparticles (light blue color around the gold, which is in red). Subsequently, state C remains and leads to the flake structure, i.e. the nanoparticles decompose to produce 2D flakes (SI-5c).
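A minimal sketch of how such a point-by-point comparison between a measured and a fitted band, and the fwhm of a fitted Gaussian band, could be computed is given below (Python). The Gaussian band shape, the relative-error definition and the example numbers are assumptions for illustration only, not the authors' exact routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, amp, center, sigma):
    """Single Gaussian band used to model one deconvoluted UV-Vis peak."""
    return amp * np.exp(-(wl - center) ** 2 / (2 * sigma ** 2))

# Hypothetical measured band around 680 nm (PAOS state B) with a little noise.
wl = np.linspace(600, 760, 161)
measured = gaussian(wl, 0.85, 680, 28) + np.random.default_rng(2).normal(0, 0.01, wl.size)

popt, _ = curve_fit(gaussian, wl, measured, p0=[1.0, 680, 30])
fitted = gaussian(wl, *popt)

# Assumed relative-error definition: |measured - fitted| / measured * 100 (%),
# evaluated only where the band has appreciable absorbance.
mask = measured > 0.05
rel_error = np.abs(measured[mask] - fitted[mask]) / measured[mask] * 100

# FWHM of a Gaussian follows directly from its fitted sigma.
fwhm = 2 * np.sqrt(2 * np.log(2)) * popt[2]
print(f"mean error = {rel_error.mean():.2f} %, FWHM = {fwhm:.1f} nm")
```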
SI-5. Adding additional surfactant (Cetyltrimethylammonium bromide, CTAB) and replacing ethylene glycol with water:
In ethylene glycol (EG) solvent, the Au seeds form randomly even after complete oxidation of polyaniline. These seeds are not completely capped by polyaniline to form 2D Au microplates, and therefore Au particles with different shapes are formed. To study the effect of EG, we carried out the reactions in both EG and water solvents with CTAB (CTAB:HAuCl4 (2:1)). In both cases, CTAB was used to kinetically control the reaction, and aniline primarily acts as the reducing agent. Figure SI-5a shows that the sample had a small percentage (~5%) of particles in EG solvent. This indicates that the growth is not completely controlled by CTAB and polyaniline, due to the EG. On the other hand, in water, since there is no EG, the reaction depends directly on the aniline-assisted reduction of [AuCl4]-, and the growth is completely controlled by CTAB and oxidized aniline, which yielded 2D Au microplates with no Au particles (less than 1%). Similar results were also found for cetyltrimethylammonium chloride (CTAC). Thus, adding an additional surfactant and replacing EG facilitate the growth of more 2D Au plates with fewer or no Au particles. | 2022-05-29T11:21:24.214Z | 2019-12-04T00:00:00.000 | {
"year": 2019,
"sha1": "79db56d7333cce121ecea3cb423bffb3c7585b0c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1039/c9na00636b",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e100b6eea6d28c455a9f990dba2ae8d30974f395",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
237148782 | pes2o/s2orc | v3-fos-license | Urine Lipoarabinomannan Testing in Adults With Advanced Human Immunodeficiency Virus in a Trial of Empiric Tuberculosis Therapy
Abstract Background The urine lipoarabinomannan (LAM) antigen test is a tuberculosis (TB) diagnostic test with highest sensitivity in individuals with advanced human immunodeficiency virus (HIV). Its role in TB diagnostic algorithms for HIV-positive outpatients remains unclear. Methods The AIDS Clinical Trials Group (ACTG) A5274 trial demonstrated that empiric TB therapy did not improve 24-week survival compared to isoniazid preventive therapy (IPT) in TB screen–negative HIV-positive adults initiating antiretroviral therapy with CD4 counts <50 cells/µL. Retrospective LAM testing was performed on stored urine obtained at baseline. We determined the proportion of LAM-positive participants and conducted modified intent-to-treat analysis excluding LAM-positive participants to determine the effect on 24-week survival, TB incidence, and time to TB using Kaplan-Meier method. Results A5274 enrolled 850 participants; 53% were male and the median CD4 count was 18 (interquartile range, 9–32) cells/µL. Of the 850, 566 (67%) had LAM testing (283 per arm); 28 (5%) were positive (21 [7%] and 7 [2%] in the empiric and IPT arms, respectively). Of those LAM-positive, 1 participant in each arm died and 5 of 21 and 0 of 7 in empiric and IPT arms, respectively, developed TB. After excluding these 28 cases, there were 19 and 21 deaths in the empiric and IPT arms, respectively (P = .88). TB incidence remained higher (4.6% vs 2%, P = .04) and time to TB remained faster in the empiric arm (P = .04). Conclusions Among outpatients with advanced HIV who screened negative for TB by clinical symptoms, microscopy, and Xpert testing, LAM testing identified an additional 5% of individuals with TB. Positive LAM results did not change mortality or TB incidence.
Tuberculosis (TB) is one of the top 10 leading causes of deaths worldwide and the leading cause of death among people living with human immunodeficiency virus (HIV). In 2018, the World Health Organization (WHO) estimated 10 million cases of TB and 1.5 million deaths with the largest burden in low-and middle-income countries (LMICs). In the same year, there were about 862 000 cases of TB among HIV-positive individuals and 251 000 HIV-associated TB deaths [1]. The global TB incidence is declining at about 2% per year. However, more effort is required to achieve the WHO End TB Strategy targets-to end TB as a global epidemic by reducing TB deaths by 95% and the number of new TB cases by 90% by 2035 [2].
Improving TB diagnosis and treatment is imperative to achieving global targets for ending the TB epidemic. Between 2000 and 2017, approximately 54 million deaths were averted through TB diagnosis and treatment [1]. However, diagnosis of TB remains a challenge in LMICs, especially in HIV-positive individuals [3]. Since 2010, TB diagnosis has expanded from relying largely on sputum smear microscopy that detects only about half of TB cases, to use of rapid molecular diagnostic tools such as GeneXpert (Xpert) MTB/RIF assay and Xpert Ultra assay, and biomarker-based test such as urine lipoarabinomannan (LAM) antigen test [1]. Xpert has been scaled up in many countries but its use and availability is still limited in LMICs, particularly in peripheral health facilities [3].
The Alere Determine TB LAM antigen lateral flow strip is a WHO-recommended rapid, point-of-care, lateral flow immunochromatographic test for qualitative detection of LAM antigen of mycobacteria in urine. It is recommended as an adjunct test to assist in TB diagnosis among HIV-positive adults, adolescents, and children with signs and symptoms of TB or with advanced HIV disease/seriously ill or those with a CD4 count <200 cells/µL irrespective of signs and symptoms [4]. Its sensitivity increases with lower CD4 counts in inpatient settings and in high-TB-prevalence settings, making it potentially useful in LMICs, where the HIV-TB coinfection burden is highest [3] and where a significant proportion of patients present late with severe immunosuppression [5]. In addition, urine LAM testing has potential mortality benefit in some subgroups of inpatients including those with CD4 count <100 cells/µL or low hemoglobin and those suspected to have TB [6]. However, its broader application to TB diagnostic algorithms for HIV-positive individuals in outpatient settings remains unclear.
We sought to determine (1) the diagnostic yield of urine LAM in advanced HIV-positive outpatient adults (CD4 count <50 cells/µL) who remained without a TB diagnosis after TB screening using routine diagnostic tools during evaluation for enrollment into a trial of empiric TB therapy; (2) the clinical outcomes in participants who were urine LAM positive at baseline; and (3) the effect of urine LAM testing on mortality, TB incidence, and time to probable/confirmed TB when LAMpositive participants were excluded.
Study Design and Setting
Details of the AIDS Clinical Trials Group (ACTG) A5274 methods have been published elsewhere [7]. In brief, the A5274 study, also known as Reducing Early Mortality and Morbidity by Empiric TB Treatment (REMEMBER), was an international, open-label, randomized clinical trial. It demonstrated that a 4-drug empiric TB therapy regimen did not improve 24-week survival and was associated with an increased incidence of TB during 24 weeks of follow-up compared to isoniazid preventive therapy (IPT) in HIV-positive adults initiating efavirenz-based antiretroviral therapy (ART) with CD4 counts <50 cells/µL [7]. The study was conducted at 18 sites in 10 countries (Malawi, South Africa, Haiti, Kenya, Zambia, India, Brazil, Zimbabwe, Peru, and Uganda), with most participants from sub-Saharan Africa. To be included, sites had to have a TB incidence >100 per 100 000 person-years and national ART programs with documented high early mortality rates (>10-20 per 100 person-years) among outpatient populations. Of note, the A5274 protocol was amended in February 2012 to include a requirement for urine sample collection and storage from each participant for testing using a new diagnostic assay, Alere's Determine TB-LAM, at study entry. A sample was collected at the next study visit for the participants (11%) who enrolled before the amendment, and those participants were excluded from this analysis. Due to different ethics-board approval timelines across the study sites, there were differences in the number of participants who had urine LAM testing by site.
Study Population
The study population was HIV-positive adults who were ART naive, aged ≥18 years with pre-ART CD4 counts <50 cells/ µL and no evidence of active TB. Potential participants were screened for TB prior to study entry using a symptom screen, physical examination, and locally available diagnostic tools. The TB symptom screen included cough ≥2 weeks, any current fever >38°C, hemoptysis, night sweats within the past 2 weeks, unintentional weight loss >10% in the past 30 days, or enlarged axillary or cervical lymph nodes. Locally available diagnostics included sputum staining for acid-fast bacilli, chest radiograph, and Xpert MTB/RIF implemented at screening in 5 sites only.
Study Procedures and Data Collection
Study procedures for the primary study A5274 have been published elsewhere [7]. Urine samples were stored for retrospective batch testing. The tests were positive if 2 readers agreed. A positive urine LAM test for this protocol was grade 1 or higher. Development of symptomatic TB disease was defined as having clinical symptoms of pulmonary or extrapulmonary tuberculosis with or without demonstrable Mycobacterium tuberculosis from any specimens [8].
Statistical Analysis
The A5274 primary endpoint was survival (death or unknown vital status) 24 weeks postrandomization. However, since there were very few participants with unknown vital status, we have concentrated on deaths only for this analysis. The proportion of participants who died and the incidence of confirmed or probable TB was compared between the arms using χ 2 test (or Fisher exact test, where appropriate). The Kaplan-Meier method was used to estimate mortality and TB incidence rates by week 24, and the rates were compared by the z test. Time to confirmed or probable TB was compared by the log-rank test. For the time-to-event analyses, modified intent-to-treat analyses were conducted by excluding participants who were retrospectively identified as TB positive at the time of study entry through urine LAM testing, and conducted among all participants with LAM testing performed at baseline including 3 participants with inconclusive results. All analyses were conducted in SAS version 9.4 software.
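As an illustration of this type of survival analysis, a minimal sketch using the lifelines package is shown below; the column names and the toy data frame are placeholders rather than the A5274 dataset, and the actual analyses were conducted in SAS.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-participant data: follow-up time (weeks), death indicator, study arm.
df = pd.DataFrame({
    "weeks": [24, 24, 10, 24, 18, 24, 24, 6, 24, 24],
    "died":  [0,  0,  1,  0,  1,  0,  0,  1, 0,  0],
    "arm":   ["empiric", "ipt"] * 5,
})

emp = df[df["arm"] == "empiric"]
ipt = df[df["arm"] == "ipt"]

kmf = KaplanMeierFitter()
kmf.fit(emp["weeks"], event_observed=emp["died"], label="empiric")
print(kmf.survival_function_at_times(24))   # Kaplan-Meier 24-week survival estimate

# Log-rank comparison of time to event between the two arms.
result = logrank_test(emp["weeks"], ipt["weeks"],
                      event_observed_A=emp["died"], event_observed_B=ipt["died"])
print(result.p_value)
```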
Baseline Characteristics of Urine LAM-Tested Participants
A total of 850 participants (424 in the empiric arm and 426 in the IPT arm) were enrolled in the A5274 study. Baseline characteristics were similar across arms (shown elsewhere) [9]. Of the 850 enrolled participants, 566 (67%) had urine samples collected at baseline that were tested for LAM antigen retrospectively (283 in each arm). LAM testing ranged from 20% in South Africa to 100% in Peru and Brazil. Of those who had urine LAM testing, the median age was 36 (interquartile range [IQR], 30-42) years, 311 (55%) were male, 491 (87%) were black, and the median viral load was 5.3 (IQR, 4.9-5.7) log10 copies/mL (Table 1). The median baseline CD4 count was higher among those who did not have urine LAM testing compared to those who had urine LAM testing (21 [IQR, …] cells/µL vs 17 [IQR, 8-31] cells/µL; P = .002). The rest of the baseline characteristics were similar between the 2 populations (Table 1).
Clinical Outcomes of Participants Who Had a Positive Urine LAM Test
Of the 566 who had urine LAM testing, 28 (5%) were positive (21 [7%] and 7 [2%] in the empiric and IPT arms, respectively) ( Table 2). Of the 28 participants with a positive test, most (23 [82%]) participants did not develop TB. Only 5 (18%) participants developed TB and they were all in the empiric arm, and 1 participant in each arm died from other TB-unrelated causes ( Table 3). The 3 participants who had inconclusive urine LAM test results were excluded from Table 3, but they were included in the subsequent survival analysis.
Clinical Outcomes Excluding Participants Who Had a Positive Urine LAM Test
Overall, at week 24, there were 20 deaths in the empiric arm and 22 deaths in the IPT arm, resulting in a similar mortality rate of 4.8% (95% confidence interval [CI]: 3.1%-7.3%) for the empiric arm and 5.2% (95% CI: 3.4%-7.8%) for the IPT arm and resulting in an absolute risk difference of 0.4% (95% CI: −2.5% to 3.3%) (P = .78). After excluding the 28 LAM-positive participants from the analysis, there were 19 deaths in the empiric arm and 21 deaths in the IPT arm. Similarly, the mortality rate across arms was similar at 4.8% (95% CI: 3.1%-7.4%) for the empiric arm and 5% (95% CI: 3.3%-7.6%) for the IPT arm, resulting in an absolute risk difference of 0.3% (95% CI: −2.7% to 3.2%) (P = .86). There was no difference in the time to death across arms with all participants [9], participants after excluding LAM positives, and participants with LAM testing conducted ( Figure 1A and 1B).
Overall, the incidence of confirmed or probable TB was higher in the empiric arm compared to the IPT arm (21 [5%] in the empiric arm vs 8 [2%] in the IPT arm; P = .01). The time to confirmed or probable TB was faster in the empiric arm compared to the IPT arm (P = .01) [9]. After excluding LAMpositive participants, the incidence of confirmed or probable TB remained higher (18 [4%] in the empiric arm vs 8 [2%] in the IPT arm; P = .045) and the time to TB remained faster in the empiric arm (P = .04). Among participants with LAM testing conducted, the incidence of confirmed or probable TB remained higher (11 [4%] in the empiric arm vs 3 [1%] in the IPT arm; P = .054) and the time to TB remained faster in the empiric arm (P = .03) (Figure 2A and 2B).
DISCUSSION
In this study, the addition of urine LAM testing yielded 5% additional positive TB tests among outpatients with advanced HIV who had previously been systematically screened for TB using clinical diagnosis, microscopy, and Xpert MTB/RIF testing. Exclusion of LAM-positive participants did not alter the lack of effect of empiric TB treatment on survival or the risk of developing symptomatic TB disease in the empiric arm. About one-fifth of LAM-positive participants developed symptomatic TB disease, suggesting that the 4-drug anti-TB treatment in the empiric arm did not optimally prevent participants from developing symptomatic TB disease. This is one of the few studies that evaluated the role of urine LAM for diagnosis of TB among HIV-positive adults with advanced HIV disease who were systematically prescreened for TB.
In a recent Cochrane systematic review of 15 unique studies in the LMICs, pooled sensitivity and specificity of the Alere Determine TB LAM Ag assay among unselected participants not assessed for signs and symptoms of TB were 35% (95% CI: 22%-50%) and 95% (95% CI: 89%-96%), respectively. LAM sensitivity was higher among inpatients, severely ill patients, and high-TB-prevalence countries and increased with decreasing CD4 count [10]. Consequently, the WHO recently revised its policy recommendation in late 2019 to include urine LAM as an "add-on" test to assist in TB diagnosis in a broader patient setting, particularly more inclusive of outpatients [4]. The current consensus is that urine LAM testing has the potential to increase diagnostic yield among HIV-positive individuals and provide an alternative diagnostic method for TB in people who cannot produce sputum, especially in resource-limited settings where standard diagnostics for TB are scarce [10][11][12].
However, there are important differences in the diagnostic yield of urine LAM testing depending on setting. Some researchers have reported very low diagnostic yield of urine LAM testing. In Uganda, sensitivity and additional yield of urine LAM relative to sputum Xpert MTB/RIF was 7.9% and 1%, respectively. However, the study had a broad enrollment criteria of both inpatient and outpatient adults, HIV positive and HIV negative, undergoing sputum-based TB screening, which may have influenced their findings [11]. Among HIV-positive outpatient adults irrespective of TB symptoms initiating ART in Mozambique, urine LAM sensitivity was 3.5%. The early stage of HIV disease (67% WHO HIV stage 1) and higher CD4 count (median, 278 cells/µL) of the study participants could explain the low urine LAM sensitivity [13].
In contrast, other researchers have reported higher urine LAM diagnostic yield. In Kenya, in a cohort of outpatients who were either severely ill or with CD4 <200 cells/µL or with body mass index <17 kg/m 2 and with symptoms of pulmonary TB, urine LAM sensitivity was 58% and the incremental yield to an algorithm based on clinical signs and smear microscopy was 12%. In another adult outpatient population with TB symptoms and CD4 count <200 cells/µL in Malawi, urine LAM sensitivity was 24.9% and urine LAM testing doubled the yield of TB cases identified relative to sputum Xpert MTB/RIF alone [12]. Both studies enrolled populations with TB symptoms for whom yield for TB diagnostics is bound to be high.
Our finding of a 5% diagnostic yield is arguably high, particularly in an outpatient population of patients who, despite having low CD4 counts of <50 cells/µL, were systematically prescreened for TB using standard TB diagnostic tools including Xpert MTB/RIF. Among participants screened out of A5274, a third had TB [7]. We hypothesize that the diagnostic yield could have been higher if these participants received urine LAM testing. In addition, we conducted retrospective testing on stored urine samples. Although no study has directly compared the diagnostic yield of LAM testing on fresh urine vs frozen urine, the Cochrane review showed that urine LAM sensitivity was higher on fresh nonstored urine [10]. The frozen urine samples and retrospective testing may have reduced the diagnostic yield of LAM testing in our study.
Several studies have reported benefits of urine LAM testing on clinical outcomes including mortality. Peter et al. found that urine LAM-guided initiation of TB therapy reduced absolute risk of mortality by 4% and relative risk by 17% [14]. In the Rapid urine-based screening for tuberculosis in HIV-positive patients admitted to hospital in Africa trial, urine LAM testing significantly reduced the risk of mortality in 3 prespecified clinical subgroups: patients with CD4 counts <100 cells/µL, severely anemic patients, and those with clinically suspected TB [15]. Huerga et al. reported that urine LAM grade was a marker for patients at higher risk of death [12] and a marker for severe disease and TB dissemination. Despite the growing evidence on mortality benefits of LAM testing, the uptake and implementation of LAM testing remains low in high-TB/HIV-burden countries; only 21% had implemented LAM testing by the end of 2019 [16]. In our study, regardless of the striking imbalance across arms in positive pre-ART urine LAM tests (21 cases in the empiric arm vs 7 cases in the IPT arm), exclusion of urine LAM-positive participants had no impact on the risk of developing symptomatic TB disease and mortality. Similar outcomes were seen in the Systematic Empirical vs. Test-guided Anti-TB Treatment Impact in Severely Immunosuppressed HIV-infected Adults Initiating ART With CD4 Cell Counts <100/mm³ [17] and TB Fast Track [18] studies.
Among the LAM-positive participants, 18% developed symptomatic TB disease-all in the empiric arm only (5 cases in the empiric arm and 0 cases in the IPT arm). This finding implies that the 4-drug anti-TB treatment in the empiric arm did not optimally prevent participants from developing symptomatic TB disease. Although self-reported drug adherence was similar between arms in A5274, there were more premature drug discontinuations in the empiric arm (47 empiric vs 18 IPT discontinuations) [7], which may have resulted in more symptomatic TB disease. In addition, despite the similar grade 3 or 4 adverse events between arms [7], the premature discontinuations may have been due to more grade 1 or 2 adverse events in the empiric arm compared to the IPT arm, which were not part of the A5274 primary analysis. Last, as the study was unblinded, diagnostic suspicion bias between the arms may have differed, resulting in more aggressive use of TB diagnostics or more liberal TB diagnosis in the empiric arm.
Our study enrolled HIV-positive outpatients who had screened negative for TB using standard TB diagnostic tools who started either empiric TB treatment or IPT plus ART. Despite the low CD4 count, in such a setting, TB cases are expected to be low, potentially limiting the generalizability of our results. In addition, the lack of an ART-only arm in the A5274 study may have masked the role of urine LAM testing in patients with subclinical disease who may have later developed active TB. Last, our results should be interpreted with caution as it was not a real randomized comparison, since we used a subset of the A5274 study participants and did not compare urine LAM results with microbiologically confirmed tests due to the lack of a perfect reference standard, particularly for patients with extrapulmonary TB and paucibacillary disease [19]. In our study, addition of LAM testing yielded more additional positive TB tests to clinical diagnosis, and microscopy and Xpert MTB/RIF testing among outpatients with advanced HIV who were systematically prescreened for TB; however, LAM testing was a poor marker for risk of development of symptomatic TB disease or mortality. Ultimately, our results support the current consensus that urine LAM testing has the potential to increase diagnostic yield among HIV-positive outpatients and provide an alternative diagnostic method for TB in people who cannot produce sputum, especially in resource-limited settings. Implementation and scale-up of existing LAM tests and development of next-generation assays such as the FujiLAM assay and the Foundation for Innovative New Diagnostics should be prioritized. | 2021-03-01T06:15:53.454Z | 2021-02-26T00:00:00.000 | {
"year": 2021,
"sha1": "092f94e463e16f78271f3654023018537671555c",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/cid/article-pdf/73/4/e870/39764130/ciab179.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "30aaf5d75da7df8fd53710d6844e9c2f69fea1b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258775738 | pes2o/s2orc | v3-fos-license | Genome-Wide Signal Selection Analysis Revealing Genes Potentially Related to Sheep-Milk-Production Traits
Simple Summary In our research, candidate genes related to sheep-milk production were revealed by a genome-resequencing analysis and a genome-signal-selection analysis, and an RT-qPCR experiment was performed to examine the expression levels of these candidate genes; the results showed that the FCGR3A gene's expression level had a significant negative relationship with sheep-milk production. Abstract Natural selection and domestication have shaped modern sheep populations into a vast range of phenotypically diverse breeds. Among these breeds, dairy sheep have a smaller population than meat sheep and wool sheep, and less research is performed on them, but the lactation mechanism in dairy sheep is critically important for improving animal-production methods. In this study, whole-genome sequences were generated from 10 sheep breeds, including 57 high-milk-yield sheep and 44 low-milk-yield sheep, to investigate the genetic signatures of milk production in dairy sheep, and 59,864,820 valid SNPs (Single Nucleotide Polymorphisms) were kept after quality control to perform population-genetic-structure analyses, gene-detection analyses, and gene-function-validation analyses. For the population-genetic-structure analyses, we carried out PCA (Principal Component Analysis), as well as neighbor-joining tree and structure analyses to classify different sheep populations. The sheep used in our study were well distributed in ten groups, with the high-milk-yield-group populations close to each other and the low-milk-yield-group populations showing similar classifications. To perform an exact signal-selection analysis, we used three different methods to find SNPs to perform gene-annotation analyses within the 995 common regions derived from the fixation index (FST), nucleotide diversity (θπ), and heterozygosity rate (ZHp) results. In total, we found 553 genes that were located in these regions. These genes mainly participate in the protein-binding pathway and the nucleoplasm-interaction pathway, as revealed by the GO- and KEGG-function-enrichment analyses. After the gene selection and function analyses, we found that FCGR3A, CTSK, CTSS, ARNT, GHR, SLC29A4, ROR1, and TNRC18 were potentially related to sheep-milk-production traits. We chose the genes that were strongly selected during the signal-selection analysis, FCGR3A, CTSK, CTSS, and ARNT, to perform an RT-qPCR (Real-time Quantitative Polymerase Chain Reaction) experiment to validate their expression-level relationship with milk production, and the results showed that FCGR3A has a significant negative relationship with sheep-milk production, while the other three genes did not show any positive or negative relations. In this study, it was discovered and proven that the candidate gene FCGR3A potentially contributes to the milk production of dairy sheep and a basis was laid for the further study of the genetic mechanism underlying the strong milk-production traits of sheep.
Introduction
The sheep is one of the earliest domesticated-farm-animal species and has experienced evolution and domestication over thousands of years [1]. Dairy sheep are traditionally farmed in southern Europe (France, Italy, Spain, Greece), central Europe (Hungary and the Czech and Slovak Republics), eastern Europe (Romania and Ukraine), and countries in the Middle East, such as Turkey and Iran [2]. Organized sheep-breeding programs were developed from at least the 1960s [3]. The most efficient selection scheme for local dairy sheep is based on the pyramidal management of the population, with the breeders of the nucleus flocks at the top; pedigree recording, official milk recording, AI (artificial insemination) breeding, controlled natural mating, and breeding-value estimation (i.e., BLUP) are carried out to make genetic progress. In 2013, dairy small ruminants accounted for a minor part of the total agricultural output in France, Italy, and Spain (0.9 to 1.8%) and a larger part in Greece (8.8%) [4]. In these European countries, the dairy-sheep industry is based on local breeds and crossbreeds raised under semi-intensive and intensive systems, and it is concentrated in a few regions. However, with the development of dairy-sheep breeding and the emergence of new dairy products, dairy ruminants have become a major part of European agricultural income [5,6]. The average flock size varies from small to medium (140 to 333 ewes/farm), and the average milk yield ranges from low to middle (170 to 500 L/ewe) [7], showing substantial space for improvement in relation to cows' milk [8]. Furthermore, sheep milk has higher protein, fat, lactose, total non-fat solids, and ash contents and a higher nutritional value than cows' and goats' milk [9], which makes it suitable for processing into various types of dairy products. Most sheep milk is sold to industries and then processed into traditional cheese products, most of which are made into Protected Denomination of Origin (PDO) cheeses for gourmets. The animals' udder health is also critical in sheep-milk quantity and quality. Mastitis is an inflammation of the mammary gland that is usually caused by pathogens, mainly bacteria, which develop in the udder tissue after infections in the teat canal. It is one of the most prevalent and costly diseases in the dairy industry due to the significant reductions in milk production and physical harm that it causes [10][11][12]. In previous studies, some quantitative trait loci (QTLs) and SNPs associated with SCC (somatic cell counts) that serve as markers of mastitis were identified based on linkage-disequilibrium analyses of different dairy-sheep breeds [10,13,14], leading to significant improvements in ewes' ability to resist mastitis [15,16].
The dramatic decreases in the costs of whole-genome sequencing (WGS) and RT-qPCR experiments on animals have made it possible to scan the complete genomes of thousands of animals. Using genome information makes it possible to explain parts of the total genetic variance that are difficult to measure, such as low-heritability, sex-limited, and postmortem traits. To help clarify the link between the genome sequence and real data of important traits, this research was designed to gain a better understanding of sheep lactating genes by comparing the SNP-variant information on different sheep breeds with significantly different dairy-production characteristics. These breeds significantly differ at two dairy levels, since there are high-milk-yield (East Friesian sheep, 700 kg/lactation, Dairy Meade sheep, 500 kg/lactation and Awassi sheep) and low-milk-yield sheep breeds (Hu sheep, small-tailed Han sheep, and Churra sheep) [17]. To this end, we performed a whole-genome sequencing (WGS) analysis of these sheep breeds, as we found that in previous studies, little research was performed using whole-genome sequencing on these dairy sheep populations. First, we re-sequenced the crossed breed of Dairy Meade sheep and small-tailed Han sheep. The information on this sheep breed will be uploaded to a public genome-sequence database for researchers to use freely (NCBI). In our research, a genetic-structure analysis was performed to establish the relationships and geographical distances between 10 sheep breeds, and then genome-wide-signal scans were carried out to identify significant genes associated with sheep-milk production. The genes validated by the RT-qPCR results and their relationship with milk production represent significant progress in our knowledge of the gene-expression levels of lactating sheep and, thus, provide potentially valuable information for future dairy-sheep studies.
Whole-Genome-Resequencing Analysis
We sequenced 41 sheep genomes, covering 4 sheep breeds, at an average depth of coverage of 10X. To this end, genomic DNA was extracted from ear tissue using the Wizard® Genomic DNA purification kit (Promega, A1125, Madison, WI, USA). The A260/A280 ratio, measured with a NanoDrop ND-2000 (Thermo Fisher Scientific, MA, USA), was used to check the quality of the extracted DNA, and agarose-gel electrophoresis was used to check its integrity. Good-quality DNA from each collected sample was sequenced on the MGI-SEQ2000 platform from the Beijing Compass Biotechnology Company (Beijing, China). The satisfactory sequence data obtained were used to characterize individual genomes at a minimum of 10X depth of coverage, with more than 30 G of raw data from each sample. After quality control, data were used to perform genome mapping and SNP calling. Resequencing data from a further 60 sheep, downloaded from NCBI, were analyzed with PLINK (San Francisco, CA, USA) (version 1.90) on Linux. The details are listed in Section 2.3. All high-quality double-trimmed read pairs were aligned against the reference assembly Oar v.4.0 genome [29] (https://www.ncbi.nlm.nih.gov/assembly/GCA_000298735.2, accessed on 15 March 2022) using BWA software (https://sourceforge.net/projects/bio-bwa/files/, accessed on 15 March 2022) (version: 0.7.12). Paired-end reads that mapped to exactly the same position on the reference genome were removed with MarkDuplicates in Picard (picard-tools-1.56, at http://picard.sourceforge.net, accessed on 15 March 2022). Additional realignment of indels and SNPs was performed using the Genome Analysis Toolkit (GATK v4.0) (https://gatk.broadinstitute.org/hc/en-us, accessed on 15 March 2022) [30] and SAMtools [31]. The GATK was used to identify SNP variation in each individual sample. The SNPs that did not meet the following criteria were excluded: (1) SNP call rate > 99.6%; (2) minor-allele frequency > 0.01; (3) missing rate lower than 0.1; and (4) all loci followed the Hardy-Weinberg rule; in addition, (5) loci in linkage disequilibrium were excluded (R2 < 2). All SNPs were annotated using BioMart 4.0 (https://asia.ensembl.org/info/data/biomart/index.html, accessed on 15 March 2022) [32] based on the gene-reference genome provided by the Oar v.4.0 genome from NCBI.
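The quality-control thresholds listed above can be expressed compactly; the sketch below (Python/pandas applied to a hypothetical per-SNP summary table) applies the same call-rate, MAF, missingness and Hardy-Weinberg filters. It only illustrates the filtering logic and is not the PLINK/GATK commands actually run; the Hardy-Weinberg p-value threshold is an assumed example.

```python
import pandas as pd

# Hypothetical per-SNP summary statistics produced upstream (e.g. by PLINK).
snps = pd.DataFrame({
    "snp_id":    ["rs1", "rs2", "rs3"],
    "call_rate": [0.999, 0.95, 0.998],
    "maf":       [0.20, 0.005, 0.35],
    "missing":   [0.001, 0.05, 0.002],
    "hwe_p":     [0.40, 1e-8, 0.75],
})

kept = snps[
    (snps["call_rate"] > 0.996)   # SNP call rate > 99.6 %
    & (snps["maf"] > 0.01)        # minor-allele frequency > 0.01
    & (snps["missing"] < 0.1)     # missing rate below 0.1
    & (snps["hwe_p"] > 1e-6)      # consistent with Hardy-Weinberg equilibrium (assumed cut-off)
]
print(kept["snp_id"].tolist())
```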
Genome-Wide Signal-Selection Scan and Gene Annotation
We used all SNPs that passed quality control to detect signatures of selection between high- and low-milk-yield sheep within the 101 sheep genomes. Genome-wide selection-signal analysis was performed using the FST fixation index (a measure of population differentiation), the nucleotide-diversity ratio θπ-HMY/θπ-LMY (HMY, high milk yield; LMY, low milk yield), and the transformed heterozygosity score (ZHp). The window-based ZHp statistic was calculated as ZHp = (Hp − µHp)/σHp, where µHp is the overall average window heterozygosity and σHp is the standard deviation across all windows of each group. The high-milk-yield group included 57 samples from five breeds (Dairy Meade sheep = 20, DM (F1) = 15, DM (F2) = 10, East Friesian sheep = 10, and Awassi sheep = 2), while the low-milk-yield group included 44 individuals across five breeds (STHS = 5, Hu sheep = 14, Fin sheep = 9, Churra sheep = 6, and Suffolk sheep = 10). FST, ZHp [39], and log2 θπ (high/low) were calculated within 100-kb sliding windows with 10-kb steps to obtain overlapping regions. Regions falling within the highest 5% of all three statistics were considered candidate selection regions, and candidate genes were then annotated with the genomic database search and annotation engine BioMart (https://asia.ensembl.org/info/data/biomart/index.html, accessed on 15 March 2022) [40].
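As an illustration of the window-based statistics just described, the sketch below (a simplified Python example, not the authors' pipeline) computes per-window expected heterozygosity for the high-yield group, standardizes it into ZHp exactly as in the formula above, and uses the ratio of window heterozygosities as a stand-in for the θπ high/low ratio. The input arrays, the 10-SNP minimum per window, and the pseudocount are assumptions made for illustration.

```python
import numpy as np

def window_stats(pos, p_high, p_low, win=100_000, step=10_000):
    """Sliding-window ZHp for the high-yield group and a log2 diversity ratio.

    pos: SNP positions on one chromosome (sorted); p_high/p_low: allele
    frequencies of the same SNPs in the high- and low-milk-yield groups.
    """
    pos = np.asarray(pos)
    p_high, p_low = np.asarray(p_high, float), np.asarray(p_low, float)
    starts = np.arange(0, pos.max() + step, step)
    hp_high, log2_ratio, centers = [], [], []
    for s in starts:
        idx = (pos >= s) & (pos < s + win)
        if idx.sum() < 10:                                  # skip sparse windows
            continue
        het_h = np.mean(2 * p_high[idx] * (1 - p_high[idx]))  # expected heterozygosity, high group
        het_l = np.mean(2 * p_low[idx] * (1 - p_low[idx]))    # expected heterozygosity, low group
        hp_high.append(het_h)
        log2_ratio.append(np.log2((het_h + 1e-9) / (het_l + 1e-9)))  # proxy for log2(θπ-high/θπ-low)
        centers.append(s + win / 2)
    hp = np.array(hp_high)
    zhp = (hp - hp.mean()) / hp.std()      # ZHp = (Hp − µHp)/σHp, as in the text
    return np.array(centers), zhp, np.array(log2_ratio)

# Windows in the extreme 5% tails of these statistics would then be taken as candidate regions.
```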
Gene Ontology and Kyoto Encyclopedia of Genes and Genomes
Functional enrichment analyses for Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were performed for all selected genes using g:Profiler (https://biit.cs.ut.ee/gprofiler/gost, accessed on 1 June 2022). Enrichment significance was assessed with the g:SCS ('Set Counts and Sizes') multiple-testing correction, with the p-value threshold set at <0.05.
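g:Profiler applies its own g:SCS correction, but the core of any such over-representation test is the same question: are more candidate genes annotated with a term than expected by chance? The Python sketch below shows that core calculation with a one-sided hypergeometric test; it is a generic illustration rather than the g:SCS method, and all counts in the example are made up.

```python
from scipy.stats import hypergeom

def enrichment_p(n_genome, n_term, n_selected, n_overlap):
    """One-sided hypergeometric p-value that the candidate set is enriched for a term.

    n_genome: annotated background genes; n_term: background genes carrying the term;
    n_selected: candidate genes from the selection scan; n_overlap: candidates with the term.
    """
    return hypergeom.sf(n_overlap - 1, n_genome, n_term, n_selected)

# Hypothetical example: 20,000 background genes, 150 term genes,
# 553 candidate genes, 12 of which carry the term.
p = enrichment_p(20_000, 150, 553, 12)
```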
Validation of RT-qPCR Experiment
We randomly selected 11 Lacaune sheep from a farm in Belgium (https://lesfauveslaineux.wordpress.com/le-troupeau/, accessed on 1 September 2022) to determine the expression levels of the candidate genes. Lacaune sheep were chosen because they are among the most famous and widely farmed dairy sheep in France and have been imported into Belgium. This farm keeps detailed records of milk production across lactations and of newly lambing ewes. Fresh milk was sampled from 11 newly lambed ewes; from each ewe, 50 mL of milk was collected in RNase-free tubes. All these steps complied with the requirements of the European Animal Welfare Committee. The milk was kept at 4 °C during transport from the farm to the laboratory and processed immediately for somatic-cell isolation. Subsequently, 50 µL of 0.5 M EDTA was added to each 50-mL milk sample. The samples were centrifuged for 10 min at 2000× g, the supernatant containing cream and skim milk was removed, and the cells were washed in 10 mL PBS. Samples were then centrifuged a second time, the maximum amount of supernatant was removed, and the lysed samples were stored at −80 °C. RNA was extracted according to the procedure for the isolation of RNA from non-fibrous tissue (Promega Inc., Madison, WI, USA), after which the concentration and A260/A280 ratio of the RNA were measured; the acceptance standards were a concentration >10 ng/µL and 1.8 < A260/A280 < 2.2. Satisfactory RNA was used for reverse transcription in accordance with the manufacturer's reverse-transcription protocol (Promega Inc., Madison, WI, USA). The reverse-transcription conditions were as follows: 25 °C for 5 min; 42 °C for 60 min; 70 °C for 15 min; 4 °C for 20 min; 4 cycles in total. The cDNA was stored at −20 °C. Quantitative real-time PCR was then performed on the cDNA. Primers were designed to amplify the target genes and reference genes; the reference genes, selected from the NCBI database, were genes expressed similarly in all tissues. The primer pairs for RT-qPCR were designed by Eurogentec Co. (Brussels, Belgium), and RT-qPCR analyses were performed on a Rotor-Gene 6000 system (Corbett) with SYBR green (Gembloux, Belgium). Each amplification reaction contained 2 µL of cDNA and 18 µL of SYBR master mix (Thermo Fisher Scientific, MA, USA), including 500 nM of each primer, at a final volume of 20 µL. Reactions were run for 40 cycles of 95 °C for 5 s, 60 °C for 30 s, and 72 °C for 45 s, with data collection at the annealing step.
The Relationship between Gene Expression and Milk Production
Candidate genes from the RT-qPCR experiment were used in a linear correlation analysis between the genes' relative expression values and milk production. The milk-production recordings came from the Belgian farm and the same Lacaune sheep, covering 2022 (Data S4) as well as the production data from 2015 to 2021 (Data S5). We used the BLUPF90+ program (http://nce.ads.uga.edu/html/projects/programs/, accessed on 15 March 2023) to obtain estimated values (similar to estimated breeding values) of the milk-production traits of each ewe from 2015 to 2021. Higher estimated values generally indicate stronger production traits, so we used these values, together with the actual milk production from 2022, in a linear analysis against the ∆CT values of each gene to show their relationships. BLUPF90+ requires an animal model; in our research, the animal model was y_ijk = A_i + S_j + βx_ijk + e_ijk, where y_ijk is the observed milk production, A_i is the fixed effect of lactation days, lactation number, birth year, and milking times, S_j is the random animal effect, βx_ijk is a regression term (with a variance of 1.0), and e_ijk is the residual effect. The estimated values were obtained after analyzing the data with the BLUPF90+ program.
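For the correlation step itself, the analysis reduces to fitting a straight line between each gene's ∆CT values and the per-ewe milk figures (either the 2022 daily averages or the BLUPF90+ estimates) and reporting R². A minimal Python sketch of that step is shown below; the ∆CT and yield numbers are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical ∆CT values for one gene and average daily milk yield (kg) of the same 11 ewes
dct = np.array([8.9, 9.4, 8.1, 10.2, 9.8, 8.5, 9.0, 10.5, 8.3, 9.6, 9.1])
yield_kg = np.array([2.4, 2.1, 2.8, 1.6, 1.8, 2.6, 2.3, 1.5, 2.7, 1.9, 2.2])

fit = linregress(dct, yield_kg)
r_squared = fit.rvalue ** 2          # coefficient of determination, as reported in the text
slope, intercept = fit.slope, fit.intercept
```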
Genomic Variants and Principal Component Analysis
We combined the genome sequences of 101 high- and low-production dairy sheep. The genomes were mapped, the SNPs were called, and the combined dataset was managed with PLINK. After joint calling and quality control, we detected 4,751,642 SNPs in total. To study population stratification, three different approaches were employed, of which PCA is the first to be discussed here. The intention behind the use of PCA was to reduce the dimensionality of the genomic relationship matrix so that individuals (and breeds) were separated along different principal components (PCs). The first three principal components explained most of the variation: PC1 explained about 50.26%, PC2 about 23.51%, and PC3 about 14.95%. Principal component 1 (PC1) differentiated the breeds in this study into three main clusters (Figure 1A): Cluster 1 (high-yield breeds: EFR, DM, DMF1, and DMF2); Cluster 2 (Mongolian low-yield breeds: HS, STHS, CS, and AWS); and Cluster 3 (low-yield breeds: FS and SFK). However, overlap was observed among the DM, DMF1, and DMF2 sheep within Cluster 1, probably because these sheep have extremely close genetic relationships. The PCA separated the different sheep breeds in this study, with SFK the most clearly separated owing to its greater genetic distance from, and lack of blood relations with, the other sheep. In the PCA plots, the high-yield breeds were closer to the Mongolian sheep than to the other low-yield sheep, and within the high-yield cluster, the EFR sheep were clearly separated from the DM sheep. The separation of the clusters corresponded to the geographical origins and blood mixes of these sheep breeds, and may also reflect their genetic distances.
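The PCA step can be reproduced in a few lines. Below is a minimal Python sketch (the study itself relied on standard genotype tools): it centers a 0/1/2 genotype matrix, takes the singular value decomposition, and returns the sample coordinates on the leading principal components together with the proportion of variance each explains. The genotype matrix is an assumed input, not provided by the paper.

```python
import numpy as np

def genotype_pca(genotypes, n_components=3):
    """PCA of a (n_samples, n_snps) 0/1/2 genotype matrix via SVD on centered data."""
    g = np.asarray(genotypes, dtype=float)
    g -= g.mean(axis=0)                                   # center each SNP column
    u, s, _ = np.linalg.svd(g, full_matrices=False)
    pcs = u[:, :n_components] * s[:n_components]          # sample coordinates on PC1..PCn
    explained = (s ** 2) / np.sum(s ** 2)                 # proportion of variance per PC
    return pcs, explained[:n_components]
```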
Ancestor Analysis
To further explore the population structure among individuals from the different breeds in this study, a model-based hierarchical clustering analysis was undertaken for K-values of 2-7 (with K the user-defined number of biological ancestral populations). The cross-validation (CV) estimates revealed the K-value of 3 to be the best fit for the nine populations; it demonstrated the lowest CV error among all the K values. The bar plot of the ADMIXTURE results from the K-values of 2 and 3 is presented in Figure 1C. The analysis with K = 2 separated the high-milk-yield sheep breeds from the low-yield breeds. When K = 3, the FS and SFK were separated from the other breeds in the low-yield group. The DM and EFR sheep had the highest milk yield among all the populations, the Mongolian CHS, HS, and STHS sheep had average milk production [26], and the FS and SFK sheep produced the least milk. It is known that FS and SFK sheep are clearly separated from the other two sheep populations, and this was validated by the results obtained from the PCA analysis and the admixture analysis.
Neighbor-Joining-Tree Analysis
In this study, the phylogenetic neighbor-joining tree divided the high- and low-production groups into nine different clusters (Figure 1B). The nine main branches were as follows: DM, DMF1, DMF2, EFR, CS, FS, HS, STHS, and SFK. According to the tree, the DM and DMF2 branches were close to each other, and some of the DM, DMF1, and DMF2 sheep were mixed together, indicating their genetic similarity and probable sharing of a single ancestral line. The DM sheep is the hybrid offspring of EFR breeds and New Zealand sheep [41], so it was the second-nearest to the EFR sheep. Our phylogenetic results placed the two crossbred groups (DMF1 and DMF2) in different sub-branches, which might be due to the hybridization between the DM and STHS breeds. For the HS and STHS sheep, all the Mongolian sheep had similar genetic backgrounds, and their branches were close to each other. Similar to the PCA, there was a close genetic distance between the FS and SFK sheep, consistent with the shorter geographical distance between them. In the NJ tree, the Awassi breed was not clearly separated, possibly due to its small sample size. It should be noted that the results of the phylogenetic analysis are considered reliable owing to the large number of SNPs from the 10 sheep breeds used in this study.
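The input to such a neighbor-joining tree is a pairwise genetic distance matrix between individuals. As a hedged illustration (not the tool chain used by the authors), the Python sketch below derives a simple identity-by-state (IBS) distance from a 0/1/2 genotype matrix; a matrix of this kind can then be handed to any NJ implementation.

```python
import numpy as np

def ibs_distance_matrix(genotypes):
    """Pairwise (1 - IBS similarity) between samples from a 0/1/2 genotype matrix.

    A distance matrix like this is the usual input for building a neighbor-joining tree.
    """
    g = np.asarray(genotypes, dtype=float)
    n = g.shape[0]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            shared = 1.0 - np.abs(g[i] - g[j]) / 2.0      # per-SNP allele sharing in [0, 1]
            dist[i, j] = dist[j, i] = 1.0 - shared.mean()
    return dist
```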
Signal-Selection Analysis and Gene Ontology
We scanned the genomes of the 101 high- and low-milk-yield sheep for signals of positive selection. To achieve this, we calculated three complementary statistics along the sheep reference genome Oar v.4.0 (https://www.ncbi.nlm.nih.gov/assembly/GCA_000298735.2, accessed on 15 March 2022) using 100-kb-long sliding windows and 15-kb step sizes. The first statistic was the population-differentiation index FST, used to identify genomic regions with different allelic frequencies between the high and low groups. The second statistic measured the difference in nucleotide diversity between the two groups (θπ(high/low)), with high or low θπ ratios indicating positive selection in the respective milk-production groups. The third measure, the transformed heterozygosity score (ZHp), was also used to identify candidate SNPs (Figure 2). To limit false-positive identifications, we considered the top 5% of regions overlapping across all three scans, which provided a total of 995 overlapping SNP regions and 553 protein-coding genes and small RNAs. The selection candidates identified in the high- and low-yield groups are provided in Data S1. Most of the protein-coding genes were significantly enriched in functional Gene Ontology and KEGG categories related to protein binding, RNA binding, molecular transport, and cytokine-receptor activity (p = 9.23 × 10⁻³) (Figure 3, Data S2). Importantly, through gene annotation, the regions within the top 5% of all three statistics that provided the strongest signatures of selection, containing FCGR3A, CTSK, CTSS, ARNT, GHR, and SLC29A4, were identified as the strongest candidate genes in the comparison of the high- and low-milk-yield groups.
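FST itself can be computed per SNP and then averaged within the same sliding windows. The snippet below is a hedged Python illustration using the Hudson-style two-population estimator on allele frequencies and sample sizes; it is one common choice of estimator and is not necessarily the exact formula used in this study, and the example frequencies are invented.

```python
import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Per-SNP Hudson estimator components of FST between two populations.

    p1, p2: alternate-allele frequencies in each group; n1, n2: haploid sample sizes.
    Window-level FST is usually reported as the ratio of summed numerators to
    summed denominators across the SNPs in the window.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num, den

# Example: window FST as a "ratio of averages" over three SNPs
num, den = hudson_fst([0.9, 0.8, 0.7], [0.2, 0.3, 0.4], n1=114, n2=88)
window_fst = num.sum() / den.sum()
```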
RT-qPCR Results
We chose the four genes that were most frequently selected by the signature-selection analysis shown in Figure 2, namely FCGR3A, CTSK, CTSS, and ARNT, according to the NCBI database and previous studies. We chose the GAPDH gene as the reference gene for RT-qPCR validation. The primer sequences are listed in Table 2 (the suffixes -F and -R denote the forward and reverse primers of each gene, respectively). All the gene-expression values were derived from 11 ewes; the CT and ∆CT values of each gene are listed in Data S3, with the ∆CT values representing the relative expression of each gene.
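The ∆CT values referred to here follow the usual relative-quantification logic: each target gene's CT is compared with the GAPDH CT of the same sample, and a larger ∆CT means lower expression. A small Python sketch of that calculation is given below; the CT numbers are hypothetical, not the values in Data S3.

```python
import numpy as np

def delta_ct(ct_target, ct_reference):
    """∆CT per ewe: target-gene CT minus GAPDH CT; higher ∆CT means lower expression."""
    return np.asarray(ct_target, float) - np.asarray(ct_reference, float)

def relative_expression(dct):
    """Convert ∆CT to a 2^(-∆CT) relative-expression value."""
    return 2.0 ** (-np.asarray(dct, float))

# Hypothetical CT values for one target gene and GAPDH in three ewes
dct = delta_ct([27.1, 26.4, 28.0], [18.2, 18.0, 18.5])
expr = relative_expression(dct)
```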
The Relationship between Gene Expression and Milk Production
The milk-production data were sourced from the Belgian farm. They covered the average daily milk production from four months in 2022: September, October, November, and December (Data S4). A linear analysis of the ∆CT values of each gene was performed against the daily average milk production from 2022 and the values estimated with the BLUPF90+ program (Data S5), and the correlations between daily milk production and the estimated values are also shown in Figure 4. The results showed that the FCGR3A gene had significant correlations with both milk production and the estimated values. Higher ∆CT values indicate lower expression during the lactation period. FCGR3A showed a significantly negative relationship with milk production and with the estimated values (R² = 0.8452 and R² = 0.7382, respectively); furthermore, as predicted, the relationship between actual milk production in 2022 and the estimated values was positive (R² = 0.6988). CTSK, CTSS, and ARNT did not show significant relationships between their expression values and either the milk-production values or the estimated values, but the correlation coefficients between actual milk production in 2022 and the estimated values for the 2015-2021 milk production remained positive (R² = 0.6988, R² = 0.628, and R² = 0.628).
Discussion
In this study, we sequenced the genomes of 101 sheep with high and low levels of milk production. Using a neighbor-joining-tree analysis, we divided these sheep into 10 groups according to their genetic relationships and backgrounds; then, using a principal component analysis and a structure analysis, these 10 groups were divided into three clusters: high-milk-yield sheep (EFR, DM, DMF1, DMF2), Mongolian low-milk-yield sheep (HS, CS, AWS, STHS), and low-milk-yield sheep (SFK and FS). Scanning the genomes of the high- and low-yield sheep breeds revealed that the FCGR3A, ARNT, CTSK, CTSS, and GHR genes were the strongest candidates. According to the RT-qPCR experiment and the analysis of the relationship between these candidate genes' ∆CT values and milk production, the FCGR3A gene had a significant relationship with Lacaune sheep milk production. All of these genes have also been reported to be highly expressed during the lactation period in cattle [42] and buffalo, reflecting the similar biological functions of these genes when they are expressed in lactating mammary glands. The GO and KEGG analyses showed that most of these genes were significantly enriched for mammary-gland-relevant GO terms (cytokine-receptor activity and protein binding), as well as terms related to cellular functions and protease binding. These genes have been associated with milk production, milk protein, milk fat, milk lactose, cheese traits, somatic cell counts, and other traits, and for some of these traits it is not easy to find a direct relationship with milk quantity. This is probably why we did not find a significant correlation between the CTSK, CTSS, or ARNT expression values and milk production. Furthermore, as reported in many publications, some of the most significant genes in our research play a crucial role in sheep-mastitis traits and in the immune response, which in turn supports milk production. Mastitis of the sheep mammary gland is also a common source of economic losses on sheep farms, so it is also important to focus on genes related to mammary-disease resistance.
Candidate Genes Potentially Associated with Some Milk Traits
The FCGR3A (Fc fragment of IgG, low-affinity IIIa, receptor) gene is considered a novel and promising candidate for relieving stress, inflammation, and disease [43], as well as dairy-cattle mastitis. The binding of FCGRs to the Fc region of immunoglobulins mediates a variety of immune functions, such as antigen presentation, the clearance of immune complexes, the phagocytosis of pathogens, and cytokine production [44]; these functions work together to resist infection, improving udder health and, thus, milk production. It is reasonable to suggest that lactation periods drive the increased expression of a large number of such genes, resulting in improved performance, as mastitis or Staphylococcus aureus infections often occur during lactation [45,46]. With regard to CTSS (cathepsin S), which encodes a protease, Sodhi et al. (2021) found higher expression of most of the cathepsin genes (CTSS, CTSD, and CTSK) during the mid-to-late lactation stages, emphasizing their potential roles in milk synthesis, since the expression of most of the proteases was higher during peak lactation [47]. The higher expression of cathepsin-encoding genes during late lactation stages could be attributed to the fact that CTSS plays a crucial role in mammary-gland involution [48]. Previous studies have also described a role for the CTSK gene in influencing cheese traits, as it encodes a protease that acts on casein and is linked to milk protein [49]. Similar results were found when evaluating the expression patterns of important protease-pathway-associated CTSK genes across different lactation stages in Sahiwal cows and Murrah buffalo: in both breeds, the RNA-expression levels of these genes were higher in the late lactation stages than in the early lactation stages. Lactation induces bone loss in order to provide sufficient calcium in milk; to counteract this, CTSK expression is elevated to increase osteocyte numbers and maintain the balance between bone and milk calcium [50,51], indicating that the CTSK gene plays an important role in milk-calcium traits. Furthermore, a differential expression analysis of Churra and Assaf sheep highlighted genes that were significantly and differentially expressed between the two breeds; these genes were mainly associated with protein-protease activity. CTSK was among the differentially expressed genes and was selected as a candidate gene associated with cheese traits in that study [49]. These findings confirm previous results highlighting the importance of the expression of genes encoding certain proteases in sheep milk. The aryl hydrocarbon receptor nuclear translocator (ARNT) interacts with the aryl hydrocarbon receptor (AHR). The AHR is restricted to the cytoplasm in its unbound state [52,53]; once activated, it translocates to the nucleus and forms an active complex with ARNT to alter the expression of target genes [54]. It has been reported that the association between AHR activation and ARNT causes changes in milk production [55]. More specifically, pregnant mice exposed in vivo to TCDD (an AHR agonist) produced lower levels of the milk proteins β-casein and whey protein [56]. The GHR (growth hormone receptor) is a regulator of growth and, through growth hormone signaling, has important effects on carbohydrate, protein, and lipid metabolism. In cattle, mutations in GHR have been associated with milk yield and composition in the Ayrshire, Holstein, and Jersey breeds. Dettori et al.
(2018) reported that variations in the ovine GHR gene might affect milk-quality traits in Sarda sheep [57]. The rs55631463 GHR SNP genotype affects milk fat and protein yield, and the rs411154235 SNP is associated with the lactose content of milk, as it promotes the transformation of glucose into glycogen and its cellular deposition [58].
In our study, apart from FCGR3A, the strongest gene candidates, CTSK, CTSS, and ARNT, were not found to be significantly related to milk-production quantity. This was potentially due to the small number of Lacaune sheep selected to validate their function. Moreover, since our objective was to detect milk-yield genes rather than milk-composition genes, it was difficult to directly establish relationships for these genes, which may instead participate in milk-protein or milk-fat synthesis. Thus, further work is needed in the future to test whether these genes are correlated with other sheep-milk traits. In particular, Dairy Meade sheep should be tested, since this breed, or crosses in which it is involved, comprised a high proportion of the high-milk-yield group.
Conclusions
In conclusion, our research is the first attempt to report a genome-signature-selection scan revealing genes associated with high- and low-milk-yield sheep breeds by using whole-genome sequencing. Some of the most significant candidate genes associated with milk yield were identified through a combination of genome-signature-selection scanning and RT-qPCR experimentation. Between the high- and low-yield groups, 553 genes were detected that were enriched in significant protein- and receptor-binding pathways. From the regions under selective pressure, we selected the four genes most likely to be related to milk-production traits, FCGR3A, CTSS, CTSK, and ARNT, in order to perform a correlation analysis, in which FCGR3A was found to have a significant relationship with daily milk yield, consistent with its participation in the immune response. Furthermore, since higher expression of this gene accompanies more severe mastitis, its expression is negatively correlated with milk production. These results were supported by the correlation between the FCGR3A ∆CT values and the estimated values of the corresponding ewes. Therefore, these results could provide a better genetic perspective on the phenotypic differences between different-milk-yield groups for similar studies. Our findings provide insight into the dynamic characterization of sheep-mammary-gland gene expression, and the identified candidate genes can provide valuable information for future functional characterization, as well as contributing to a better understanding of the genetic mechanisms underlying milk-production traits in sheep. | 2023-05-19T15:13:27.576Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "aac1c05ca4c78cdecc9b3d22057182f38f7f0374",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ani13101654",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "55bbb5fb620420557980187534da7718a92634a1",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253082969 | pes2o/s2orc | v3-fos-license | CCHCR1-astrin interaction promotes centriole duplication through recruitment of CEP72
Background The centrosome is one of the most important non-membranous organelles regulating microtubule organization and progression of cell mitosis. The coiled-coil alpha-helical rod protein 1 (CCHCR1, also known as HCR) gene is considered to be a psoriasis susceptibility gene, and the protein is suggested to be localized to the P-bodies and centrosomes in mammalian cells. However, the exact cellular function of HCR and its potential regulatory role in the centrosomes remain unexplored. Results We found that HCR interacts directly with astrin, a key factor in centrosome maturation and mitosis. Immunoprecipitation assays showed that the coiled-coil regions in the C-termini of HCR and astrin, respectively, mediated the interaction between them. Astrin not only recruits HCR to the centrosome, but also protects HCR from ubiquitin-proteasome-mediated degradation. In addition, depletion of either HCR or astrin significantly reduced centrosome localization of CEP72 and subsequent MCPH proteins, including CEP152, CDK5RAP2, and CEP63. The absence of HCR also caused centriole duplication defects and mitotic errors, resulting in multipolar spindle formation, genomic instability, and DNA damage. Conclusion We conclude that HCR is localized and stabilized at the centrosome by directly binding to astrin. HCR is required for the centrosomal recruitment of MCPH proteins and for centriolar duplication. Both HCR and astrin play key roles in maintaining normal microtubule assembly and genomic stability. Supplementary Information The online version contains supplementary material available at 10.1186/s12915-022-01437-6.
Background
Microtubules constitute an essential part of the cytoskeleton, maintaining cell shape and regulating mitosis [1]. During mitosis, microtubules extend from the centrioles, forming a spindle [2][3][4]. As the microtubular organization center, the centrosome is composed of a pair of centrioles and pericentriolar materials (PCM, also known as pericentriolar satellites) [5,6]. Centrioles, which display polar barrel-shaped structures with radial symmetry, play a key role in the organization of centrosomes [6]. The number of centrioles in a cell is strictly regulated by the cell cycle. In the G1 phase, there is only one centrosome, which contains two isolated centrioles. PCM proteins are gradually recruited to the centrioles as the cell enters the S phase, and new procentrioles are formed at the proximal end of the existing centrioles. During the G2 phase, two centrosomes appear after duplication, and each contains two closely attached centrioles, which ensures that the daughter cells receive one centrosome with two centrioles after mitosis [7]. The PCM consists of various proteins, including pericentriolar material 1 (PCM1), pericentrin, and numerous members of the centrosomal protein (CEP) family, such as CEP152, CEP63, and CEP215 (also named cyclin-dependent kinase 5 regulatory subunit-associated protein 2 (CDK5RAP2)) [8]. These CEPs are not a family in terms of homology, but they are all located in centrosomes, some near the centriole and others in the outer part of the PCM, where they perform different functions [9]. This complex structure of multiple, intertwined proteins is considered a platform for regulating organelle transport, spindle assembly, and cilia formation [10][11][12].
Astrin, a centrosome-related protein, which is also named sperm-associated antigen 5 (SPAG5) or mitotic spindle-associated protein p126 (MAP 126), dynamically localizes to the PCM, spindle poles, or outer kinetochores at different stages of the cell cycle. It participates in maintaining the dual-polarization of the spindle, the connection between microtubules and kinetochores, and the cohesion between sister chromatids, ensuring that mitosis proceeds properly. Deletion or mutation of astrin can lead to mitotic errors, such as spindle multipolarization and chromosome separation failure [13][14][15][16]. In the centrosome, astrin is involved in the assembly of microcephaly (MCPH) proteins during interphase, which promotes centriole duplication [17]. The high expression of astrin is also positively correlated with the malignant degree of many tumors, indicating that its role in the centrosome is crucial [18][19][20].
Coiled-coil alpha-helical rod protein 1 (CCHCR1 or HCR) is a centrosome- and processing body (P-body)-localized protein composed of multiple coiled-coil domains [21][22][23]. Although HCR has been widely reported as a susceptibility gene of psoriasis in genome-wide association studies, its function in cells is far from clear [24][25][26][27]. HCR interacts with the mRNA-decapping protein 4 (EDC4) in the P-body, a special membraneless organelle dedicated to regulating mRNA decay and storage [23,[28][29][30]. However, the specific function of HCR in the P-body is unknown. HCR also exhibits a wide range of roles in various physiological processes, such as cell proliferation and steroid production [31,32], and is also associated with alopecia areata, type-2 diabetes, and squamous cell carcinoma [33][34][35]. Interestingly, HCR has been predicted to interact with a series of centrosome- and mitosis-related proteins, such as PCM1, centrin, astrin, and CEP72, which suggests that HCR may participate in PCM networks and processes related to centrosome replication and mitosis [23].
In this study, we present evidence indicating that HCR is a key regulator of centrosome replication and microtubule organization. We show that HCR is localized and stabilized at the centrosome by directly binding to astrin. We also demonstrate that both HCR and astrin are required for the centrosome recruitment of CEP72 and MCPH proteins, including CEP152, CEP63, and CDK5RAP2. These findings provide a deeper understanding of the molecular function of HCR and are helpful for better exploring the role of HCR in psoriasis and other diseases.
HCR interacts with spindle-associated astrin and localizes at the centrosome and spindle
In previous reports, exogenous HCR has been found to localize to the centrosomes and P-bodies, and several P-body- and centrosome-associated proteins have been identified as candidate interactors with HCR [23]. In this study, we also examined the binding partners of CCHCR1 by proximity-dependent biotinylation (BioID)-coupled mass spectrometry (LC-MS/MS). Similar to the data reported by Ling et al., we found astrin and mRNA-decapping protein 4 (EDC4) on the identified list (Table 1). Reciprocal immunoprecipitations were performed in HeLa cells to confirm the interaction between HCR and astrin. The endogenous immunoprecipitation experiments showed that astrin and HCR bound together, as they were co-precipitated (Fig. 1A, Additional file 1: Fig. S1A). In 293 cells and U2OS cells, the exogenous and endogenous immunoprecipitation experiments showed the same results (Additional file 1: Fig. S1B). To further investigate whether there is a direct interaction between HCR and astrin, a GST pull-down assay was performed, and the results showed that HCR directly interacted with astrin in vitro (Fig. 1B).
To map the binding sites between the two proteins, we analyzed the domains of HCR and astrin according to other studies [36,37] and the SMART sequence analysis tools. Astrin consists of one unstructured region and two coiled-coil regions, whereas HCR contains three coiled-coil regions. Accordingly, we constructed a series of plasmids expressing truncated forms of HCR tagged with GFP or astrin tagged with myc (Fig. 1C). Immunoprecipitation assays revealed that the C-terminus of HCR (aa 441-782, coiled-coil 3, CC3) and the C-terminus of astrin (aa 893-1193, coiled-coil 2, CC2) mediated the interaction between them (Fig. 1D-F). In order to confirm whether there is a direct interaction in vitro, we also constructed a plasmid expressing GST-tagged astrin and a series of plasmids expressing truncated forms of His-tagged HCR to perform a GST pull-down experiment. The result confirmed that the third region of HCR interacted with astrin in vitro (Fig. 1G), which was consistent with the co-IP results in vivo.
Previous studies have reported that astrin is located in the centrosome and spindle [13,17]. To more precisely examine the intracellular localization of HCR, we generated a stable HeLa cell line transfected with GFP-tagged HCR (Additional file 2: Fig. S2A). Immunofluorescence (IF) staining showed that stably transfected HCR co-localized with astrin. In addition, the IF image of GFP-tagged-astrin-transfected HeLa cells co-stained with HCR and gamma-tubulin showed that astrin and HCR were co-localized in the centrosome (Fig. 2A). While the CC2 domain of astrin was sufficient to be recruited by the kinetochore [38], we also found that it was co-localized with the CC3 domain of HCR around the centrioles (Additional file 2: Fig. S2B). This further confirms that HCR and astrin bind to each other through their C-termini. In mitotic cells, HCR showed spindle localization indicated by alpha-tubulin, similar to that of astrin (Additional file 2: Fig. S2C). To confirm that the spindle localization of HCR is real and reliable, we also knocked down HCR by RNA interference (RNAi), and the results showed that the spindle localization of HCR disappeared (Fig. S2D). Also, GFP-tagged HCR showed co-localization with astrin throughout mitosis (Fig. 2B). Since both HCR and astrin co-immunoprecipitated with PCM1 (Fig. 2C), HeLa cells were stained with HCR and PCM1. The results showed that HCR overlapped only with the edges of the PCM1 signal throughout the cell cycle, except for telophase, which suggests that HCR may function as a bridge between the PCM and centriole (Fig. 2D). To further investigate whether HCR is also recruited to the centrosome via the microtubule transport system, as PCM1 is, we disrupted the balance of microtubules using either the microtubule inhibitor nocodazole or the microtubule stabilizer paclitaxel. Both treatments caused centrosome disintegration and disrupted the localization of HCR (Fig. 2E, Additional file 2: Fig. S2E), suggesting that the localization of HCR requires balanced microtubule dynamics. We also investigated whether HCR localization is regulated by PCM1 and pericentrin. Depletion of either PCM1 or pericentrin resulted in the delocalization of HCR from the whole centrosome (Fig. 2F), which indicated that the centrosome localization of HCR was controlled by both PCM1 and pericentrin. In turn, the knockdown of HCR did not affect PCM1 localization (Additional file 2: Fig. S2F). These results indicate that HCR is indeed a centrosome-associated protein and is under the control of the PCM platform.
Astrin deubiquitinates HCR and is essential for its centrosomal localization
To further analyze the functional relationship between HCR and astrin, we used siRNA to knock down astrin and HCR in HeLa cells. Interestingly, depletion of astrin simultaneously reduced the protein level of HCR, while the protein level of astrin did not change after knockdown of HCR (Fig. 3A), and the decrease of HCR caused by depletion of astrin was not due to apoptosis or changes in the cell cycle (Additional file 3: Fig. S3A). Correspondingly, transient transfection of GFP-astrin in HeLa cells also increased the expression of endogenous HCR (Fig. 3B). These results suggested that astrin positively regulated the protein level of HCR. Additionally, IF staining showed that more HCR was recruited to the centrosome in cells overexpressing astrin as compared to astrin-depleted cells (Fig. 3C, D). By contrast, the depletion of HCR did not affect the centrosomal localization of astrin (Fig. 3E).

Fig. 1 Direct interaction of HCR with astrin. A Reciprocal co-immunoprecipitation analysis of HCR binding to astrin. HeLa cell lysates were immunoprecipitated with astrin, HCR, or control rabbit IgG antibodies and analyzed by western blotting with anti-astrin and anti-HCR antibodies. Anti-GM130 and anti-beta actin antibodies were used as negative controls. B GST pull-down assay of the interaction between astrin and GST-tagged HCR. Total lysates of HeLa cells expressing GFP-astrin were incubated with GST alone or GST-HCR purified from bacterial cells. Precipitates were detected with an anti-GFP antibody. C Schematic models of the deletion mutants of HCR and astrin. D Co-immunoprecipitation analysis of the astrin-binding domain on HCR. GFP vector alone or each HCR-GFP fragment were co-transfected with myc-astrin into HeLa cells, and then, lysates were immunoprecipitated with an anti-myc antibody and analyzed by anti-myc and anti-GFP antibodies. E Co-immunoprecipitation analysis of the HCR-binding domain on astrin. PCMV-myc empty vector or each myc-astrin fragment was co-transfected with HCR-GFP into HeLa cells, and then, lysates were immunoprecipitated with an anti-myc antibody and analyzed by anti-GFP and anti-myc antibodies. F Co-immunoprecipitation analysis of the interactive domains between HCR and astrin. GFP vector alone or GFP-HCR-CC3 fragment was co-transfected with myc-astrin-CC2 into HeLa cells, and then, lysates were immunoprecipitated with an anti-myc antibody and analyzed by anti-GFP or anti-myc antibodies. G In vitro analysis of the domain in HCR required for interacting with astrin. GST-tagged astrin and His-tagged HCR fragments were purified from E. coli strain BL21(DE3), and a pull-down assay was performed to examine the astrin-binding domain in HCR.
To address the mechanism by which astrin affects the expression of HCR, we first examined whether astrin regulates HCR at the mRNA level. Real-time quantitative PCR results showed that the knockdown of astrin did not change the mRNA expression of HCR, suggesting that the regulation does not occur at the transcriptional level (Fig. 3F). Since the ubiquitin-proteasome pathway is one of the most common protein degradation pathways in mammalian cells [39], we speculated that astrin may affect the ubiquitination of HCR and then reduce the degradation of HCR. To test this hypothesis, an astrin knockout (KO) HeLa cell line was generated using the CRISPR/Cas9 technology and was verified by western blot (Additional file 3: Fig. S3B). It was shown that the level of HCR protein decreased significantly after astrin knockout. However, there was almost no difference in the expression level of HCR between astrin-KO and parental HeLa cells when treated with the proteasome inhibitor MG132 (Fig. 3G). Furthermore, immunoprecipitation analysis showed that the loss of astrin caused an increase in ubiquitinated HCR (Fig. 3H). Taken together, these results indicate that astrin protects HCR from ubiquitin-proteasome-mediated degradation and therefore maintains the protein level of HCR. Next, we questioned whether astrin was also responsible for the localization of HCR. The IF image in Fig. 3I showed that the recruitment of HCR on the centrosome was enhanced in HeLa cells treated with MG132. However, in astrin-KO cells treated with MG132, the centrosome localization of HCR did not significantly increase (Fig. 3I). Collectively, these results suggest that astrin not only protects HCR from ubiquitinated degradation, but also is responsible for the centrosome localization of HCR.
Both HCR and astrin contribute to the centrosome localization of CEP72
Another candidate binding partner of HCR is CEP72, a centrosome protein localized to the PCM [17,40]. Both astrin and CEP72 are essential for the centrosome localization of a series of MCPH proteins, such as CDK5RAP2 (CEP215), CEP152, and CEP63, which ensure the successful duplication of centrioles [17]. Since astrin directly binds to CEP72, we wondered whether the association between HCR and CEP72 is direct or mediated by astrin or other proteins. Co-IP and GST pull-down assays confirmed that HCR directly binds to CEP72 with the third coiled-coil domain (Fig. 4A, B). As a cell cycle-dependent protein, the expression level of astrin changes at different stages of the cell cycle [15]. To examine the expression pattern of HCR and CEP72 in the cell cycle, HeLa cells at each cycle stage were obtained by the double-thymidine block method and analyzed by western blotting. It was revealed that the protein level of HCR increased from S to G2/M phase, peaked in the M phase, and then significantly decreased in the G1 phase, which was almost consistent with that of astrin, whereas the peak expression of CEP72 was later than that of astrin and HCR (Fig. 4C), suggesting that CEP72 might be under the regulation of astrin and HCR. In order to better understand whether astrin and HCR regulate CEP72, an HCR-knockout (KO) HeLa cell line was generated using CRISPR/Cas9 technology and was verified by western blotting (Additional file 2: Fig. S2A). Knocking out either HCR or astrin significantly reduced the signal of CEP72 on the centrosomes (Fig. 4D), while the expression level of CEP72 was almost unaffected (Additional file 4: Fig. S4). On the other hand, the depletion of CEP72 by siRNA did not affect the signals of HCR and astrin on the centrosomes (Fig. 4E).
HCR recruits MCPH proteins to centrioles and promotes centriole replication
Previous studies have revealed that centriole duplication relies on the centrosome localization of MCPH-associated proteins and PCM proteins. Among them, depletion of astrin or CEP72 reduced the recruitment of MCPH proteins, such as CEP152 and CEP63, to the centrosome, resulting in the inability of the centriole to duplicate properly from two to four foci [17,41]. We found that knocking down HCR, astrin, or CEP72 by using siRNA lowered the 4 centriole foci ratio (Fig. 5A) and reduced (See figure on next page.) Fig. 2 HCR co-localizes with astrin at the centrosome and mitotic spindle. A HeLa cells stably expressing HCR-GFP (green) were stained with an astrin antibody (red) and DAPI (blue) followed by confocal microscopy analysis (left panel); HeLa cells transfected with GFP-astrin (green) were stained with HCR (red) and gamma-tubulin (cyan) antibodies and DAPI (blue) for nuclear staining (right panel); scale bars, 10 μm. B Mitotic HeLa cells stably transfected with HCR-GFP (green) were stained with astrin (red), gamma-tubulin (cyan), and DAPI (blue); scale bars, 10 μm. C HeLa cell lysates were immunoprecipitated with control rabbit IgG or anti-HCR and detected by immunoblotting for HCR and PCM1. Beta-actin was used as a negative control (left panel) or immunoprecipitated with astrin and analyzed by western blotting for astrin and PCM1. Beta-actin was used as negative control (right panel). D HeLa cells were synchronized and stained with PCM1 (green), HCR (red), and DAPI (blue); scale bars, 10 μm; inset scale bars, 1 μm. E HeLa cells were treated with DMSO, 2 μg/ml nocodazole, or 1 μM paclitaxel and then stained with anti-HCR (red), anti-gamma-tubulin (green), and DAPI (blue); scale bars, 10 μm. F HeLa cells were transfected with the indicated siRNA and then immunostained for HCR (red) and gamma-tubulin (green). The nucleus was stained with DAPI (blue); scale bars, 10 μm; inset scale bars, 1 μm the signals of CEP152 and CEP63 on the centrosomes (Fig. 5B) while not affecting their protein levels (Fig. 5C).
In turn, depletion of CEP152 and CEP63 by siRNA did not affect the localization and expression levels of HCR, astrin, or CEP72 (Fig. 5D, E). Furthermore, immunoprecipitation analysis showed that HCR had no direct interactions with CEP152 and CEP63 (Additional file 4: Fig. S5). In addition to CEP152 and CEP63, another MCPH protein closely related to astrin-CEP72 recruitment is CDK5RAP2, which is also responsible for ensuring the replication of the centrosome [17]. Consistent with the results of astrin and CEP72 in the work of Kodani et al., the depletion of HCR by siRNA also caused the delocalization of CDK5RAP2 (Fig. 5F). These results suggested that, like astrin, HCR is also a key factor determining the centrosome localization of MCPH protein.
Depletion of HCR impedes microtubule assembly due to the loss of centrosome localization of CEP72
One of the most important roles of the centrosome is to regulate microtubule dynamics. PCM proteins play a critical role in the recruitment and assembly of microtubules. A previous study showed that depletion of CEP72 affected the nucleation activity of the microtubules and therefore decreased microtubule regrowth [40]. Similar results were obtained after the depletion of astrin and HCR by siRNA in HeLa cells (Fig. 6A) and RPE cells (Additional file 4: Fig. S6). It was reported that the destruction of the microtubule organization center could increase the length of microtubule plus-end tracking protein EB1 along the microtubules, which represents a decrease in the polymerization speed of the MT plus ends [42]. Compared with mock-treated cells, depletion of HCR, astrin, and CEP72 by siRNA caused a longer staining length of EB1, indicating that the polymerization of microtubules was slowed down (Fig. 6B). Together, these results revealed that lack of any of these three proteins could lead to microtubule nucleation defects and abnormal localization of EB1.
Since the interaction between HCR and CEP72 relied on the C-terminal coiled-coil of HCR (CC3), we transfected GFP-tagged HCR-CC3 into HeLa cells to observe the effect on microtubule organization. In IF images, overexpressed HCR-CC3 showed many large puncta all over the cytoplasm, and the endogenous CEP72 was captured into these puncta, thus losing centrosome localization (Fig. 6C). This phenomenon indicated that overexpressed HCR-CC3 functioned as a dominantnegative inhibitor of endogenous HCR activity. Moreover, the microtubule organization center was seriously disrupted in HCR-CC3-transfected cells, which was in strong contrast to the clear microtubule aster in the surrounding non-transfected cells (Fig. 6D). These results provided further evidence that HCR-dependent centrosome localization of CEP72 is essential for microtubule organization.
Depletion of HCR results in mitotic defects, DNA damage, and decreased tumor proliferation
Apart from their roles in centrosome replication, depletion of astrin or CEP72 also led to mitotic spindle pole defects and mitotic arrest [15,40]. Cell cycle analysis by Astrin protects HCR from ubiquitination and ensures the centrosome localization of HCR. A Negative control, astrin, and HCR siRNA-treated HeLa cells were analyzed by western blotting with antibodies against HCR, astrin, and beta-actin. The relative abundance of HCR protein was normalized to beta-actin and statistically analyzed. Error bars represent the mean ± SD of three independently performed experiments (n = 3); **P < 0.01 and ***P < 0.001 (Student's t test). The individual data values were provided in Additional file 6: Raw Data. B HeLa cells transfected with GFP alone or GFP-astrin were immunoblotted for GFP, HCR, and beta-actin. The relative protein levels of HCR were statistically analyzed across three independent experiments (n = 3). Error bars represent the mean ± SD; ***P < 0.001 (Student's t test). The individual data values were provided in Additional file 6: Raw Data. C GFP alone or GFP-astrin-transfected HeLa cells were subjected to immunostaining with HCR (red), gamma-tubulin (cyan), and DAPI (blue); scale bars, 10 μm; inset scale bars, 1 μm. The relative intensity of HCR in the centrosome was normalized to gamma-tubulin and statistically analyzed. Error bars represent the mean ± SD; **P < 0.01 (Student's t test). D Negative control, astrin, or HCR siRNA-treated HeLa cells were co-stained with HCR (red), gamma-tubulin (green), and DAPI (blue); scale bars, 10 μm; inset scale bars, 1 μm. The relative intensity of HCR in the centrosome was normalized to gamma-tubulin and statistically analyzed. One hundred cells (n = 100) per group were counted for each condition from three independent experiments. Error bars represent the mean ± SD; ***P < 0.001 (Student's t test). E Negative control, astrin, and HCR siRNA-treated HeLa cells were co-stained with astrin (red), gamma-tubulin (green), and DAPI (blue); scale bars, 10 μm; inset scale bars, 1 μm. The relative intensity of HCR in the centrosome was normalized by gamma-tubulin and statistically analyzed. One hundred cells (n = 100) per group were counted for each condition from three independent experiments. Error bars represent the mean ± SD; ***P < 0.001; ns, no significance (Student's t test). flow cytometry showed that almost half of the HCR-KO cells remained in M phase, while almost all the parental HeLa cells returned from M phase to G1 phase (Fig. 7A). This indicated that the loss of HCR might also lead to mitotic spindle defects and mitosis progression arrest.
In mitotic cells, depletion of HCR by siRNA also caused multipolar spindle formation, suggesting that the absence of HCR could prevent the normal assembly of spindles (Fig. 7B, Additional file 4: Fig. S7). Similar results were obtained when astrin or CEP72 was knocked down, which is consistent with previous studies (Fig. 7B, Additional file 4: Fig. S7) [16,40]. During the assembly of mitotic spindles, securin, a negative regulator of separase, can inhibit the production of activated separase before the onset of anaphase, which maintained the integrity of the mitotic centrosomes [43][44][45][46]. We showed here that securin was significantly downregulated, and separase was upregulated in HCRdepleted mitotic cells, similar to that in the astrin-depleted or CEP72-depleted cells (Fig. 7C) [15,16]. This means that the absence of HCR, astrin, and CEP72 can cause abnormal activation of separase, which in turn leads to the polar division of the spindle to form a multi-polarization structure.
In addition, we found an increased ratio of micronuclei in HCR-depleted cells, which indicates frequent chromosome segregation errors (Fig. 7D). In line with this phenomenon, IF results showed that phosphorylation of the DNA damage checkpoint kinases ATM (Fig. 7E) and gamma-H2AX (Fig. 7F) was increased in HCRdepleted cells. Western blot analysis showed that phosphorylation of Chk2 was also increased in HCR-depleted cells (Fig. 7G). These results suggested that the depletion of HCR caused frequent mitotic errors, resulting in genomic instability and DNA damage response.
Astrin is also thought to be related to tumorigenesis [47][48][49]. To address whether HCR is involved in it, a colony formation assay was conducted. It showed that the knockout of astrin or HCR significantly impeded the colony formation ability of HeLa cells (Fig. 7H). To further verify that HCR knockdown could lead to a decrease in tumor proliferation, we constructed a subcutaneous transplantation tumor model in athymic mice. Tumor size in mice transplanted with either astrin-KO or HCR-KO cells was significantly smaller than that of mice transplanted with parental HeLa cells (Fig. 7I). These data indicated that loss of HCR is associated with a decrease in tumor proliferation, which may be due to a mitosis defect and genomic instability caused by HCR deletion.
Discussion
HCR was initially reported as a centrosome and P-bodyrelated protein [23,32]. However, little is known about its cellular function and how it localizes to the centrosome. In this study, we provided evidence that HCR acted as an important link in the centrosomal protein recruitment chain. In fact, a variety of centrosomal components assemble at the centrosome in a PCM1-dependent manner, including centrin, ninein, astrin, and CEP131 [10]. PCM1 may deliver these proteins to the centrosome via the dynein-dynactin motor system [10,50]. HCR is undoubtedly one of them because either depolymerization of the microtubule system or knockdown of PCM1 made HCR lose centrosome localization. Like astrin, CEP72, and CEP131, HCR did co-immunoprecipitate with PCM1. However, we would like to emphasize that astrin may play a more important role in maintaining centrosome localization of HCR. A previous study reported protein interaction between astrin and CEP72 [17]. Here, we show that astrin, HCR, and CEP72 interact with each other. Further analysis showed that astrin is in the most upstream position, which is essential for the centrosome localization of HCR and CEP72. HCR is in the middle, which does not affect astrin localization, but is required for CEP72 centrosome recruitment, while CEP72 is at the most downstream, which does not affect the positioning of HCR and astrin. However, we found that astrin was essential for stabilizing HCR (See figure on next page.) Fig. 4 HCR directly binds to and ensures the centrosomal localization of CEP72. A Co-immunoprecipitation analysis of HCR binding to CEP72. HeLa cell lysates were immunoprecipitated with CEP72, HCR, or control rabbit IgG antibodies and analyzed by western blotting with anti-CEP72 and anti-HCR antibodies. Beta-actin was used as a negative control (left penal). GFP alone or GFP-HCR-CC3 plasmid was transfected into HeLa cells and immunoprecipitated using an GFP antibody. The precipitates were detected by immunoblotting with antibodies to GFP and CEP72 (right panel). B In vitro binding assay of HCR coiled-coil domains with CEP72. GST alone, GST-tagged CEP72, and His-tagged HCR fragments were purified from E. coli strain BL21(DE3), and a pull-down assay was performed to examine the CEP72-binding domain in HCR. C HeLa cells released from double-thymidine arrest were harvested at each time point and were analyzed by immunoblotting with antibodies against HCR, astrin, CEP72, cyclin B1, cyclin E, HURP, and beta-actin. D Negative control, CEP72 siRNA-treated HeLa cells, astrin-KO cells, and HCR-KO cells were co-stained with CEP72 (red), gamma-tubulin (green), and DAPI (blue); scale bars,10 μm; inset scale bars, 1 μm. For quantitative analysis, the intensity of CEP72 at the centrosome was normalized by gamma-tubulin. One hundred cells (n = 100) per group were counted from three independent experiments. Error bars represent the mean ± SD. ***P < 0.001 (Student's t test). E Negative control or CEP72 siRNA-treated HeLa cells were co-stained with HCR (red) and gamma-tubulin (green) antibodies and DAPI (blue) for nuclear staining (upper panel) or co-stained with astrin (red) and gamma-tubulin (green) antibodies and DAPI (blue) for nuclear staining (lower panel). For quantitative analysis, the intensity of HCR (upper panel) or astrin (lower panel) at the centrosome was normalized to gamma-tubulin. One hundred cells (n = 100) per group were counted from three independent experiments. 
and CEP72, whereas HCR and CEP72 had no significant effect on the protein level of astrin. It is worth noting that Kodani et al. reported that astrin and CEP72 stabilize each other, which differs from our results. The potential of a centrosome to anchor microtubules requires the correct assembly of a subset of proteins. According to the recruitment chain described by Kodani et al., CDK5RAP2 is recruited to the centrosome by astrin and CEP72, followed by CEP152, WDR62, and CEP63 in a stepwise, hierarchical manner, and finally comes CDK2, a protein kinase critical for centriolar duplication [17]. The localization of HCR is in the middle of PCM1 and cen-trin1 (Additional file 5: Fig. S8), which means that it may act as part of the chain linking PCM and centriole. We did find that depletion of HCR phenocopied the effect of astrin or CEP72 depletion on the centrosomal localization of CDK5RAP2. Accordingly, the centrosomal localization of CEP152 and CEP63, two factors downstream of CEP72, were also regulated by HCR, but no direct interactions were detected (Additional file 4: Fig. S5).
In addition, we found that there was an interaction between HCR and CEP131 (also named AZI1) (Additional file 5: Fig. S9), which is consistent with the predictions of Ling et al. [23]. In the study of Kodani et al., CEP131, as a pericentriolar satellite protein, was responsible for ensuring the localization of CEP152 [17]. The interaction between HCR and CEP131 suggests that the recruitment of these MCPH proteins to the centrosome is more complicated than currently known. Like HCR, CEP131 is also considered to play an important role in maintaining genomic stability and tumor proliferation [51,52].
One of the important roles of astrin in mitosis is to strengthen the connection between microtubules and the outer kinetochore of the chromosome, allowing the chromosome to withstand the tension generated by the spindle fibers. In this process, astrin forms a complex with SKAP, MYCBP, and LC8 at kinetochore microtubules [36,37,53]. However, our results did not support an interaction between HCR and this complex (Additional file 5: Fig. S10). Although there is no evidence that HCR localizes to the kinetochore, it is still possible that HCR indirectly influences the role of astrin at the kinetochore, for example the transport of astrin between the spindle pole and the kinetochore, as NuMA does [54]. Interestingly, we also found an interaction between HCR and NuMA (Additional file 5: Fig. S11). There may be an unknown relationship between NuMA and HCR on the spindle, which can affect or be affected by astrin to participate in the assembly and activity of mitotic spindles. Alternatively, HCR may be associated with regulatory enzymes, such as Plk-1 or PP1, which control the phosphorylation state of astrin at the kinetochore [36,55,56].
Another important role of astrin is in sister chromatid cohesion during mitosis: the presence of astrin prevents premature activation of separase before the onset of anaphase [15,16]. In this study, we found that knockdown of HCR increased the level of the active form of separase in M-phase cells (Fig. 6E), suggesting that HCR is likely to affect sister chromatid cohesion. These critical mitotic processes are regulated by the Aurora kinases, a key family of kinases in charge of mitosis [57][58][59][60]. Since the Aurora kinases regulate the activation of astrin during mitosis, it is definitely worth exploring whether they also regulate HCR [53,61,62].
HCR is also localized to P-bodies and interacts with EDC4. Astrin was reported to recruit raptor to stress granules (SGs) upon oxidative stress, where it colocalized with G3BP1, an SG marker [63]. In fact, P-bodies and SGs are closely linked in function [64]. Interestingly, we also found that HCR co-localized with astrin and EDC4 in HeLa cells treated with arsenite (Additional file 5: Fig. S12), and the centrosomal protein CEP85 has also been linked to P-bodies [65]. Furthermore, we found that EDC4 co-localized with HCR at the centrosome and in punctate staining around the spindle during mitosis (Additional file 5: Fig. S13). Additionally, a pair of P-bodies was found to reside at the centrosome in U2OS cells, as well as in diverse non-malignant cells [66,67]. Although the mechanism is unknown, knockdown of some P-body components by RNA interference impaired primary cilium formation in human astrocytes [67]. Further in-depth study of HCR may reveal clearer functional links between the two structures.
(Figure legend, panels A-F; the beginning of panel A is truncated:) siRNA-treated HeLa cells were co-stained with centrin-1 (green) and DAPI (blue). For quantitative analysis, the number of centrioles in each cell was counted for a total of 100 cells from three independent experiments. Error bars represent the mean ± SD; **P < 0.01 (Student's t test); scale bars, 10 μm. B Negative control, astrin, HCR, and CEP72 siRNA-treated HeLa cells were co-stained with anti-CEP152 (red), anti-gamma-tubulin (green), and DAPI (blue) for nuclear staining (upper panel), or co-stained with anti-CEP63 (red), anti-gamma-tubulin (green), and DAPI (blue) for nuclear staining (lower panel). For quantitative analysis, the intensity of CEP152 or CEP63 at the centrosome was normalized to gamma-tubulin. Cells (n = 100 per group) were counted from three independent experiments. Error bars represent the mean ± SD; ***P < 0.001 (Student's t test); scale bars, 10 μm; inset scale bars, 1 μm. C Negative control, astrin, HCR, and CEP72 siRNA-treated HeLa cells were analyzed by immunoblotting with antibodies against astrin, HCR, CEP72, CEP152, CEP163, CDK5RAP2, and beta-actin. D Negative control, CEP152, and CEP63 siRNA-treated HeLa cells were co-stained with HCR (red), gamma-tubulin (green), and DAPI (blue). For quantitative analysis, the intensity of HCR at the centrosome was normalized to gamma-tubulin. Cells (n = 100 per group) were counted from three independent experiments. Error bars represent the mean ± SD; ns, no significance (Student's t test); scale bars, 10 μm; inset scale bars, 1 μm. E Negative control, CEP152, and CEP63 siRNA-treated HeLa cells were analyzed by western blotting using antibodies against astrin, HCR, CEP72, and beta-actin. F Negative control and HCR siRNA-treated HeLa cells were co-stained with CDK5RAP2 (red), gamma-tubulin (green), and DAPI (blue) for immunofluorescence detection. For quantitative analysis, the intensity of CDK5RAP2 at the centrosome was normalized to gamma-tubulin. Scale bars, 10 μm; inset scale bars, 1 μm. Cells (n = 100 per group) were counted from three independent experiments. Error bars represent the mean ± SD; ***P < 0.001 (Student's t test).
In a more macroscopic direction, elucidating the intracellular mechanisms of HCR also contributes to the understanding of various diseases. Recent reports have proposed that HCR is closely related to alopecia areata, psoriasis, and diabetes [31,33,68]. HCR-deficient mice showed stress-induced alopecia [35]. Since primary cilia play an important role in the development of hair follicles, the role of HCR in ciliogenesis deserves future attention. In addition, the interaction between HCR and astrin also suggests that HCR might be related to cancers. Numerous reports have confirmed that astrin overexpression is often associated with malignancy, so HCR, as a protein regulated by astrin, may also be upregulated in tumor tissues [20,48,49,53,69,70]. Moreover, analysis of the TCGA database (portal.gdc.cancer.gov) also revealed that HCR transcription is significantly increased in a variety of tumors (Additional file 5: Fig. S14). Although we did not investigate whether HCR is involved in tumorigenesis, this possibility exists given its role as a regulator of the cell cycle and mitosis. Interestingly, HCR may even have a potential link with COVID-19 [71]. In fact, linking the microtubule system and the centrosome to virus infection is not a new idea. Previous studies have found that retrovirus infection, such as with human immunodeficiency virus type 1 (HIV-1), can alter centrosome function [72]. Although there is no clear evidence as to whether HCR is a direct target in COVID-19, further in-depth research may help explain the biological mysteries of the centrosome and provide substantial clinical value.
Conclusion
In conclusion, our results reveal a role at the centrosome for the previously overlooked P-body protein HCR, whereby HCR interacts with astrin to recruit CEP72 and MCPH proteins to the centrosome and ensures efficient centriole duplication and other centrosome-related functions such as spindle-pole formation and microtubule organization (Fig. 8). Therefore, HCR not only acts as a P-body component, but also plays an important role in centrosome function and the stability of the genome.
cDNA, plasmids, antibodies, and reagents
Human CCHCR1 cDNA (NM_019052) was amplified from HeLa cDNA by PCR and subcloned into the pEGFPN1 or pmCherryN2 vectors. Human astrin cDNA (NM_006461) in the pEGFPC2 vector was a gift from Dr. Yi-Ren Hong (Kaohsiung Medical University, Taiwan, China) [73] and was subcloned into the pCMV-Myc vector. CEP72 cDNA (NM_018140) was amplified from the pEBTet-CEP72-SNAP plasmid purchased from Addgene (plasmid #136819) and subcloned into the pEGFPN1 vector. Serial deletion fragments of the indicated regions of HCR and astrin were amplified from HCR and astrin cDNA, respectively, and subcloned into the pEGFPN1 and pCMV-Myc vectors, respectively.
Construction of KO cells
The HCR-KO cell line was created by using CRISPR-Cas9 in HeLa cells with sgRNAs as follows:
Cell cycle synchronization
HeLa cells and HCR-KO HeLa cells were first synchronized with 5 mM thymidine for 16 h, washed with phosphate-buffered saline (PBS) three times, and cultured in DMEM without thymidine for 12 h. After treatment with 5 mM thymidine for another 12 h, cells were released from thymidine and harvested at each time point according to experimental needs. To collect mitotic cells, cells were released for about 10 h from the double-thymidine block to reach prometaphase [54]. For separase and securin analysis in mitotic cells, cells were treated with siRNA for 72 h and incubated with nocodazole (100 ng/ml in medium) for another 16 h [15,16].
Plasmid transfection
HeLa cells were transfected with 15 μg of DNA plasmid in a 10-cm dish or 2 μg in each well of a 6-well plate.
Fig. 7 HCR depletion causes mitotic defects, DNA damage, and decreased tumor proliferation. A After release from double-thymidine arrest for the indicated time, parental HeLa cells and HCR-KO HeLa cells were fixed and stained with PI (DNA staining) for flow cytometry. The DNA content of cells is diploid (2N) in the G1 phase and becomes tetraploid (4N) from S to G2/M phase. When mitosis ends, the DNA content of the cell reverts from 4N to 2N. The cell cycle results were analyzed and plotted based on the DNA content of cells. A total of 10,000 cells were counted per group. B Negative control, astrin, HCR, and CEP72 siRNA-treated HeLa cells were treated with 100 ng/ml nocodazole for 16 h to arrest them in M phase and co-stained with gamma-tubulin (green), alpha-tubulin (red), and DAPI (blue). Cells (n = 100 each group) were counted from three independent experiments. Error bars represent the mean ± SD; *P < 0.05 (Student's t test). C Negative control, astrin, HCR, and CEP72 siRNA-treated HeLa cells were treated with 100 ng/ml nocodazole for 16 h to arrest them in M phase and were analyzed by immunoblotting with antibodies to securin, separase, HURP, cyclin B1, and beta-actin, and the relative protein levels of securin and cleaved separase were analyzed statistically. Error bars represent the mean ± SD of three independently performed experiments (n = 3); *P < 0.05 and **P < 0.01 (Student's t test). The individual data values are provided in Additional file 6: Raw Data. D Negative control and HCR siRNA-treated HeLa cells were stained with DAPI (blue). The quantified analysis was based on the percentage of cells containing micronuclei. Cells (n = 100) were counted from three independent experiments. Each bar represents the mean ± SD; ***P < 0.001 (Student's t test). Arrows indicate micronuclei. E Negative control and HCR siRNA-treated HeLa cells were co-stained with pATM (green) and DAPI (blue). The quantified analysis was based on the percentage of pATM-positive cells. Cells (n = 100 each group) were counted from three independent experiments. Error bars represent the mean ± SD; **P < 0.01 (Student's t test). F Negative control and HCR siRNA-treated HeLa cells were co-stained with gamma-H2AX (green) and DAPI (blue). The quantified analysis was based on the percentage of gamma-H2AX-positive cells. Cells (n = 100 each group) were counted from three independent experiments. Error bars represent the mean ± SD; **P < 0.01 (Student's t test). G Negative control and HCR siRNA-treated HeLa cells were analyzed by immunoblotting with antibodies to pCHK2, CHK2, and beta-actin. H Colony formation assays of parental HeLa cells, HCR-KO cells, and astrin-KO cells. I Parental HeLa cells, HCR-KO cells, and astrin-KO cells (1 × 10^6) were transplanted into athymic mice, and tumor sizes were measured every 3 days after the formation of a measurable tumor. Error bars represent the mean ± SD for different animal measurements (n = 5 each group); P < 0.01, one-way ANOVA for tumor weight analysis and two-way ANOVA for tumor size analysis. The individual data values are provided in Additional file 6: Raw Data.
Immunofluorescence imaging
For immunofluorescence imaging, cells plated on glass coverslips were fixed with cold methanol, blocked with 10% FBS, and probed with primary antibodies and then with secondary antibodies coupled to Alexa Fluor 488/555/594/647. DNA was stained with DAPI. Immunofluorescence images were acquired on an Olympus Confocal Laser Scanning Microscope FV3000 (Olympus Co., Tokyo, Japan) and processed with ImageJ (https://imagej.nih.gov/ij/download.html) when necessary.
Microtubule regrowth assay
siRNA-treated or plasmid-transfected cells were treated with 1 μM nocodazole on ice for 30 min to depolymerize the microtubules and were then released from cold and nocodazole for 0 min or 5 min to allow the microtubules to repolymerize. For the microtubule regrowth assay, the cells were fixed and co-stained with gamma-tubulin and alpha-tubulin to visualize the microtubule-organizing center and the microtubules, and the length of the microtubules in each cell was measured to compare the differences between the groups.
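As a rough illustration of the group comparison described above, the following Python sketch computes mean ± SD microtubule lengths and a two-tailed Student's t test for two groups; the length values and group labels are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-cell microtubule lengths (um) measured 5 min after release;
# values are illustrative placeholders, not measurements from this study.
control_lengths = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.2])
hcr_kd_lengths = np.array([2.9, 3.1, 2.7, 3.3, 3.0, 2.8])

def compare_groups(a, b):
    """Return (mean, SD) for each group and a two-tailed t-test P value."""
    t_stat, p_value = stats.ttest_ind(a, b)
    return (a.mean(), a.std(ddof=1)), (b.mean(), b.std(ddof=1)), p_value

(ctrl_mean, ctrl_sd), (kd_mean, kd_sd), p = compare_groups(control_lengths, hcr_kd_lengths)
print(f"Control: {ctrl_mean:.2f} ± {ctrl_sd:.2f} um")
print(f"HCR siRNA: {kd_mean:.2f} ± {kd_sd:.2f} um")
print(f"Two-tailed t test P = {p:.4f}")
```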
Cell flow cytometry
For cell cycle analysis, cells were trypsinized and fixed in 70% ethanol at 4 °C for 16 h, washed with PBS 3 times, and stained with 50 μg/ml propidium iodide (PI, DNA stain) and 0.025 mg/ml RNase A in PBS for 30 min at 37 °C. Cells were analyzed with a FACSCalibur (Becton Dickinson, Franklin Lakes, NJ, USA). The cell cycle results were analyzed based on the DNA content of cells. For statistical analysis, the results of 10,000 cells in each group were counted and plotted.
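To illustrate how a cell-cycle distribution can be read off DNA content, here is a minimal Python sketch that classifies cells as 2N (G1), intermediate (S), or 4N (G2/M) from PI intensities; the intensities, peak positions, and gating width are hypothetical stand-ins for values that would normally come from the cytometry software.

```python
import numpy as np

# Hypothetical PI fluorescence intensities for 10,000 cells (arbitrary units);
# in practice these come from the cytometer's data file.
rng = np.random.default_rng(0)
pi_intensity = np.concatenate([
    rng.normal(200, 15, 6000),   # 2N (G1) peak
    rng.normal(300, 30, 1500),   # S-phase spread
    rng.normal(400, 20, 2500),   # 4N (G2/M) peak
])

def cell_cycle_fractions(intensity, g1_peak=200.0, g2m_peak=400.0, width=0.15):
    """Classify cells as G1 (~2N), G2/M (~4N), or S (in between) by DNA content."""
    g1 = np.abs(intensity - g1_peak) <= width * g1_peak
    g2m = np.abs(intensity - g2m_peak) <= width * g2m_peak
    s = ~(g1 | g2m)
    n = len(intensity)
    return {"G1": g1.sum() / n, "S": s.sum() / n, "G2/M": g2m.sum() / n}

print(cell_cycle_fractions(pi_intensity))
```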
Real-time qPCR
Total RNA was isolated with TRIzol (Invitrogen, Waltham, MA, USA) and used for cDNA reverse transcription with the Goldenstar RT6 cDNA synthesis kit (Tsingke, Beijing, China). Quantitative PCR analysis of gene transcripts was performed using qPCR Master Mix (Promega, Madison, WI, USA) on a Jena qTOWER3 system, with the expression of GAPDH as the endogenous control.
Colony formation assay
Parental HeLa cells, HCR-KO HeLa cells, and astrin-KO HeLa cells were maintained in culture medium in a 10-cm dish for 2 weeks and then stained with Giemsa. The number of stained colonies was then counted.
Tumor xenografts
Animals were randomly divided into three groups with 5 mice per group. Parental HeLa cells, HCR-KO HeLa cells, or astrin-KO HeLa cells were injected into the subcutaneous prothorax of 6-week-old athymic mice (BALB/c, Guangzhou Medical Animal Center, Guangzhou, China) at 1 × 10^6 cells per mouse. After visible tumors were observed, tumor size was measured every 3 days and calculated according to the following formula: length × width. The measurements and data processing were performed with blinding. All mice received humane housing and diet during the experiment. At the end of the experiment, all mice were euthanized humanely, and the subcutaneous tumors were excised and weighed. This study was approved by the Animal Care Committee of Shenzhen University Science Health Center.
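The tumor-size formula above (length × width) can be applied to serial caliper readings as in the short sketch below; the measurement values are invented purely for illustration.

```python
# Hypothetical caliper measurements (mm) taken every 3 days for one tumor: (length, width).
measurements = [(5.0, 4.0), (7.5, 5.5), (10.0, 7.0), (12.5, 9.0)]

def tumor_size(length_mm, width_mm):
    """Tumor size as reported in the study: length x width (mm^2)."""
    return length_mm * width_mm

for day, (length, width) in zip(range(0, 3 * len(measurements), 3), measurements):
    print(f"Day {day}: {tumor_size(length, width):.1f} mm^2")
```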
Domains analysis
For the construction of the HCR fragment plasmids, the SMART sequence analysis tool (https://smart.embl-heidelberg.de) was used to analyze the protein domains.
Statistical analysis
For western blot results and immunofluorescence images, ImageJ (https://imagej.nih.gov/ij/download.html) was used to measure the intensity of the protein of interest. Microsoft Office Excel and GraphPad Prism were used to perform statistical analyses and graphing. For statistical analysis of blotting experiments, each experiment was performed three times independently. For statistical analysis of immunofluorescence images, 100 cells were counted from three independent experiments. All statistical results are presented as mean ± SD and were tested with a two-tailed Student's t test (GraphPad Prism software) to calculate the P-values between unpaired samples. Differences were considered statistically significant when P < 0.05. | 2022-10-24T14:29:06.631Z | 2022-10-24T00:00:00.000 | {
"year": 2022,
"sha1": "2e4013c857657ec57b96fa92b9bca21f39954134",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "2e4013c857657ec57b96fa92b9bca21f39954134",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266201251 | pes2o/s2orc | v3-fos-license | Tax and International Trade in the SADC Region: A Panel Gravity Model Approach
Purpose: The study investigates the effect of taxation on bilateral trade in the Southern African Development Community (SADC) region. It is motivated by the ongoing reviews of tax rates in the frame of SADC regional integration. Design/Approach/Methodology: The paper employs the Poisson pseudo maximum likelihood estimator with high-dimensional fixed effects (PPMLHDFE), which caters for multilateral resistance and helps ensure the accuracy and validity of the results, for the period 2012 to 2018. Findings: The results show that, during the period of the analysis, import tax for exporting countries significantly increases bilateral trade, while export tax for exporting countries increases bilateral trade and significantly reduces bilateral trade for importing countries in the region. International trade tax for exporting countries significantly reduces bilateral trade. Practical Implications: Authorities should formulate a more effective and rational approach to taxation, such as widening the tax net and revising tax rates downward for struggling companies, so that taxes do not become a hindrance but rather a pivotal determinant of trade, growth, and development in the region. Originality/Value: This paper is unique because it is the first to examine the impact of taxation (import tax, export tax, and international trade tax) on bilateral trade in the SADC region, employing the standard Poisson pseudo maximum likelihood gravity model approach, which accommodates heteroskedasticity and zero trade flows.
Introduction
Global and domestic international trade prevail because of variations in the availability of resources and the existence of the comparative advantage principle. Given the growing level of innovation, technology, and globalization, international trade has become a prominent facilitator of economic growth (Sohail et al., 2021). In a fast-changing world, international trade ties between states are inevitable and essential (Durguti and Malaj, 2022).
International trade is rich in history and it is accompanied by greater benefits to trading economies as it is associated with a greater share of world production of goods and services, a boom in economic activity, a boost in the level of savings and foreign direct investment, enhancing economic growth as well as the development of nations.It also helps countries accomplish substantial developmental goals, such as poverty reduction, curbing high unemployment rates, food security, fair and equitable inclusive policies, health, and environmental sustainability (Sohail et al., 2021).
Furthermore, trade benefits are bound to vary from nation to nation based on their economic, political, regional, and strategic situation (Durguti and Malaj, 2022).Recently, many countries have enacted and negotiated trade initiatives towards commercial integration for their economic performance and national well-being.SADC (2020) further purports that countries that develop trade with others through liberalization trade policies boost economic growth while elevating their people's quality of life.
Against this background, the analysis of this study focuses on the Southern African Development Community (SADC) Free Trade Area member states, where member states consent to eliminating trade barriers against one another but remain free to impose their own external tariffs on non-member states, in order to foster economic cooperation among member countries.
In similar light, several countries tried to develop new types of trade agreements with the assistance of international organizations such as the WTO, the IMF, and the World Bank.These new trade agreements aimed to create new business opportunities for emerging economies after a significant reduction in their trade restriction levels (Durguti and Malaj, 2022).
However, developing and least-developed nations continue to face significant tariff and non-tariff restrictions on trade, despite the growth of trade and the involvement of these international organizations, which has led governments to resort to tax policies to shield their domestic products and enhance competitiveness and trade openness (Longoni et al., 2009). Taxation's impact on international trade has been uncertain. Corporate income taxes (CIT), as Holzner (2021) discovered, diminish exports and imports.
As evidenced by the SADC indicators in Figure 1, the SADC region's average real GDP growth rate has been declining dramatically, from 6.8 percent in 2007 to 2.1 percent in 2019, before contracting by 4.8 percent in 2020. In 2009, the region's average growth was 0.2 percent due to the global financial crisis, which put a strain on fiscal activities in many economies globally, reducing household incomes, wealth, and consumption, and hence economic growth.
The 4.8 percent contraction in the region in 2020 was likely due to COVID-19 containment measures, such as global restrictions that disrupted and held back economic activity, weakening prospects in the countries' main trading partners, as well as low external demand, which badly impacted the mining and manufacturing industries (UNESCO, 2021). Moreover, markets are competitive and their prices are unstable, responding to variations and fluctuations in demand and supply. Hence, the need to improve performance in bilateral trade has informed the redesign of recent trade policy.
However, there exists no empirical study showing the extent to which the last tax reform was effective.It is on this premise that this research contributes to the existing stock of literature in 2 ways; first, the study investigates the effect of different taxes on bilateral trade in the SADC region.The different taxes stem from different policy interests and vary with countries.
For its empirical analysis, the study investigates the effects of these taxes on bilateral trade with the aid of a gravity model, which gives more insight into the intra-regional trade system in SADC. Lastly, the study employs a relatively recent approach, the gravity model estimated with the Poisson pseudo maximum likelihood method, which caters for heterogeneity and zero trade values, previously the major limitations of this model.
To the researchers' knowledge, analysis of the impact of taxation on bilateral trade in the SADC region is deficient in the literature, as most studies focus on trade policies and international trade in individual countries, while a few focus on developed economies. This study therefore adds to the limited research on less-developed countries by looking at the Southern African Development Community (SADC) region. SADC member countries possess substantial potential for economic growth and development, so understanding taxation and its impact on bilateral trade will help policymakers implement robust policies that support ambitious objectives to fight high poverty in the region and to guarantee peace, stability, and sustainability.
The paper is structured logically, starting with the introduction, followed by a literature review, then methodology, and a discussion of the results, completed with the conclusions and implications for policymaking.
Literature Review
There are numerous arguments from previous literature that a greater tax burden weakens productivity in the economy and trade performance, resulting in a decline in exports in the longer term. Beck and Chaves (2013) examined the macroeconomic impacts of various taxes on trade competitiveness in OECD countries by employing a gravity model using panel data from 25 OECD countries. They further examined the influence of average effective rates of taxation on expenditure, labour earnings, and investment earnings on trade openness. Beck and Chaves's (2013) findings corroborated previous arguments that high tax burdens indeed negatively impact exports.
Moreover, Khair-Uz- Zaman et al. (2011) explored the potential of bilateral trade between Pakistan and Turkey, employing a gravity model technique derived from Newton's Law of Gravitation.Both regression and correlation analyses were performed on secondary time series data from 1998 to 2008.Correlation analysis evidenced that trade between the two countries correlates strongly to GDP and income per capita, albeit uncorrelated to distance.Both of their methods promoted the concept of trade between Pakistan and Turkey, which can provide economic success to both countries.Agbeyegbe et al. (2006) examined the linkage between free trade facilitation and income from taxation, as well as the interaction between fluctuations in exchange rates, the rate of inflation, and earnings from taxes, using a panel of 22 Sub-Saharan African nations, from 1980 to 1996.Trade liberalization has been proxied using two different indicators, international trade as a percentage of GDP and the ratio of import tariffs to imports value.The authors performed a Generalized Method of Moment regression.Evidence proved that the relation between trade opening and tax revenue is susceptible to the approach used to approximate trade liberalization, but that in general, trade liberalization is not closely interrelated with aggregate tax revenue, and however, it corresponds to higher income tax revenue by one measure.Alinaghi and Reed, (2021), Khumbuzile and Khobai, (2018) and Macek, (2014) similarly studied the impact of individual types of levies on GDP growth.Alinaghi and Reed (2021) conducted a systematic review of the effect of taxes on economic development in OECD nations.Macek (2015) assessed the influence of specific forms of taxes on economic development using a model-based approach on OECD nations from 2000 to 2011, while Khumbuzile and Khobai (2018) for the time frame spanning 1981 to 2016, used the ARDL technique to evaluate the influence of revenue taxation on GDP growth in South Africa.The empirical results revealed that the vast majority of tax systems were extremely important and related to a country's economic growth.
In a 2013 study, Solleder determined the trade-related consequences of taxes on exports relying on the estimation of a log-linearized traditional gravity approach, employing a Panel Export Taxes (PET) dataset encompassing 20 exporting nations and 169 importing counterpart nations from 2000 to 2011.Solleder (2013)'s findings on the other hand suggested that the financial strain of export taxes is borne by both exporters and importers and that export taxes contribute to an increase in global prices.
The contribution of this research is to broaden the deficient empirical literature on taxation and bilateral trade using the most recent data available for the SADC nations included in the analysis, by employing another version of the gravity model, the standard Poisson pseudo maximum likelihood gravity approach, which accommodates heteroskedasticity and zero trade flows. This information cannot be obtained from time series or traditional panel studies but is essential for informing countries about the effect of their tax policies on international trade within SADC.
Data Description
The study uses a stacked time series with a balanced panel of 13 SADC countries, and employs a gravity model of international trade, with a quantitative approach method relying on secondary data for the time frame spanning from 2012 to 2018.These countries are Angola, Botswana, Kingdom of Lesotho, Republic of Madagascar, Malawi, Mauritius, Republic of Mozambique, Namibia, Seychelles, South Africa, United Republic of Tanzania, Zambia, and Zimbabwe.
The dependent variables together with the independent variables included in the study are bilateral trade (dependent variable), import tax, export tax, international trade tax, GDP per capita, lending rates, investment, inflation, corruption control, political instability and voice and accountability.
Table 1 summarizes the statistics for all of the variables used in this investigation.The mean score for net exports for the SADC region is 121.26 million dollars.Table 1, column 1 contains the names of the variables used in the empirical analysis.
The mean value of the variables represents the central tendency.When the mean value is high, it shows that there is more power in central tendency (McHugh and Hudson-Barr, 2003;Sohail et al., 2021).
The standard deviation, or SD, for each of the variables, indicates how far the estimates deviate from the average value of the variable.It is significant and reliable statistical data (Sohail et al., 2021).When the standard deviation value is smaller, it shows that the estimates are closer to their mean values and less volatile, while a large value of the standard deviation shows that the estimates are far from their mean values and more volatile.
From the results, we see that investment spending averaged only 9 percent for exporting countries while it averaged 8 percent for importing countries in a year throughout the research period in the area.The minimum spending on investment is about 2.7 percent in Seychelles and the highest spending on investment of about 10.9 percent in South Africa.
The inflation rate averaged approximately 6.1 percent for exporting countries and 8.4 percent for importing countries.Zimbabwe had a negative and the smallest inflation rate of -2.41 percent in 2015 while Angola had the highest inflation of almost 30.7 percent in 2016.
GDP per capita averaged 3.35% for exporting countries and 3.34% for importing countries with a minimum of 2.5% from Malawi and a maximum of 4.23% from Seychelles.Lending rates averaged approximately 15.1 percent and 20 percent for exporting and importing countries respectively, with a minimum rate of 6.5 percent charged in Botswana and a maximum rate of 60 percent charged in Madagascar.
There is a lack of control over corruption with a mean of -0.339 for exporting countries and -0.254 for importing countries on average.Zimbabwe has the lowest index of -1.42, while Seychelles has a maximum index value of 1.182.In the SADC region, Seychelles relative to the rest of the member nations over the research period performed far better concerning reducing corruption.
On average, there is a lack of political stability, with a mean of 0.008 for exporting countries, and a mean of 0.123 for importing countries.Mozambique had the lowest index of -1.094, while Botswana had a maximum index of 1.104.
Botswana's economy is more stable politically, than other countries in the SADC region.Voice and accountability has a mean of -0.114 for exporting countries, and a mean of -0.06 for importing countries, with a minimum index of -1.47 from Zimbabwe, and a maximum index of 0.94 from Mauritius.
On average, import taxes are considerably higher than international trade taxes and export taxes for the sample countries in the region, reaching 16.9 percent in 2012. GDP per capita is below average, while lending rates are much higher on average, reaching 18.6 percent in 2016. Investment is low in the region, indicating that high lending rates impede borrowers from accessing funds from their banks to engage in trade. From Table 2, there is no multicollinearity in the variables used for the empirical analysis. There is a negative correlation between inflation and GDP per capita. Distance and bilateral trade show a negative relationship, as expected from the literature. Source: Author's calculations using WGI, WDI, DOTS, and CEPII data.
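As a hedged illustration of the multicollinearity check summarized from Table 2, the snippet below computes a pairwise correlation matrix for a set of regressors; the file name and column names are placeholders standing in for the study's actual dataset and variables.

```python
import pandas as pd

# Hypothetical panel of bilateral observations; file and column names are placeholders.
df = pd.read_csv("sadc_bilateral_panel.csv")
regressors = ["import_tax_i", "export_tax_i", "trade_tax_i",
              "ln_gdppc_i", "ln_gdppc_j", "ln_dist", "inflation_j", "lending_rate_j"]

# Pairwise Pearson correlations among the regressors.
corr = df[regressors].corr()
print(corr.round(2))
# Absolute pairwise correlations well below ~0.8 are commonly taken as a sign
# that multicollinearity is unlikely to distort the gravity estimates.
```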
Estimation Strategy/Technique
The empirical technique is centred on the PPML estimation approach, with the control variables entering in log-linearized form. Net exports enter the model in level form, as portrayed by equation 4.
The data have a panel structure, and the convention consistent with estimation theory for this type of dataset requires controlling for country-pair fixed effects and country-time fixed effects for both importer and exporter countries. Dissimilarity between country pairs has been adjusted for by including country-pair fixed effects, to mitigate bias generated by heterogeneity across countries and to control for endogeneity issues that may be caused by omitted variable bias and reverse causality (Correia et al., 2020). Also, standard errors should be clustered at the country-pair level to allow for endogeneity in the policy variables.
Further, country-time fixed effects will be incorporated in the regression to account for worldwide economic repercussions that may affect trade, and time-varying multilateral resistance, that is, the barriers to trade that each country faces with all its trading partners.The PPML estimation method is the only remarkably accurate pseudo maximum likelihood estimator for gravity equations that is ideal for models with high-dimensional fixed effects and under extremely minimal requirements (Santos Silva and Tenreyro, 2022).
Because PPML is a non-linear model, likelihood-based goodness-of-fit measures are similarly invalid, which provides a reasonable theoretical justification for why OLS properties are not applicable to non-linear models of this kind (Santos Silva and Tenreyro, 2022).
Gravity Model
This study's theoretical foundation is centred around Newton's law of gravity, and the gravity model technique founded on Newton's Law of Gravitation is employed (Anderson, 1979; Bergstrand, 1985; Khair-Uz-Zaman et al., 2011; Tinbergen, 1962). The model's estimates adjust for spatial and other observable and unobservable country attributes and allow the study to focus on bilateral trade in the SADC region, whereas most studies have focused on international trade in different countries.
The framework assumes that trade between two nations increases in proportion to the product of their GDPs or GDPs per capita (Vavrek, 2018). It also postulates that trade decreases with increasing distance. This is because proximity minimizes transportation as well as information costs (Khair-Uz-Zaman et al., 2011).
The gravity model has its foundation in physics and the Newtonian Law of Gravitational Attraction which asserts that a particle's gravitational pull draws in other particles in space with an attraction that is directly related to the products of their masses and inversely related to the square of the distance between their centres (Vavrek, 2018).
Newton proposed that the force of attraction between the two elements is given by:

Fij = G (Mi × Mj) / D²ij (1)

Where: Fij is the force of attraction; G is the gravitational constant; Mi × Mj is the product of the two countries' masses; Dij is the distance between the two countries; and i and j are the trading countries.
This force of attraction (1) is used by economists to provide an analogue for trade between two countries. They hypothesized that trade in goods exerts a similar attracting power between two large economies, that is, economies whose GDP is non-zero.
Trade between two countries i and j is expressed as:

Tij = A Yi^β1 Yj^β2 / Dij^β3 (2)

Where: Tij is trade between exporting country i and importing country j; A is a constant; Y is GDP; Dij is the distance between countries i and j; and the β's are parameters.
Equation 2 assumes that the larger the size of the economies, the more they are obligated to trade with one another.On the contrary, if the distance between the countries is short, those countries may engage in trade more easily with one another.
The gravity model of trade is multiplicative, so equation 2 expresses trade as a product of the other variables. It can be estimated by applying the natural logarithm to both sides of the multiplicative form, which breaks the products into sums.
Equation 2 then becomes:

ln Tij = ln A + β1 ln Yi + β2 ln Yj − β3 ln Dij (3)

Despite the gravity model's empirical success in accurately predicting trade flows, the approaches used to deal with heteroscedasticity and with the large number of zero trade observations are among the issues surrounding the accuracy of trade estimates (Solleder, 2013), and some estimation practices have been subject to criticism.
This paper further adopts the Poisson pseudo maximum likelihood approach introduced by Gourieroux et al. (1984). PPML was advocated by Silva and Tenreyro (2006) and Yotov et al. (2016), who presented a straightforward remedy for these concerns after criticizing the procedures of the log-linearized gravity trade models.
They argued that the gravity equation in its multiplicative form can be estimated directly by employing a Poisson pseudo maximum likelihood (PPML) estimation technique, which naturally accommodates zero observations. The PPML structure is an improved version of the generalized non-linear model (GNLM) framework that is resilient to multiple types of heteroscedasticity and overcomes the inefficiency problem, providing consistent estimates of the original nonlinear model (Mnasri and Nechi, 2019).
The estimation performed using PPML employs the command syntax developed by Correia et al. (2020) for the statistical package Stata (PPMLHDFE), which is more effective in the presence of sizable fixed effects because it allows one to incorporate as many countries as possible and thereby capture multilateral resistance.
The developments that the PPML approach has contributed to the estimation of gravity models have made it popular in the international trade literature, and it has become frequently utilized to estimate gravity equations (Durguti and Malaj, 2022; Levin et al., 2002; Martin and Pham, 2020; Yotov, 2012), among others. See Appendices 1-4 for variable descriptions.
Model specifications:
To ascertain the paper's objectives, the model utilized to examine the implications of import tax, export tax, and international trade tax on bilateral trade in the region is:

Xijt = exp[β1 ImpTaxit + β2 ExpTaxit + β3 IntTradeTaxit + β4 ImpTaxjt + β5 ExpTaxjt + β6 IntTradeTaxjt + γ′Zijt + πit + χjt + μij] × εijt (4)

with the coefficients on the tax variables and controls expected to take positive or negative signs in line with theory. Where: Xijt is bilateral trade between countries i and j at time t; Zijt is the vector of control variables (GDP per capita, lending rates, investment, inflation, corruption control, political instability, and voice and accountability); πit denotes exporter-time fixed effects; χjt denotes importer-time fixed effects; μij denotes country-pair fixed effects; and εijt is an error term covering the leftover effects, assumed to be independently and normally distributed with zero mean and constant variance.
Regression and correlation analyses are employed for the evaluation of data.
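To make the estimation step concrete, below is a minimal Python sketch of a PPML gravity regression in the spirit of equation 4, using a Poisson GLM with exporter-time, importer-time, and country-pair fixed effects entered as dummies and standard errors clustered by country pair. It mirrors, but does not reproduce, the Stata PPMLHDFE routine used in the paper, and the file and column names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical exporter-importer-year panel; file and column names are placeholders.
df = pd.read_csv("sadc_bilateral_panel.csv")
df["exp_time"] = df["exporter"] + "_" + df["year"].astype(str)
df["imp_time"] = df["importer"] + "_" + df["year"].astype(str)
df["pair"] = df["exporter"] + "_" + df["importer"]

# PPML: dependent variable (trade) in levels, Poisson family,
# fixed effects as categorical dummies, standard errors clustered by pair.
formula = (
    "trade ~ import_tax_i + export_tax_i + trade_tax_i"
    " + import_tax_j + export_tax_j + trade_tax_j"
    " + ln_gdppc_i + ln_gdppc_j"
    " + C(exp_time) + C(imp_time) + C(pair)"
)
model = smf.glm(formula, data=df, family=sm.families.Poisson()).fit(
    cov_type="cluster", cov_kwds={"groups": df["pair"]}
)
print(model.summary())
```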
Robustness Tests/Checks
Two key criticisms of gravity model analysis have always been the inability to handle zero trade values and the problem of heteroscedasticity. To address the issue of zero trade values, this study employed an inverse hyperbolic sine transformation to transform the negative and zero values of net exports, since in PPML the dependent variable cannot be negative. To transform the negative values, the paper employed the formula below:

IHS(y) = ln(y + √(y² + 1))

For heteroscedasticity, the ppmlhdfe command used in the paper provides output with robust standard errors, which ensures that the standard errors of the regression output are valid. Also, PPML caters for heteroskedasticity automatically (Correia et al., 2020).
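A small Python sketch of this inverse hyperbolic sine transformation is given below; the sample net-export values are illustrative only.

```python
import numpy as np

def inverse_hyperbolic_sine(x):
    """IHS transform: ln(x + sqrt(x^2 + 1)); defined for zero and negative values."""
    x = np.asarray(x, dtype=float)
    return np.log(x + np.sqrt(x ** 2 + 1.0))

# Illustrative net-export values (USD millions), including negative and zero entries.
net_exports = np.array([-250.0, 0.0, 121.26, 5000.0])
print(inverse_hyperbolic_sine(net_exports))
# Equivalent to NumPy's built-in np.arcsinh(net_exports).
```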
Research Results and Discussion
In this section, the paper presents the results of the PPMLHDFE regression analysis together with their discussion and interpretation, and further discusses the techniques for absorbing zero trade flows. The overall objective of the paper is to examine the effect of import taxes, export taxes, and international trade taxes on bilateral trade in the SADC region. The results have an R-squared of 0.7265, implying that 72.65 percent of the variation in bilateral trade is explained by the independent variables incorporated in the analysis. Based on the fitted model, on average and with all other factors held constant, a unit increase in the share of import tax revenue as a percentage of total tax revenue for exporting countries increases bilateral trade by 0.1567. For exporting countries to stimulate growth, they tend to export more to importing counterparts because the burden of import taxes is borne by importers only; as a result, the share of import tax revenue in total tax revenue positively influences bilateral trade.
As the share of import tax revenue in total tax revenue increases, the cost of importing rises, which reduces imports for the exporting countries and therefore increases their net exports. These results surprisingly conflict with the results of Suravonic (2010), who instead found a negative influence on trade and argued that a rise in taxes on imports as a share of total tax revenue alters trade because exporting countries do not have monopsony power in trade. Meanwhile, import tax shows a positive relationship with bilateral trade for importing countries, but this association is not significant at the 5% level in any of the 3 models. Further, a unit increase in the proportion of export tax revenue in total tax revenue is significant in all 3 models and, as expected, reduces bilateral trade for importing countries by 4.8321 while significantly increasing bilateral trade for exporting countries by 2.2245, on average and ceteris paribus. When export taxes increase, it becomes expensive for importers to purchase from foreign markets because taxes add to the initial price.
Imports will then fall for exporting countries, which in turn increases their net exports. Giordani et al. (2012) found that a price increase drives governments to enforce export barriers, resulting in a reduction in global supply and a subsequent rise in prices, which encourages additional restrictions on exports.
On average, a unit rise in the ratio of international trade tax revenue to total tax revenue reduces bilateral trade for exporting countries in models 1, 2, and 3. When the share of international trade tax revenue in overall tax revenue increases, it implies a rise in international trade tax. Earnings from investments abroad decline under high international trade tax charges, reducing investment for exporting countries because a larger share of profits from international investment flows back to the foreign economy, leaving only a small portion of the return on investment; this reduces exports for exporting countries and hence their net exports. Keen and Syed (2006), however, emphasized that increases in international trade tax, whether measured by earnings or tax rates, are connected with a temporary surge in net exports, a trend linked to driving investments internationally. Though insignificant at 5% in all models, international trade tax increases bilateral trade for importing countries.
A percentage point increase in GDP per capita remarkably and substantially increases bilateral trade for both exporting and importing countries, by 2.41% and 1.96% respectively, on average and with all other factors held constant. GDP per capita conveys a country's degree of development; when it rises, the standard of living increases, production in exporting countries and supply to importing countries increase, and exports and net exports rise significantly for exporting countries.
Similarly, a rise in the level of development of importing countries implies a reduction in imports due to less dependency on foreign markets, and hence an increase in net exports. The results are consistent across all 3 models. Other research studies reached similar conclusions. Schmitt et al. (2019), for example, found a positive correlation between the two variables and concluded that nations with higher GDP growth rates are likely to experience higher trade growth rates as a share of output.
In this study, for countries in the SADC region, geographical distance is linked to a significant decrease of 3.85% in bilateral trade volume on average. The results are consistent with the empirical literature and gravity theory, and they agree with the results of Disdier and Head (2008), who found a negative and substantial association between distance and trade even after controlling for major differences in samples and techniques.
Moreover, a percentage increase in investment for importing countries is significantly and positively associated with a 0.49% increase in bilateral trade on average, all other factors held constant.Investment is pertinent to a country's economic growth and development, so there is a greater circulation of goods and services domestically, reducing imports for importing countries, hence, an increase in net exports.
The findings are the same in magnitude and significance for all the specified models. The results complement those of Sohail et al. (2021), who established a strong and positive causal relationship between investment and trade. Investment in exporting countries reduces bilateral trade but is insignificant at 10%. A unit increase in the lack-of-corruption-control index for exporting countries reduces bilateral trade by 4.3917. The results are consistent for model 1 and model 2. When exporting countries suffer from a severe degree of corruption, resources are mismanaged and fairness in the economy is undermined. Also, the exploitation of public office for private gain makes citizens lose trust in their government, threatening market reliability. This imperils economic development, increasing imports and hence reducing net exports.
On average, a unit rise in the political instability index reduces the volume of bilateral trade by 4.0289. The results corroborate the findings of Qadri et al. (2020), who found that political instability badly weighs down trade in the long run. When there is high instability and terrorism in an economy, there is no harmony and the country's resources are mismanaged, driving away investors, undermining the business climate, and reducing exports.
The coefficient on perspectives of a country's residents' ability to participate in the selection of its government, as well as the freedom to express themselves, is positive and significant in the region for both exporting and importing countries.When citizens can voice out their expressions, there is a sense of belonging and prosperity, enhancing economic activity, thus an improvement in bilateral trade rising from a rise in exports and a drop in imports.
Conclusion and Recommendations
The study examined the effects of import tax, export tax, and international trade tax on bilateral trade in the SADC region. The relationship between taxation and international trade has attracted attention from researchers and policymakers, leading to a plethora of literature on the topic. This paper is unique because it is the first of its kind to examine the effects of import tax, export tax, and international trade tax on bilateral trade in the context of the SADC region. For its empirical analysis, the study utilized the Poisson pseudo maximum likelihood high-dimensional fixed effects approach to the gravity-type trade model, using secondary data for a panel of thirteen SADC member countries from 2012 to 2018.
The analysis of results divulged novel observations.During the period of the study, import tax for exporting countries significantly increases bilateral trade, while export tax for exporting countries increases bilateral trade, and export tax for importing countries significantly reduces bilateral trade in the region.International trade tax for exporting countries significantly reduces bilateral trade despite the inclusion of other control variables.The results on country size and distance corroborate the literature on the gravity model.As a result, the paper views GDP per capita as an essential factor influencing bilateral trade while it views increasing distance between countries involved in trade as a resistant factor for bilateral trade in the region.
Investment in importing countries significantly increases bilateral trade on average. Among the institutional variables, weak corruption control and political instability reduce the volume of bilateral trade, whereas voice and accountability enhances bilateral trade volume in the region. However, the paper found a trivial relationship between inflation, lending rates, and bilateral trade.
Though operating under a free trade area, the SADC region's trade performance has been observed to be slow, and its growth rate has been declining over the years. It is therefore of great importance to learn how taxation affects bilateral trade, so that policymakers can design and implement strategic taxation policies that attract trade, because a better tax regime is known to create a relaxed environment for trade and investment. It has also been observed that geographical distance is a hindrance to bilateral trade in the region.
The findings gave rise to the following recommendations: ➢ Taxation, though crucial for generating revenue for economies and protecting domestic commodities against foreign competition, negatively impacts bilateral trade in the region. Authorities should formulate more effective and rational approaches to taxation, that is, widen the tax net and reduce tax rates on struggling companies, so that taxes work well and do not become a hindrance but rather a pivotal determinant of trade, growth, and development in the region. Countries should lower export tax rates for their manufacturing enterprises and corporations to boost growth through lower prices. This will increase the level of exports to importing economies. Countries can encourage the import of raw materials and the export of finished goods by establishing price standards for raw material imports, or by lowering import tax rates on raw materials and lowering export tax rates on processed products.
➢ Further, distance is negatively correlated with bilateral trade.Because of high transportation costs, governments should invest in a more robust transportation infrastructure that links SADC member countries such as through railways and roads to curb high transportation costs and increase proximity and access to foreign markets.
➢ Indexes for institutions are very low. This implies poor governance performance in the region; there is a high rate of corruption and political instability. To increase intra-regional trade, public authorities should therefore devise and implement transparent policies that will mitigate high corruption.
Voice and accountability index (WGI, expected sign +): indicates opinions of a country's residents' ability to participate in the selection of their government, as well as their freedom of expression and association (Kaufmann et al., 2010).
Figure 1. Annual average real GDP growth rate, SADC region.
Table 1. Summary statistics of the variables used in the analysis. Source: Authors' calculations using WGI, WDI, DOTS, and CEPII data. | 2023-12-14T16:12:02.258Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "e9e79e2b4b05c4746190f1e93fa68d3eb6c543f9",
"oa_license": null,
"oa_url": "https://ijeba.com/journal/822/download/Tax+and+International+Trade+in+the+SADC+Region:A+Panel+Gravity+Model+Approach.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "669db751b0465d3a3b359f182c9f7399171f1b44",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
257089162 | pes2o/s2orc | v3-fos-license | The cardiolipin-binding peptide elamipretide mitigates fragmentation of cristae networks following cardiac ischemia reperfusion in rats
Mitochondrial dysfunction contributes to cardiac pathologies. Barriers to new therapies include an incomplete understanding of underlying molecular culprits and a lack of effective mitochondria-targeted medicines. Here, we test the hypothesis that the cardiolipin-binding peptide elamipretide, a clinical-stage compound under investigation for diseases of mitochondrial dysfunction, mitigates impairments in mitochondrial structure-function observed after rat cardiac ischemia-reperfusion. Respirometry with permeabilized ventricular fibers indicates that ischemia-reperfusion induced decrements in the activity of complexes I, II, and IV are alleviated with elamipretide. Serial block face scanning electron microscopy used to create 3D reconstructions of cristae ultrastructure reveals that disease-induced fragmentation of cristae networks are improved with elamipretide. Mass spectrometry shows elamipretide did not protect against the reduction of cardiolipin concentration after ischemia-reperfusion. Finally, elamipretide improves biophysical properties of biomimetic membranes by aggregating cardiolipin. The data suggest mitochondrial structure-function are interdependent and demonstrate elamipretide targets mitochondrial membranes to sustain cristae networks and improve bioenergetic function. Allen and Pennington et al. show that the cardiolipin-binding peptide elamipretide mitigates disease-induced fragmentation of cristae networks following cardiac ischemia reperfusion in rats. This study suggests that elamipretide targets mitochondrial membranes to sustain cristae networks, improving their bioenergetic function.
The biophysical organization of the mitochondrial inner membrane regulates bioenergetics. Studies spanning fifty years have described the intertwined relationship between mitochondrial structure and function 1,2, bolstered in more recent years by advances in imaging modalities [3][4][5]. The composition of the inner membrane is unique, comprised predominantly of phosphatidylethanolamine, phosphatidylcholine, and cardiolipin (CL). Notably, CL represents a structurally distinct anionic phospholipid enriched in the mitochondrial inner membrane 6,7. CL is postulated to exist in microdomains (i.e., distinct membrane regions enriched in CL) that influence mitochondrial structure-function 8. CL is found at negatively curved regions of the inner membrane, including cristae contact sites and along the inner leaflet of cristae tubules 6. CL is essential for protein import, localization, and assembly, profoundly influencing mitochondrial dynamics, energetics, and network continuity 9,10. Previous studies established oxidation and subsequent lowering of CL content across cardiac pathologies, including acute ischemia-reperfusion (I/R) 11,12 and heart failure [13][14][15]. Aside from exogenous perfusion with CL 16, which may only be applicable in experimental settings, there are currently no therapies that can improve mitochondrial function by targeting CL.
A number of cell-permeable, mitochondria-targeting peptides have emerged over the last two decades. This class of peptides typically contains residues of alternating cationic-aromatic motifs ranging from 4-16 amino acids (reviewed in ref. 17). Elamipretide (formerly known as MTP-131, Bendavia, SS-31) is a cell-permeable peptide currently being investigated in several clinical trials to mitigate mitochondrial dysfunction associated with genetic- and age-related mitochondrial diseases. This peptide consists of a tetrapeptide sequence of D-arginine-dimethyltyrosine-lysine-phenylalanine. Preclinical studies spanning numerous models and laboratories have demonstrated preserved mitochondrial function and cytoprotection with this peptide (reviewed in refs. [18][19][20]), although the mechanism of action has remained elusive. Previous work demonstrated that elamipretide interacted with CL 21, yet the physiological consequences of this interaction are not fully understood. In this study, we utilized high-resolution mitochondrial respiration and simultaneous reactive oxygen species emission assays, biophysical membrane models, and mitochondrial imaging (serial block-face scanning- and transmission electron microscopy) to test the hypothesis that elamipretide would improve post-ischemic mitochondrial structure-function by aggregating mitochondrial CL molecules.
Effects of I/R and elamipretide on mitochondrial respiration.
We first confirmed previous studies of myocardial uptake and mitochondrial localization using a TAMRA-conjugated elamipretide (Supplemental Fig. 1). Mitochondrial functional studies are presented in Fig. 1. In permeabilized ventricular fibers isolated after reperfusion ("Post-I/R" Fibers), respiratory control ratios (RCR; using glutamate/malate substrate) fell from 3.6 ± 0.2 in normoxic fibers to 1.9 ± 0.1 after I/R. This decrement was partially blunted with peptide treatment, with elamipretide leading to a post-I/R RCR of 2.5 ± 0.1. The substrate-uncoupler-inhibitortitration (SUIT) protocol employed indicated decrements (average of −78% during state 3) in mitochondrial respiration across complexes I-IV after ischemia-reperfusion. Post-ischemic administration of elamipretide improved mitochondrial respiration with complex I and II substrate by an average of 56% during state 3 conditions (P < 0.05 compared to ischemia-reperfusion alone, Fig. 1a), and tended to improve complex IV-dependent respiration (+21%). Improved mitochondrial bioenergetics was also supported by higher myocardial oxygen consumption in the intact heart in post-ischemic hearts receiving elamipretide.
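For readers unfamiliar with the metric, the respiratory control ratio reported above is simply state 3 (ADP-stimulated) oxygen flux divided by state 4 (non-phosphorylating) flux; the short Python sketch below computes it for hypothetical flux values, not the measured data.

```python
def respiratory_control_ratio(state3_jo2, state4_jo2):
    """RCR = ADP-stimulated (state 3) O2 flux / non-phosphorylating (state 4) O2 flux."""
    return state3_jo2 / state4_jo2

# Hypothetical O2 fluxes (pmol O2 / s / mg fiber) with glutamate/malate substrate.
normoxic_rcr = respiratory_control_ratio(state3_jo2=45.0, state4_jo2=12.5)
post_ir_rcr = respiratory_control_ratio(state3_jo2=22.0, state4_jo2=11.5)
print(f"Normoxic RCR: {normoxic_rcr:.1f}, Post-I/R RCR: {post_ir_rcr:.1f}")
```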
Effects of I/R and elamipretide on H2O2 emission. A major limitation of using fibers in the above paradigm (isolating fibers from the heart after I/R) was the inability to determine whether the observed protection of mitochondrial energetics was a cause or consequence of cardioprotection. Accordingly, we devised a series of studies to determine the efficacy of elamipretide in models where energetics can be measured during the insult. Permeabilized ventricular fibers (from normoxic hearts) were placed in a high-resolution respirometry chamber and respired (saturating ADP present, "state 3") until they consumed all of the oxygen in the chamber, thus inducing anoxia. To account for variability, each fiber preparation was normalized to its own pre-anoxia, normoxic value. H2O2 levels increased observably at the onset of reoxygenation. Elamipretide treatment reduced fiber H2O2 emission by 33% (Fig. 1b). This effect persisted whether H2O2 rates were expressed alone or normalized to the fiber's simultaneous oxygen consumption rate (−36%).
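The within-fiber normalization described here is simple arithmetic; a minimal sketch with hypothetical values:

```python
# Each fiber preparation is expressed relative to its own pre-anoxia (normoxic)
# baseline, either as raw H2O2 emission or per unit of simultaneous O2 consumption.
# Values are placeholders in arbitrary but consistent units.

def normalize_to_baseline(value, baseline):
    return value / baseline

def h2o2_per_o2(h2o2_rate, o2_flux):
    # H2O2 emission expressed per simultaneous oxygen consumption rate
    return h2o2_rate / o2_flux

fiber = {
    "h2o2_pre": 2.0, "o2_pre": 150.0,    # pre-anoxia baseline
    "h2o2_reox": 5.0, "o2_reox": 90.0,   # at reoxygenation
}

rel_h2o2 = normalize_to_baseline(fiber["h2o2_reox"], fiber["h2o2_pre"])
rel_h2o2_per_o2 = normalize_to_baseline(
    h2o2_per_o2(fiber["h2o2_reox"], fiber["o2_reox"]),
    h2o2_per_o2(fiber["h2o2_pre"], fiber["o2_pre"]),
)
```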
We then determined whether the mechanism of elamipretide involved a reduction in reactive oxygen species (ROS) emission through reverse electron transfer (RET), presented in Fig. 1c-e. There was a modest but statistically significant reduction in succinate-derived RET when mitochondria were treated acutely with elamipretide. This was reflected whether the H2O2 emission was integrated over a five-minute timespan after succinate addition (−13%) (Fig. 1d) or normalized to simultaneous oxygen flux (−18%) (Fig. 1d). As expected, treatment with rotenone abolished almost all RET (rates declined from around 40 pmol/sec*mg to around 5 pmol/sec*mg). After rotenone treatment there were no differences in the rates of H2O2 production between the saline- and elamipretide-treated mitochondria (Fig. 1e). We could not rule out that a subtle but statistically significant decrease in H2O2 emission after rotenone would become apparent with higher N's. Furthermore, the complex III blocker antimycin A was not added in these studies.
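The two ways of summarizing the RET response (an integrated response versus a steady-state rate normalized to O2 flux) amount to the following kind of calculation; a sketch with a synthetic trace, not study data:

```python
# Integrate the H2O2 emission trace over the 5 minutes following succinate
# addition, or take the final (steady-state) rate normalized to simultaneous O2 flux.
import numpy as np

t = np.linspace(0, 300, 61)                       # s, 5-minute window after succinate
h2o2_rate = 40.0 * (1.0 - np.exp(-t / 30.0))      # pmol/(s*mg), hypothetical trace
o2_flux = np.full_like(t, 120.0)                  # pmol/(s*mg), hypothetical

integrated_h2o2 = np.trapz(h2o2_rate, t)          # total emission over the window
steady_state_ratio = h2o2_rate[-1] / o2_flux[-1]  # H2O2 per O2 at steady state
```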
These mitochondrial function studies were accompanied by studies to determine the macromolecular/supercomplex assembly of electron transport system proteins (Supplemental Fig. 2). There was a 27% decrease in the supercomplex coupling (flux control factor) after I/R, which was improved by 10% with elamipretide (Supplemental Fig. 2D), although this did not directly correlate with changes in mitochondrial supercomplex band density (Supplemental Fig. 2A, C). There was a decrease in native complex V after I/R (−51%), which was abrogated with elamipretide (+47% versus I/R control) (Supplemental Fig. 2B, E). Although previous studies have extracted supercomplex bands from the gel and measured discernible respiration 22, in our hands the changes in respiration when substrates were given appeared to be an artifact, as they were observed even when non-loaded acrylamide gel was placed into the respirometer.
Effects of I/R and elamipretide on mitochondrial structure. Given the integral relationships between mitochondrial function and structure 4,23, we employed two different electron microscopy imaging modalities to determine mitochondrial morphology. Results from transmission electron microscopy are presented in Fig. 2, with representative images in Fig. 2a and Supplemental Fig. 3. Ischemia-reperfusion induced an 18% increase in mitochondrial swelling, with I/R-saline-treated hearts displaying a greater Feret diameter when compared to normoxic hearts (Fig. 2b, P < 0.05 versus control). Treatment with elamipretide did not markedly influence mitochondrial swelling based on transmission electron microscopy (TEM) imaging. I/R induced a 35% decrease in mitochondrial electron density (P < 0.05, Fig. 2c), which was attenuated with elamipretide treatment by 34%. Sarcomeric contracture (z-band width) was prominent with ischemia-reperfusion (1.48 μm normoxia vs. 1.18 μm I/R + saline, P < 0.05) and was not affected by post-ischemic elamipretide treatment (1.18 μm I/R + saline vs. 1.14 μm I/R + elamipretide). These sarcomere lengths were shorter than other reported values in whole fixed tissue, which may be due to prolonged perfusion with a crystalloid buffer or inconsistencies in the cutting plane of our samples 24-27.
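For readers unfamiliar with this type of TEM morphometry, the sketch below shows how per-mitochondrion Feret diameter and mean electron density could be extracted from a segmented image with scikit-image; it assumes a labeled mask already exists and is not the pipeline actually used in this study:

```python
# Hypothetical morphometry sketch (assumes scikit-image >= 0.18 for feret_diameter_max).
import numpy as np
from skimage import measure

def mito_morphometry(label_image, gray_image):
    """Per-mitochondrion maximum Feret diameter (pixels) and mean gray value."""
    props = measure.regionprops(label_image, intensity_image=gray_image)
    feret_px = np.array([p.feret_diameter_max for p in props])
    density = np.array([p.mean_intensity for p in props])
    return feret_px, density

# Usage, assuming `labels` is an integer-labeled mitochondrial mask, `tem` the raw
# image, and `nm_per_px` the calibrated pixel size:
# feret_px, density = mito_morphometry(labels, tem)
# feret_um = feret_px * nm_per_px / 1000.0
```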
Higher resolution serial block-face scanning electron microscopy (SBF-SEM) was employed to obtain more advanced structural insight into cardiac mitochondria after I/R. These data are presented in Figs. 3 and 4 (along with reconstructed three-dimensional movies in Supplemental Movies 1-3). Under normoxic conditions, approximately 70% of cristae were physically adhered to the inner boundary membrane, termed "contact sites". I/R injury led to a decrease (−37%) in the number of cristae contact sites (Fig. 3a, b; P < 0.05 versus normoxia). Post-ischemic administration of elamipretide blunted the loss of cristae adhered to contact sites (+23% versus I/R injury) (Fig. 3b; P < 0.05 versus I/R alone). Among adjoining mitochondria, intermitochondrial cristae network connectivity analysis indicated a substantial loss (−40%) in network connectivity between mitochondria after I/R (presented in Fig. 3c, d). Intermitochondrial cristae connectivity improved in post-ischemic hearts perfused with elamipretide by 24% (Fig. 3c, d).
(Fig. 1c-e legend: effects of elamipretide on reverse electron transport (RET) stimulated by succinate administration. c Representative trace of succinate-supported RET (N = 11-12 per group). d Elamipretide led to modest reductions in RET whether analyzed as an integrated response after succinate or at steady state (N = 11-12 per group). e Rotenone substantially reduced H2O2 emission, with no differences between saline and elamipretide groups (N = 12 for all groups). *P < 0.05 versus normoxic; #P < 0.05 versus saline.)
Separate SBF-SEM analyses were conducted to determine the extent of intra-mitochondrial cristae network connectivity. In these studies, we attempted to map the connectivity of cristae inside a subset of mitochondria using the rationale that contiguous cristae facilitate efficient energy transfer. We termed cristae that were interconnected "networked cristae", versus cristae that were disconnected from the network as "orphaned cristae". From the analysis, we rendered networked cristae yellow and orphaned cristae red. These data are presented in Fig. 4 (with reconstructed three-dimensional movies in Supplemental Movies 4-6). Interestingly, in normoxic hearts 90% of all cristae appeared to be networked with one another (Figs. 4b-d and S7). After reperfusion there was 22% more matrix volume swelling and a 22% loss of cristae volume, neither of which was prevented with elamipretide (Fig. 4a-c). However, elamipretide increased the number of "connected cristae" by 10% versus cristae that were orphaned from the network (Fig. 4a, d). Furthermore, cristae that were reconstructed appeared tubular. In healthy mitochondria, cristae were often long, formed a lattice, and attached to the boundary membrane. These features are similar to the "finger-like" digitiform cristae structure as previously described 28. Taken together,
these data indicate that elamipretide did not prevent mitochondrial swelling or the loss of total cristae at reperfusion, but improved cristae contact sites with the boundary membrane and reticular connectivity among the cristae network.
Mass spectrometry analyses of CL content and composition. The functional and structural importance of CL led us to conduct shotgun lipidomic studies for CL content and acyl chain composition. These data are presented in Fig. 5a, b, with the 20 most abundant CL species presented in Table 1. There was a 23% decrease in the total amount of CL after ischemia-reperfusion, as well as a 28% decline in the most abundant CL species (18:2-18:2-18:2-18:2) CL. The acute administration of elamipretide at the onset of reperfusion did not abrogate a reduction in CL content, the decrease in (18:2-18:2-18:2-18:2) CL species, or alter any of the other CL species examined (Fig. 5a, b and Table 1).
Biophysical studies of elamipretide. To better understand the biophysical interaction of elamipretide with cardiac mitochondrial membranes, we synthesized biomimetic membranes of the inner mitochondrial membrane. The model allowed us to tightly control lipid composition. Mitochondrial models were composed of biologically relevant lipids (see methods), and a CL content that represented 20% of the total lipids (consistent with content percentages seen across mammalian mitochondria 29 ).
Informed by our CL lipidomic studies (Fig. 5a, b), we modeled the effects of decreased CL content after ischemia-reperfusion in biomimetic mitochondrial membranes by examining the biophysical interactions using a Langmuir trough (which provided mean molecular pressure-area isotherms of compressed lipid monolayers). A 25% reduction of CL content ("I/R model") resulted in a 5% reduction in the mean molecular area at a physiological membrane pressure of 30 mN/m (Fig. 5c). Acute addition of elamipretide to I/R biomimetic membranes restored the mean molecular area in biomimetic monolayers with reduced CL (Fig. 5c) and oxidized CL (Fig. 5d). We also modeled a severe reduction of CL content (50%, more comparable to some genetic mitochondrial diseases) and saw a 7% improvement with elamipretide treatment versus I/R (albeit not restoring the area to non-pathological membrane levels). Notably, elamipretide treatment of biomimetic monolayers without CL present had no discernible effect on membrane behavior (Fig. 5c, left panel).
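Extracting the mean molecular area at a fixed surface pressure from an isotherm is a simple interpolation; a minimal sketch with placeholder isotherms (not the measured data):

```python
# Interpolate mean molecular area (Å² per lipid) at a target surface pressure
# (e.g., 30 mN/m) and compare compositions.
import numpy as np

def area_at_pressure(pressure_mN_m, area_A2, target=30.0):
    order = np.argsort(pressure_mN_m)            # np.interp needs increasing x
    return np.interp(target, pressure_mN_m[order], area_A2[order])

# hypothetical isotherms: control vs. reduced-CL ("I/R model") monolayers
p_ctrl = np.array([5.0, 15.0, 25.0, 35.0]); a_ctrl = np.array([110.0, 95.0, 82.0, 70.0])
p_ir = np.array([5.0, 15.0, 25.0, 35.0]); a_ir = np.array([106.0, 91.0, 78.0, 66.0])

a30_ctrl = area_at_pressure(p_ctrl, a_ctrl)
a30_ir = area_at_pressure(p_ir, a_ir)
percent_change = 100.0 * (a30_ir - a30_ctrl) / a30_ctrl   # e.g., a ~5% reduction
```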
To complement the monolayer studies, we synthesized biomimetic mitochondrial lipid vesicles of similar composition. In the absence of peptide, CL fluorescence (assessed by nonyl acridine orange (NAO) localization) was uniform across the vesicle membrane and was accompanied by no observable aggregation of adjacent vesicles. Upon the addition of elamipretide to biomimetic vesicles, the NAO signal clustered into enriched domains. This CL clustering colocalized with fluorescent TAMRA-elamipretide (Fig. 5e), providing a complement to our imaging in intact cells (Supplemental Fig. 1). Furthermore, addition of elamipretide promoted aggregation of adjacent lipid vesicles (Fig. 5e), suggesting a potential mechanism of action (Fig. 5f). This aggregation effect was not seen in vesicles devoid of CL, and our previous work has shown that proteins that do not associate with CL do not have this effect on CL aggregation 30. Additionally, we serendipitously discovered that CL-containing vesicles and elamipretide eventually precipitated when in solution together (Supplemental Fig. 4). Titration studies indicated that elamipretide aggregated CL-enriched vesicles, essentially saturating the signal at a molar ratio of one peptide to two CL (Supplemental Fig. 4).
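One way to estimate where such a titration saturates is to fit a simple saturation curve to the aggregation signal; a toy sketch with invented data points (the functional form is an assumption, not the analysis used here):

```python
# Fit aggregation signal versus peptide:CL molar ratio to a hyperbolic saturation
# curve and read off where the response plateaus.
import numpy as np
from scipy.optimize import curve_fit

def saturation(ratio, s_max, k):
    # simple hyperbolic saturation; response approaches s_max once ratio >> k
    return s_max * ratio / (k + ratio)

ratio = np.array([0.05, 0.1, 0.2, 0.35, 0.5, 0.75, 1.0])    # peptide per CL, hypothetical
signal = np.array([0.10, 0.22, 0.40, 0.58, 0.68, 0.72, 0.73])

(s_max, k), _ = curve_fit(saturation, ratio, signal, p0=(1.0, 0.3))
# A plateau developing near ratio ~0.5 would correspond to ~1 peptide per 2 CL.
```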
Discussion
This study advanced our understanding of mitochondrial structure-function in hearts exposed to ischemia-reperfusion. First, we found parallel decrements in mitochondrial cristae ultrastructure and respiratory function noted across electron transport system complexes after ischemia-reperfusion. Second, we used an innovative approach to map mitochondrial network connectivity in the heart, discovering decrements among and between mitochondrial cristae networks after ischemia-reperfusion. This aspect of our study employed serial block-face scanning electron microscopy (SBF-SEM) to determine and quantify changes in mitochondrial cristae structure and connectivity in cardiac pathology. Third, the studies provided new mechanistic insight into elamipretide, a clinical-stage peptide that appears to aggregate CL and improve mitochondrial membrane structure and bioenergetic function without preventing the acute, reperfusion-induced decrease in CL content. Finally, we employed the use of biomimetic membranes to directly quantify membrane-dependent effects of pathologies and putative therapeutics.
The mitochondrial network is an attractive target for adjuvant therapies 20,31 . The factors that promote bioenergetic impairments include: reactive oxygen species production that exceeds endogenous scavenging capacity, matrix calcium overload, imbalances in substrate content/composition, inner membrane uncoupling, membrane lipid oxidation/degradation, inefficient electron flux, opening of energy-dissipating channels/pores, and collapse(s) in mitochondrial membrane potential (reviewed in refs. 8,32,33 ). A number of different pharmacological approaches have investigated the aforementioned factors, with several compounds progressing to clinical trials (reviewed in ref. 34 ). Our work advanced the field by showing that mitochondrial function could be improved by targeting decrements related to CL.
The reduction in the concentration of (18:2-18:2-18:2-18:2) CL that we observed was consistent with previous studies 12,16,51,52. Likewise, mitochondrial ultrastructural defects similar to what we observed are reported across other pathologies 23 characterized by a decrease of CL concentration and corresponding cristae 7. CL replacement strategies have been tested and include direct perfusion of CL to isolated rat hearts 16 and utilization of CL-containing nanodiscs 53. While promising, the translational relevance of these paradigms for patients remains to be demonstrated.
(Fig. 3 legend: original SBF-SEM serial images were acquired and processed into 3D reconstructions using ImageJ (a). Mitochondria were then analyzed for contact sites (b) and intermitochondrial network connectivity (d), with quantified data in panels c and e, respectively. N = 6-8 for normoxic, N = 8 for I/R + saline/elamipretide groups. *P < 0.05 versus normoxic, #P < 0.05 versus I/R saline. Three-dimensional reconstructions of these images are presented in Supplemental Fig. S5. Scale bar represents 500 nm.)
Our finding of multi-complex dysfunction after ischemia-reperfusion complements previous studies noting decrements in complex activity, subunit oxidation, and augmented ROS production along the electron transport system 33,54-63. Mitochondria-targeting peptides represent an emerging class of therapeutics that are being tested across diseases. These peptides share structural homology with endogenous mitochondria-targeting sequences, which are both amphipathic and cationic 64. Most mitochondria-targeting peptides contain alternating cationic-aromatic amino acid motifs 17,65,66. Mitochondria-targeted peptides appear to be lipophilic enough to cross membrane barriers, typically contain arginine (especially the D-isomer for enzymatic stability), and are postulated to home to mitochondria based on the negative membrane potential, the presence of anionic phospholipids (such as CL), or combinations thereof.
Elamipretide is a salt-variant of the SS-31 peptide first serendipitously discovered by Szeto and Schiller 67 in their search for opioid receptor ligands. This peptide showed protective efficacy across preclinical studies (reviewed in refs. 8,19). Improved post-ischemic respiratory function with elamipretide was observed across complexes, in agreement with previous studies where elamipretide improved activity or expression of several different electron transport complexes 15,68-70. These data suggest that elamipretide's mechanism of action does not depend on one particular protein or complex. The improved bioenergetic function across complexes was supported by studies examining the existence of native protein complexes. Supercomplex coupling control factor 71, a functional measure of the 'intactness' of respirasomes, and native complex V structure were both impaired after ischemia-reperfusion and improved with elamipretide (Supplemental Fig. 2D, E). Although the functional significance of improved complex V density was not directly tested, the mitigation of electron transport chain dysfunction observed with elamipretide suggests the peptide may lessen decrements in ATP production previously observed after I/R 72. We did not find evidence of a robust decrease in supercomplex band density after ischemia-reperfusion. These results are consistent with other studies examining supercomplexes after acute cardiac ischemia-reperfusion 73,74, which generally show modest effects of ischemia-reperfusion on supercomplex band density. These findings also corroborated previous work where cardioprotection was not associated with augmented supercomplex band density 75. Given the sensitivity of native protein complexes to detergent conditions 74,76,77 and the issue of sample bias in isolating mitochondrial supercomplexes from infarcted myocardium (i.e., isolation procedures enrich for healthy mitochondria), future studies are warranted to better understand the relevance of blue-native supercomplexes in models of cardiac pathology.
The SS-31 peptide was originally described as a "scavenger" of reactive oxygen species. While it is clear that elamipretide can reduce overall ROS levels from pathological mitochondria (refs. 15,78 , and Fig. 1 herein), several lines of evidence suggest that it is not scavenging ROS. We previously showed that the peptide was not scavenging either superoxide or hydrogen peroxide using cell-free model systems 18 . Other groups reported that elamipretide-mediated reductions in ROS are observed in diseased tissues but not healthy tissues 68,79 , also suggesting that the peptide may be reducing pathological production of ROS and not scavenging per se. Preconditioning of the heart, which has been shown to involve small bursts of ROS that trigger adaptive responses and is typically abolished with "ROS scavengers", was not abolished with elamipretide 80 . Our finding of reduced ROS emission in permeabilized ventricular fibers and a modest reduction in RET 81 provides further evidence that the peptide may be reducing succinate-derived RET early in reperfusion. Interestingly, elamipretide-mediated reductions in ROS production by the electron transport chain (ETC) were not observed downstream of ubiquinone ( Fig. 1), suggesting an alternative mechanism than one involving cytochrome c-mediated injury 21,82 . We observed interactions between elamipretide and CL, confirming findings first made by Birk et al. 21,82 . Among endogenous proteins, cationic amino acid residues are known to associate with CL 44 . Results from our microscopy studies with NAO and elamipretide colocalization suggest the peptide interacts with CL, which we speculate was driven by the cationic amino acids. We acknowledge that a limitation of our work is the use of a simplified biomimetic model system that lacks proteins. Future studies will require the incorporation of differing proteins into biomimetic membranes. An additional potential limitation of our work could be the use of imaging, which could introduce artifacts during sample preparation and analyses. Therefore, future studies will need to effectively integrate imaging, biochemical, and biophysical approaches.
Acute elamipretide treatment at reperfusion did not prevent the loss in CL content observed after acute ischemia-reperfusion. While longer term administration of elamipretide (>4 weeks) normalized aberrant CL in canine models of heart failure 15 and pigs with metabolic syndrome 70, our data indicated that the peptide has acute activity to preserve mitochondrial structure-function even in the presence of CL deficiencies. This beneficial activity suggests that the peptide may be effective in diseases characterized by a reduction of CL concentration (such as genetic mitochondrial diseases and/or Barth syndrome [BTHS]).
The lowering or oxidation of CL modeled after ischemia-reperfusion led to a discernible change in the membrane structure (Fig. 5). Elamipretide promoted a physical 'aggregation' of CL molecules, with the peptide acting like a membrane adhesion factor for the CL that is present (even if it is oxidized). As it is difficult to study mammalian mitochondria devoid of CL, our finding that elamipretide-mediated lipid aggregation is not present without CL highlights the preferential nature of this interaction.
The nature of the physical aggregation between CL and elamipretide remains to be studied in the future. One approach would be to use fluorescently labeled CL to understand its complex relationship with the peptide; however, it will be critical to demonstrate that fluorescently labeled CL shows the same biophysical properties as native CL, which is unlikely to be the case due to the presence of bulky fluorescent groups. Additional studies are needed to understand the nature of the physical interaction of CL with the peptide as a function of changes in surface pressure. Our experiments relied on 30 mN/m as this is the gold standard in the field for biological relevance; however, surface pressure in the inner mitochondrial membrane likely varies with curvature and matrix swelling. To the best of our knowledge, the surface pressure as a function of cristae curvature remains to be established but is an important area for study.
The finding of improved mitochondrial ultrastructure with elamipretide in cardiac ischemia-reperfusion injury agrees with previous work where elamipretide improved mitochondrial morphology in other diseases 70,82,83. We specifically provide insight into mitochondrial structure using higher-resolution SBF-SEM, an imaging approach capable of creating high-resolution 3D images that provide quantitative data on mitochondrial cristae morphology. The technical operation is also less demanding than other imaging approaches for performing volume analyses (i.e., serial section TEM and focused ion beam SEM), and SBF-SEM analysis is within the capabilities of many labs that lack previous EM experience 84. The relationship between changes in mitochondrial morphology and functional outcomes remains unclear, as reported here and by others 85. Therefore, there is a critical need to further understand how modest changes in morphology could be driving key changes in the biophysical properties of the inner mitochondrial membrane to control respiratory function, an area for future investigation.
The finding that cristae contact sites were disrupted in the post-ischemic heart complements the altered contact sites seen in other diseases 86,87. Elamipretide did not prevent the loss in total cristae with reperfusion (Fig. 4c), consistent with a lack of protection against decreased CL content (Table 1 and Fig. 5a, b), as these are inter-related 50,88. A major advancement using this technology was that elamipretide partially restored the cristae network connectivity. This may help explain the protection of ATP synthase dimers with elamipretide, as ATP synthase and cristae architecture have shown interdependent declines in other pathologies 38. However, more studies are needed to confirm this hypothesis as we did not specifically study the interdependence of cristae shape and ATP synthase. The modest reduction in RET we observed was also associated with improved cristae ultrastructure. Accordingly, it might be hypothesized that succinate accumulation induces RET, which has been associated with mitochondrial fragmentation 89.
Fig. 6 Proposed model in which elamipretide (depicted in magenta, right panel) aggregates CL (depicted in yellow) to preserve cristae ultrastructure in diseased mitochondria. Preserved cristae integrity was associated with protection of complex V (red) band density. The other electron transport chain complexes are depicted in green, yellow, and purple (middle panel). Therapeutic approaches that target CL may conserve inner mitochondrial membrane integrity, maintain cristae contact sites, sustain intra- and intermitochondrial networks, and improve mitochondrial function during disease states.
If any generalizations can be made about elamipretide's mechanism across studies, it is that this peptide appears to exert
biological effects predominantly when there is an underlying pathological burden present. In these cases, it tends to 'normalize' mitochondrial function that is dysfunctional prior to elamipretide exposure. The ability of elamipretide to prolong the time to PTP opening 15,18, improve exercise capacity 79, promote CL remodeling 15,70, reduce apoptotic signaling 90, stimulate state 3 respiration 21, or improve the activity of several different ETC complexes 15,68-70 is only observed in diseased, aged, or damaged (respiratory control ratio <2) mitochondria. There appears to be translational support for this concept, as a recent clinical trial with elamipretide showed the greatest 6-min-walk benefit in patients who began the study with the largest functional impairments 91.
A clinical trial using elamipretide in first-time ST-segment elevation myocardial infarction patients did not reduce infarct size 92. Exclusion of ~40% of the patient population due to patent arteries at the time of reperfusion, and group differences in baseline characteristics, may have influenced the results. As we and others have recently noted, a number of factors likely contribute to the lack of translation in cardiac ischemia-reperfusion paradigms 34,93.
As the sequence of elamipretide contains two non-natural amino acids, there is no known homology between this peptide and endogenous assembly or mitochondrial fusion factors, although the upregulation of cristae assembly factors (such as OPA1) has been recently observed after elamipretide treatment 94,95. Given the cationic and aromatic repeats, and the fact that many mitochondrial assembly proteins have conserved RYL or RYK motifs 44,96-98, it is tempting to speculate that the peptide is acting as an enzymatically resistant, cell-permeable analog of endogenous assembly factor(s). Such a mechanism may explain why there are few or no observable effects in healthy mitochondria across studies with elamipretide, as noted above. Healthy inner membranes may be sufficiently intact such that the addition of an adhesion factor has a negligible effect. This type of mechanism may also promote the utility of this peptide (and related analogs) across mitochondrial pathologies that share the commonality of structural abnormalities 3,4. Clearly further testing is warranted to better elucidate the efficacy of elamipretide across other pathologies.
Our study provides insight into mitochondrial structure-function derangements in acute ischemia-reperfusion by combining functional measurements in mitochondria, fibers, and the intact heart with new imaging modalities examining cristae architecture. Results indicate that while elamipretide does not prevent the reduction of CL concentration, post-ischemic treatment can improve functional and morphological characteristics of mitochondria even with decreased CL content. These characteristics include improved electron transport chain complex respiration, decreased H2O2 production, increased mitochondrial electron density, increased cristae connectivity, enhanced cristae networking within and between mitochondria, and finally, improved inner membrane integrity. The use of biomimetic models has profound potential to expand our understanding of the consequences of phospholipid alterations. Future studies utilizing these approaches will advance the development of mitochondria-targeting strategies.
Methods
Animals. Male Sprague-Dawley rats (aged 2-3 months) were used in the study. All procedures received prior approval from the Institutional Animal Care and Use Committees at East Carolina University, the Latvian Animal Protection Ethical Committee of the Food and Veterinary Service, and Virginia Tech. Animals were housed in a temperature- and light-controlled environment and received food and water ad libitum. Prior to excision of the heart, animals received an intraperitoneal (ip) injection of a ketamine/xylazine solution (90 mg/kg/10 mg/kg, respectively); following the diminution of animal reflexes, hearts were excised via midline thoracotomy and placed in ice-cold saline.
Materials. All phospholipids were purchased from Avanti Polar Lipids Inc. Elamipretide and the TAMRA-elamipretide conjugate were synthesized by New England Peptide. All organic solvents were HPLC grade and all other reagents were purchased from either Fisher Scientific or Sigma.
I/R and respirometry of permeabilized ventricular fibers. Excised hearts were perfused on one of four modified Langendorff apparatuses (AD Instruments) per our established protocols 99,100. Hearts were exposed to 20/120 min of global ischemia/reperfusion, respectively. For the elamipretide treatment, hearts received 10 μM elamipretide beginning at the onset of reperfusion, which is a well-established cardioprotective paradigm in our models 18,78,80,101. Myocardial oxygen consumption was measured at the end of reperfusion in a subset of hearts per our established protocols 102. At the end of reperfusion, hearts were split into the experimental groups described below. Two different protocols were employed to determine mitochondrial function after ischemic stress in ventricular muscle fibers. The first set determined mitochondrial function in fibers isolated after cardiac ischemia-reperfusion ("Post I/R Studies"). The second set of studies isolated cardiac fibers from a freshly isolated (normoxic) heart, and then induced anoxia-reoxygenation on the isolated fibers ("A/R Studies"). Detailed methods for the fiber studies are provided in the Methods Supplement. An overview of the permeabilized-fiber experimental flow is provided in Supplemental Fig. 5.
Isolated mitochondria and BN-PAGE. Mitochondria were isolated from the left ventricle and succinate-derived reverse electron transport determined using our established protocols 103 . Detailed protocols for the separation of native mitochondrial respiratory chain complexes by BN-PAGE are described in the Methods Supplement. Supercomplex flux control coupling factor was measured as described 97 .
Electron microscopy of mitochondria. A subset of hearts exposed to ischemia-reperfusion (as described above) was utilized for transmission electron microscopy imaging (Virginia Tech Morphology Service Core Laboratory, Virginia-Maryland College of Veterinary Medicine) using modifications to established protocols 104,105. Serial block-face scanning electron microscopy was done in collaboration with Renovo Neural, Inc. (Cleveland, Ohio). Detailed methods are provided in the Methods Supplement.
Mass spectrometry. A subset of hearts was taken immediately at the end of reperfusion and the left ventricle was snap frozen, pulverized using a liquid nitrogen-cooled mortar/pestle, and analyzed for CL content and composition using shotgun lipidomics per our established methods 106. Detailed protocols for the mass spectrometry studies are provided in the Methods Supplement. For select biomimetic membrane experiments, the total CL concentration was decreased by 25-50% by mass to reflect the changes seen in the mass spectrometry studies. In a subset of studies, peptide was added immediately after spotting the lipid monolayer or prior to imaging.
Statistics and reproducibility. All data were analyzed using GraphPad Prism 8 and are presented as mean ± SEM. Data were distributed normally, which allowed for parametric analyses. Statistical analyses were conducted using a one-way ANOVA followed by a Bonferroni post-hoc test, with P-values ≤ 0.05 considered significant. One trace from the RET studies was excluded because the hydrogen peroxide emission was greater than two standard deviations away from the mean.
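For illustration, a minimal sketch of this kind of workflow (outlier screening at 2 SD, one-way ANOVA, Bonferroni-corrected pairwise comparisons) in Python with placeholder data; this mirrors, but is not, the GraphPad analysis used in the study:

```python
import numpy as np
from itertools import combinations
from scipy import stats

groups = {
    "Normoxia": np.array([3.4, 3.7, 3.5, 3.8]),
    "I/R + Saline": np.array([1.8, 2.0, 1.9, 2.1]),
    "I/R + Elamipretide": np.array([2.4, 2.6, 2.5, 2.7]),
}

def drop_outliers(x, n_sd=2.0):
    # exclude values more than n_sd standard deviations from the group mean
    return x[np.abs(x - x.mean()) <= n_sd * x.std(ddof=1)]

clean = {name: drop_outliers(vals) for name, vals in groups.items()}
f_stat, p_anova = stats.f_oneway(*clean.values())          # one-way ANOVA

pairs = list(combinations(clean, 2))
alpha_bonferroni = 0.05 / len(pairs)                        # corrected threshold
for a, b in pairs:
    t, p = stats.ttest_ind(clean[a], clean[b])
    print(f"{a} vs {b}: p = {p:.4f}, significant = {p < alpha_bonferroni}")
```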
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. | 2023-02-23T15:39:21.837Z | 2020-07-17T00:00:00.000 | {
"year": 2020,
"sha1": "24df0d393439621d67865bf7d4dba7a00eac40e4",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s42003-020-1101-3.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "24df0d393439621d67865bf7d4dba7a00eac40e4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
125031766 | pes2o/s2orc | v3-fos-license | Search of a systematic behaviour for the weakly bound complete fusion suppression caused by breakup
We discuss the effect of breakup on the complete and total fusion cross sections of weakly bound nuclei, both stable and radioactive, at near barrier energies. We show that there is suppression of the complete fusion of non-halo nuclei and total fusion of neutron-halo nuclei at energies above the barrier, whereas there is some enhancement at sub-barrier energies. We investigate a systematic behaviour for this effect for several weakly bound systems.
the calculations, the barrier height increases and the calculations have better agreement with the data, which confirms that the real part of the polarization potential produced by the direct breakup is repulsive [6,7].
Theoretically, the most suitable calculations involving breakup are the so-called CDCC (continuum discretized coupled channel) calculations, since the breakup feeds states in the continuum. It has been shown that if one wants to describe the behaviour of elastic scattering angular distributions of weakly bound nuclei, it is essential that continuum-continuum couplings are included in the CDCC calculations [4,8]. However, if one wants to investigate the effect of breakup on the fusion cross section, it may be more interesting to perform much simpler coupled channel calculations which do not take into account the breakup, and so the difference between data and theoretical predictions should correspond to the breakup effect.
The starting point in the comparison of fusion cross section data with theoretical predictions is the choice of a systematic bare interaction potential. We adopt the double-folding parameter-free São Paulo potential (SPP) [9,10], using reliable nuclear densities of the nuclei involved in the collisions [11,12]. This potential has described successfully several reactions with systems in different mass ranges, including weakly bound nuclei [13,14]. In order to compare data for several systems in a single plot, it is necessary to eliminate the differences associated with trivial factors, like sizes, charges and anomalous densities. Canto et al. [15,16] proposed the use of dimensionless quantities, a method which allows reaching a systematic understanding of this subject, since it allows the comparison of any kind of system in the same graphic. This method uses a benchmark curve, called the Universal Fusion Function (UFF), given by F_0(x) = ln[1 + exp(2πx)], where x = (E_c.m. − V_B)/ħω and F(x) = 2E_c.m. σ_F/(ħω R_B²). Here V_B, R_B and ħω are the height, radius and curvature of the parabolic barrier, respectively, σ_F is the fusion cross section and F(x) is called the fusion function. This method was later extended for the analysis of total reaction cross section [17].
The above reduction method has a simple meaning. For systems where channel coupling effects can be neglected and the fusion cross section is well approximated by Wong's formula [18], F(x) becomes the Universal Fusion Function (UFF). This method consists of using the UFF as the benchmark curve for comparisons with fusion data. First, one evaluates the barrier parameters for the particular system under study using the SPP potential. The experimental fusion function is then determined from the experimental fusion cross section.
The experimental fusion function F_exp(x) is then compared with the UFF. However, the above described procedure has two shortcomings: The first is that Wong's approximation is not valid for light systems at sub-barrier energies. The second is that a comparison of F_exp(x) with the UFF indicates the global effect of channel coupling on the fusion cross section. In this way, breakup couplings are entangled with couplings with other bound channels. In order to single out the effects of breakup coupling and eliminate deviations arising from the inaccuracy of Wong's formula at sub-barrier energies, it is necessary to renormalize the experimental fusion function by multiplying it by F_0(x)/F_CC(x), where the fusion function F_CC(x) is obtained from a coupled-channel calculation including couplings to all relevant bound channels.
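A minimal numerical sketch of this reduction procedure (barrier parameters and cross sections are placeholders chosen only to illustrate the arithmetic, not values for any system discussed here):

```python
# Reduce fusion cross sections to the dimensionless fusion function F(x),
# compare with the Universal Fusion Function, and apply the bound-channel
# renormalization described in the text.
import numpy as np

HBAR_OMEGA = 4.5   # MeV, barrier curvature (hypothetical)
V_B = 30.0         # MeV, barrier height (hypothetical)
R_B = 11.0         # fm, barrier radius (hypothetical)

def x_of(E):
    return (E - V_B) / HBAR_OMEGA

def fusion_function(E, sigma_mb):
    sigma_fm2 = sigma_mb * 0.1                  # 1 mb = 0.1 fm^2
    return 2.0 * E * sigma_fm2 / (HBAR_OMEGA * R_B**2)

def uff(x):
    return np.log1p(np.exp(2.0 * np.pi * x))    # F_0(x) = ln[1 + exp(2*pi*x)]

E = np.array([28.0, 30.0, 32.0, 36.0, 40.0])             # MeV, c.m. energies
sigma_exp = np.array([5.0, 60.0, 250.0, 600.0, 900.0])   # mb, "experimental"
sigma_wong = np.array([6.0, 70.0, 270.0, 620.0, 920.0])  # mb, Wong's formula
sigma_cc = np.array([8.0, 85.0, 300.0, 650.0, 950.0])    # mb, CC with bound channels only

F_exp = fusion_function(E, sigma_exp)
F_renorm = F_exp * sigma_wong / sigma_cc        # singles out the breakup effect
suppression = F_renorm / uff(x_of(E))           # ~0.7 above the barrier => ~30% suppression
```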
In Figure 1, using the complete fusion data available in the literature, we compare the UFF with the renormalized fusion functions. For a clear view of the behaviours of the fusion functions above and below the barrier (V_B corresponds to x = 0), the results are displayed in linear and logarithmic scales. The results for total fusion of stable and proton halo systems are shown in Fig. 2. At above-barrier energies, the experimental fusion functions are very close to the UFF. One concludes that the breakup process does not significantly affect the total fusion cross section above the barrier. This means that fusion of one of the fragments with the target (incomplete fusion) is comparable with fusion of the whole projectile.
Inspecting Fig. 1(a), we conclude that the complete fusion data for the 6,7Li + 209Bi [19,20] and 9Be + 208Pb [20,21] systems are suppressed with respect to the UFF at above-barrier energies; the suppression corresponds to multiplying the experimental fusion function by the constant factor 0.7. Actually, one can observe that the attenuation factor is slightly larger for the 6Li + 209Bi system than for 7Li + 209Bi, as observed in Refs. [19,20], where it is pointed out that the smaller the breakup threshold energy, the larger is the complete fusion suppression. For the 7Be, 7Li + 238U systems, similar results are observed [22]. The data in Figure 2 are from Refs. [20,21,23-25]. In Fig. 3, we show total fusion functions for the 6He + 209Bi, 6He + 238U and 11Be + 209Bi systems. The data are from Refs. [23,26,27]. Inspecting Fig. 3(a) we conclude that the results are rather similar to the ones for complete fusion of stable weakly bound projectiles, shown in Fig. 1(a).
We conclude that the complete fusion cross section is systematically suppressed at energies above the Coulomb barrier. The suppression is about 30% of the complete fusion cross section. The influence of the breakup coupling on the total fusion cross section at above-barrier energies depends on the nature of the projectile. It has no appreciable effect in collisions of stable weakly bound projectiles. On the other hand, in collisions of neutron-halo nuclei, the total cross sections are suppressed by about 30%, similarly to the case of complete fusion of stable weakly bound nuclei. At sub-barrier energies, both the complete and total fusion cross sections are enhanced, owing to the coupling with the breakup channel.
It is very interesting to observe that for each weakly bound projectile, the complete fusion suppression factor is independent of the target mass [28]. An analytical relation of the suppression factor with the breakup threshold energy was recently derived by Wang et al. [28]. However, a physical explanation for that expression is still missing. | 2019-04-21T13:05:17.373Z | 2015-07-15T00:00:00.000 | {
"year": 2015,
"sha1": "843f8a1783a34ecaff76b2bb9ca63bbd2345b9ae",
"oa_license": "CCBY",
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/630/1/012017/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "315e1dcc5307f4d0d0081c449747cef3f4660fa7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237454767 | pes2o/s2orc | v3-fos-license | Perspectives on Kidney Disease Education and Recommendations for Improvement Among Latinx Patients Receiving Emergency-Only Hemodialysis
IMPORTANCE In most states, undocumented Latinx immigrants with kidney failure receive dialysis in acute care settings on an emergency-only basis. How much kidney disease education Latinx immigrants receive and how to improve kidney disease education and outreach among Latinx populations are unknown. OBJECTIVE To understand the kidney disease educational gaps of Latinx individuals who need but lack access to scheduled outpatient dialysis. DESIGN, SETTING, AND PARTICIPANTS This qualitative study used semistructured interviews in a Texas hospital system from March 2020 to January 2021 with 15 individuals who received emergency-only dialysis when they were first diagnosed with kidney failure. Demographic information was collected, and a thematic analysis was performed using the constant comparative method on interviews after they were audio-recorded, translated, and transcribed verbatim. Data analysis was performed from April 2020 to February 2021. MAIN OUTCOMES AND MEASURES Subthemes and themes from semistructured interviews. RESULTS All 15 persons interviewed (9 male individuals [60%]; mean [SD] age, 51 [17] years) identified as Hispanic, 11 (73%) were born in Mexico, and none reported knowing about their kidney disease more than 6 months before starting dialysis. The themes identified were (1) lack of kidney disease awareness, (2) education provided was incomplete and poor quality, (3) lack of culturally concordant communication and care, (4) elements that Latinx patients receiving emergency-only dialysis want in their education, (5) facilitators of patient activation and coping, and (6) Latinx patient recommendations to improve community outreach. CONCLUSIONS AND RELEVANCE Latinx adults receiving emergency-only dialysis are usually unaware of their kidney disease until shortly before or after they start dialysis, and the education they receive is poor quality and often not culturally tailored. Participants made feasible recommendations on how to improve education and outreach among Latinx communities. JAMA Network Open. 2021;4(9):e2124658. doi:10.1001/jamanetworkopen.2021.24658
Compared with standard scheduled dialysis, emergency-only dialysis is associated with 5-fold higher 1-year mortality and 14-fold higher 5-year mortality, 1,2 worse morbidity and quality of life, 3,4 and higher acute care utilization and health care costs. 1 Most individuals who receive emergency-only dialysis are undocumented immigrants, and the majority of undocumented immigrants are from Latin America. 5,6 There are an estimated 5500 to 8857 undocumented immigrants with kidney failure in the United States. 7,8 Latinx individuals who receive emergency-only dialysis also frequently lack primary care and pre-kidney failure nephrology care and report limited English-language proficiency. 9-14 They often have suboptimal control of diabetes and hypertension, low use of recommended medications, and low chronic kidney disease awareness, leading to a 50% higher risk of kidney failure. 9,14 It is unclear what education Latinx immigrants with kidney failure receive about kidney disease because of fragmented care. Patient perspectives on how to improve kidney disease education and outreach within this population have not been studied. Our objective was to explore the perspectives of Latinx individuals receiving emergency-only hemodialysis on the kidney disease education they received prior to the onset of kidney failure and obtain patient recommendations on how to improve education and outreach among Latinx communities.
Study Design
We conducted 15 semistructured interviews with Latinx adults with kidney failure who relied on emergency-only dialysis. All interviews were conducted one-on-one in Spanish over the phone from March 2020 to January 2021. The consent form was read in Spanish by the interviewer. Participants provided verbal consent and received a $20 gift card to a local grocery store. We followed the Standards for Reporting Qualitative Research (SRQR) reporting guideline for reporting our findings. 15 The study was approved by the University of Texas at Austin, Dell Medical School institutional review board.
Setting and Participants
Participants were eligible if they were aged 18 years or older, reported speaking Spanish as their primary language, and relied on emergency-only dialysis at 2 hospitals in Austin, Texas, affiliated with the Dell Medical School. Participants were not asked about their immigration status. The majority of participants initially had a tunneled catheter for vascular access. To receive dialysis, patients had to be critically ill, defined as the presence of uremic symptoms, volume overload, or electrolyte abnormalities. Patients usually stayed in the hospital overnight for 2 consecutive hemodialysis sessions, were then discharged, and repeated the cycle 1 to 2 weeks later.
Participants were recruited using convenience sampling. The research team was notified when individuals presented for emergency-only dialysis, and patients meeting eligibility criteria were invited to participate. The invitation to participate occurred over the phone after the individual had received at least 1 dialysis session to ensure they had mental capacity. Collection of demographic information and the semistructured interview occurred over the phone following hospital discharge. Race and ethnicity data were self-reported.
Interview Guide
We conducted semistructured interviews using open-ended questions. We asked participants to describe how they found out about their kidney disease, the education they received, what would have helped them and their families understand their diagnosis and dialysis, and suggestions for improving patient education and outreach among Latinx communities (Box).
Data Collection
Three Spanish-speaking members of the research team conducted the semistructured interviews (S.D., D.C., and F.B.). Interviews were audio recorded, translated, and transcribed verbatim.
Box. Interview Guide
1. Following completion of informed consent, inform the participant that there are no right/wrong answers, the answers they provide will not be shared, this information will not affect their medical care, and it is okay to skip any questions.
Theme 1: Lack of kidney disease awareness
Participants reported most education happened after they started dialysis, and that prior to developing symptoms of kidney failure, they were largely unaware they had kidney disease.
Dialysis was unexpected: "I came into the hospital with shortness of breath, nausea, dizziness, and I could barely really move my hands and feet … the next day the doctors told me that I had end-stage renal failure. It was really shocking to me because nobody had ever said anything about my kidneys before." "I didn't know I had kidney issues … I started to feel a little fatigued and had quite a bit of coughing. When I entered the hospital, they didn't let me go. It turned out … I had fluid in my lungs because of my kidneys … the same day they put a catheter in me for dialysis."
Missed opportunity for prevention: "As [Hispanic people], we don't understand it, if they say, this is harmful, don't do it, we don't understand. Maybe we let things happen and then we have consequences. In the end, you realize that you could have done something about your diet or your nutrition, and you didn't want to do anything, so now you face the consequences." "I didn't receive that education where I was told to take care of myself … it hurts us a lot because we're not educated in the sense of food or anything like that."
Theme 2: Education provided was incomplete and poor quality
The education provided typically occurred while participants were admitted for dialysis and were acutely ill. It was often fragmented, and learning occurred piecemeal over time through discussions with clinicians, nursing staff, other patients, and through their own trial and error.
Treatment options influenced by nonmedical factors: "Only dialysis [was] discussed, and they weren't very clear about a transplant, because I don't have documents, and since I don't have that, I can't enroll in a waiting list. I don't have insurance to get dialysis. I'm going to get dialysis only when I feel bad." "They didn't tell me anything … because if I go to the [dialysis] clinic nobody will treat me. They won't take me in because I don't have any money or insurance of anything."
Education not delivered at appropriate level of health literacy: "I don't even know what else to ask them about. The only thing I'd like to hear from them are some positive answers, right? Like, 'you'll recover from this, you will get better, you just need to do this and that …' I'm not sure if I'm still hurt from the dialysis I got, but I don't know what they did to me, whether they opened me up." "Since we don't have any experience … sometimes you can't even answer correctly … or many people don't have the capacity of answering with words … our minds go blank."
Education provided while experiencing distress: "I don't really ask much about dialysis. They tell me a lot of things, but it's not that I don't care, it's just that sometimes people talk to me about dialysis, and when I get there, I am in a serious condition, so I am asleep. I sleep all day, so they can talk to me, but it's as if I ignored them, because they talk to me but I'm not paying attention." "You're interested in feeling better, not in learning more."
Theme 3: Lack of culturally concordant communication and care
The education was often provided in another language, was not culturally tailored, and did not incorporate their support system.
Education did not occur in preferred language: "When they talk to us in English … we don't know anything, but we just say yes out of respect …"
Theme 4: Elements that Latinx patients receiving emergency-only dialysis want in their education
Participants highlighted things they wished they were told when they started dialysis. To improve clinician messaging, they suggested using language that is direct and culture concordant, using visual aids for complex topics like vascular access, and incorporating traditional foods into dietary counseling.
Give overall prognosis and describe expected symptoms: "I've always liked to have people tell me things clearly, directly … instead of saying one thing and then another, and to keep me wrapped around it. I prefer them to tell me how things are, so that I can know if I can have high hopes or not. If the report is serious, okay, then I'll know." "Explain to them the process of everything, from when you start … that there's going to be a time when they're going to start getting cramps … and when you finish dialysis … you actually get dizzy and you feel quite exhausted."
Use culturally concordant language: "[The doctors should] speak in Spanish so I can understand them because sometimes they speak in English and I don't understand it. There are some things I understand and some things I don't understand, but for me, it would be much better in Spanish." "I think it would help me more if it was explained to me by someone who speaks Spanish fluently."
Missed Opportunity for Prevention
Participants largely blamed lifestyle behaviors and lack of knowledge for their kidney failure. They expressed remorse over not knowing the importance of nutrition, and wished they had known how …
Education Provided Was Incomplete and Poor Quality
Treatment Options Influenced by Nonmedical Factors
The main treatment option discussed for participants' kidney failure was emergency-only dialysis.
Participants demonstrated little understanding of other options, including in-center hemodialysis, home dialysis modalities, transplantation, and palliative care. They understood their options were restricted because of citizenship and insurance status.
Education Not Delivered at Appropriate Level of Health Literacy
Participants experienced confusion during discussions with clinicians. Their limited understanding of kidney disease made it difficult to process what they were told, ask questions, and make decisions.
The large volume of information conveyed at one time made it especially difficult to process what was happening. Most participants believed they were in kidney failure because of their nutritional habits, but did not demonstrate a clear understanding of what caused their kidney disease or how it could have been prevented.
Education Provided While Experiencing Distress
Participants reported they often felt too ill to engage in discussions with clinicians. Since most education happened while they were experiencing symptoms of volume overload and uremia, they felt too sick to understand, inquire about options, or make decisions. They reported being only interested in feeling better, not in understanding what was happening to their bodies, and would have accepted any treatment option without resistance.
Lack of Culturally Concordant Communication and Care
Education Did Not Occur in Preferred Language
Participants reported the education they received frequently did not happen in their preferred language. Interpretation services were sometimes used when clinicians did not speak their language, but not consistently. They did not advocate for interpretation services when needed.
Education Did Not Incorporate Support System
Participants felt their family members and social supports understood that their kidneys were not working, but they had little understanding of what they were experiencing physically and emotionally or what to expect. Participants expressed concern about the impact of their diagnosis on family members, especially children.
Give Overall Prognosis and Describe Expected Symptoms
Participants wanted to know how kidney failure and dialysis would affect their everyday lives, how long they were expected to live, and whether they should have hope for recovery or change. They wished they were told what symptoms to expect from kidney failure, and before, during, and after dialysis. Understanding the expected variety of symptoms and their trajectory would have normalized their experience and alleviated anxiety.
Use Culturally Concordant Language
Participants preferred to receive news and education in their native language by native speakers.
Hearing news and having discussions with native speakers would have facilitated greater understanding, and provided an environment that enabled them to express preferences and ask questions.
Use Visual Aids
Some topics, even if explained in the participants' native language, were difficult to understand.
Participants reported that seeing pictures of central venous catheters or arteriovenous fistulas would have improved their understanding before they underwent vascular procedures.
Incorporate Traditional Foods into Dietary Counseling
Participants made several changes to their diet after starting dialysis. They demonstrated knowledge about the importance of potassium and fluid intake, how dietary factors affected their need for dialysis, and that they were able to delay treatments through avoidance. Understanding which culturally traditional foods they needed to avoid or were safe to eat was helpful, including desserts, produce (ie, cactus), and tortillas.
Facilitators of Patient Activation and Coping
Encouragement and Normalization of Emotional Experience
Participants experienced depressed mood and grief about their diagnosis, and about the life changes required of them to undergo emergency-only dialysis. They expressed frustration about the inability to work, and the fact that their lives revolved around hospital admissions. It would have helped them to know that symptoms of depression and anxiety were common. They appreciated encouraging messages from medical staff, and wanted clinicians to inform family members to encourage them as well. Most participants reported they were not doing dialysis for themselves, but for their families.
Faith
Spirituality and religion motivated participants when they were feeling ill or depressed. Despite the hardship they were experiencing, spirituality enabled participants to explain what they could not otherwise understand.
Latinx Patient Recommendations to Improve Community Outreach
Promote Diet and Lifestyle Education
Participants felt their community needed a better understanding of how to eat and incorporate healthy behaviors like physical activity. They acknowledged it was common to believe one did not need to engage in healthy behaviors unless they had a medical condition. Bonding over meals is an important cultural tradition, so highlighting the need for universal healthy eating practices might be an opportunity to reach numerous people.
Personal Testimonials Influence Self-Care
Participants felt the best way to help their community understand the importance of kidney health was by telling their personal stories about dialysis. If others understood what happened to them and what it was like to have kidney failure, they might be motivated to make changes or be proactive about their medical care.
Videos, Social Media, and Community Locations
When asked where community outreach should occur, participants suggested places where many individuals frequent and feel safe, such as churches, schools, and the consulates. Videos were preferred over written materials, and they believed using television, soap operas, and social media would have higher impact.
Discussion
Latinx adults who rely on emergency-only dialysis receive fragmented and substandard care, and little is known about their kidney disease education or their perspectives on how education and outreach might be improved. Participants often were unaware they had kidney disease until they had severe or end-stage disease. In some cases, they learned of their diagnosis when they initiated emergent dialysis. The education they received was poor quality, and often not culturally tailored.
Participants wanted to know their overall prognosis and what they would experience both physically and emotionally with dialysis, and they wanted their families involved in education and decision making.
Use of patient testimonials and nontraditional platforms such as social media and classes in community locations may optimize the impact of community outreach efforts. Our findings have important implications for clinical practice (Figure).
To our knowledge, this is the first study that describes Latinx undocumented immigrant perspectives on kidney disease education. Our findings are similar to past studies that document low kidney disease awareness in the Latinx population, which has historically been underserved by the health care system. [18][19][20] In a cross-sectional study evaluating predictors of kidney disease awareness among people with chronic kidney disease in the National Health and Nutrition Examination Survey, 65% of those without kidney disease awareness spoke Spanish as their primary language, compared with 28% of those with kidney disease awareness. 20 Our findings are also similar to past studies that document the importance of family and social supports in this population and the need to involve family in disease education and decision making. 3,4,[21][22][23][24] Our findings suggest that, for Latinx individuals, educating the patient's support system about kidney failure is nearly as important as educating the patient. Participants felt their family members had limited understanding of what they were experiencing, and concern about how their illness affected family members added to their stress. If their family understood what they experienced before, during, and after dialysis, that might alleviate distress and help family members provide the support they needed. Incorporating family members in the education and decision-making process may also promote patient understanding when patients feel too ill to engage with clinicians.
Patient activation is historically low among Latinx populations, [33][34][35] and efforts to improve patient activation and engagement in care are critical to prevent kidney failure. We found that participants were motivated when they were encouraged and felt supported by the medical staff, and when they focused on working hard, exhibiting strength for the sake of their families, and drawing on their faith. Emphasizing facilitators during community outreach may also encourage earlier incorporation of healthy lifestyle behaviors. For example, individuals might be more willing to make healthy dietary changes early in the course of their kidney disease if they feel they are doing it for their family, or if family members are also making changes.
Increasing access to primary care, nephrology care, and scheduled outpatient dialysis for immigrant populations is essential. Some states have increased access to primary and specialty care by expanding Medicaid benefits to undocumented immigrants. 8,36 In 12 states, undocumented immigrants with kidney failure are able to receive scheduled outpatient dialysis. This was accomplished by changing the scope of each state's Emergency Medicaid program so that outpatient dialysis was covered in addition to emergency-only dialysis in the inpatient setting. 7,37 Not only would this enable access to the kidney failure education provided at outpatient dialysis clinics, but paying for scheduled dialysis would potentially save $5768 per person per month and improve morbidity, mortality, mental health symptoms, and quality of life. [1][2][3]7,38
Limitations
Some limitations bear mention. This study used convenience sampling, and the majority of participants were Mexican. The Latinx population is heterogeneous, and we may not have captured themes or experiences of Latinx individuals from other countries. Mexican Americans make up the largest Latinx group in the United States, among whom the age-adjusted prevalence of chronic kidney disease has increased over the past 2 decades, making our findings highly relevant. 28,39 The education provided may differ in other cities and states, especially those that offer more treatment options for this population. However, our findings are applicable even in states that cover scheduled outpatient dialysis for undocumented immigrants, because Latinx populations continue to face barriers to primary and nephrology care across the country. 9 Social desirability bias may have caused some participants to censor negative experiences. Our interviewers were bicultural and used techniques to facilitate rapport and comfort during the interview. We did not assess specific educational materials. Although we cannot generalize findings to broader groups, our recommendations may help Latinx immigrants regardless of their documentation status.
Conclusions
This qualitative study found that Latinx adults who received emergency-only dialysis received disjointed medical care. The education they received about kidney disease frequently happened too late, was not culturally tailored, and was often poor quality. Describing overall prognosis and what to expect physically and emotionally is preferred, and using culturally concordant language with the assistance of visual aids when needed would facilitate understanding. Using patient testimonials and providing education on television, social media, and at community locations might improve the breadth and impact of outreach efforts. Messaging that incorporates ideals that are important for this population, such as work ethic, strength for the sake of family, and faith, may motivate patients.
"year": 2021,
"sha1": "e93fe047b1fd3e279aaf01c391f5d400b24380da",
"oa_license": "CCBY",
"oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2784028/novick_2021_oi_210721_1631113885.95369.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "134b9a0e28dcc17d3e04c02d1971005fb7f3fe93",
"s2fieldsofstudy": [
"Medicine",
"Sociology",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Étienne Bézout on Elimination Theory
Bézout's name is attached to his famous theorem. Bézout's Theorem states that the degree of the eliminand of a system of $n$ algebraic equations in $n$ unknowns, when each of the equations is generic of its degree, is the product of the degrees of the equations. The eliminand is, in the terms of XIXth century algebra, an equation of smallest degree resulting from the elimination of $(n-1)$ unknowns. Bézout demonstrates his theorem in 1779 in a treatise entitled "Théorie générale des équations algébriques". In this text, he does not only demonstrate the theorem for $n>2$ for generic equations, but he also builds a classification of equations that allows a better bound on the degree of the eliminand when the equations are not generic. This part of his work is difficult: it appears incomplete and has been seldom studied. In this article, we shall give a brief history of his theorem, and give a complete justification of the difficult part of Bézout's treatise.
1 The idea of Bézout's theorem for n > 2
The theorem for n = 2 was known long before Bézout. Although the modern mind is inclined to think of the theorem for n > 2 as a natural generalization of the case n = 2, a mathematician rarely formulates a conjecture before having any clue or hope about its truth. Thus, it is not before the second half of the XVIIIth century that one finds a clear statement that the degree of the eliminand should be the product of the degrees even when n > 2.
Lagrange, in his famous 1770-1771 memoir Réflexions sur la résolution algébrique des équations [31], proves Bézout's theorem for several particular systems of more than two equations, by studying the functions of the roots remaining invariant under some permutations. In the same year 1770, Waring enunciates the theorem for more than two equations in his Meditationes algebraicae [53], without demonstration. To our knowledge, these are the first occurrences of Bézout's theorem for n > 2.
Bézout probably knew those works of Lagrange and Waring. He is directly concerned by Lagrange's memoir, where Lagrange nominally criticizes the algebraic methods of resolution of equations in one unknown that Bézout had conceived in the 1760's. Waring says, in the preface to the second edition of his Meditationes algebraicae, that he had sent a copy of its first edition, as early as 1770, to some scholars, including Lagrange and Bézout 2 . Thus Bézout's theorem was already in the minds of those three scholars as early as 1770. 3 On the contrary, Bézout was not yet aware of the formula of the product of the degrees in 1765. His works [2,4] of the years 1762-1765 about the resolution of algebraic equations show several examples of systems of more than two equations where the method he designed in 1764 leads him to a final equation of a degree much higher than the product of the degrees of the initial equations, because of a superfluous factor. The discovery of Bézout's theorem for n > 2, still as a conjecture, is thus clearly circumscribed to the years 1765-1770.
Bézout's method of elimination and the superfluous factors
In elimination theory, early works for n = 2 already show two different and complementary methods ([15,13,3]). One of them relies upon symmetrical functions of the roots. This method was used by Poisson to give an alternative demonstration of Bézout's Theorem in 1802. As we have chosen to concentrate on Bézout's path, we won't describe this method in this article. 4 The other method, the one used by Bézout, is a straightforward generalization 5 of the principle of substitution used to eliminate unknowns in systems of linear equations; this principle is still taught today in high school.
This method does not dictate the order in which to eliminate unknowns and powers of the unknowns. When Bézout uses this method in 1764, for n > 2, he eliminates the unknowns one after the other. This necessarily leads to a superfluous factor increasing the degree of the final equation far above the product of the degrees of the initial equations. This difficulty is easily illustrated by a system of three equations (1), (2), (3). Eliminating z between (1) and (2), one obtains an equation (4); eliminating z between (1) and (3), one obtains an equation (5). Eliminating 4xy between (4) and (5), one has x = y^2. Substituting it for x in (4), the final equation is 4y(y^2 − 1) = 0. The root y = 0 does not correspond to any solution of the system above (not even an infinite solution, in P^3). In fact, the true eliminand should be y^2 − 1 = 0. Bézout was well aware of the difficulty. In 1764, he says: "(...) when, having more than two equations, one eliminates by comparing them two by two; even when each equation resulting from the elimination of one unknown would amount to the precise degree that it should have, it is vain to look for a divisor, in any of these intermediate equations, that would lower the degree of the final equation; none of them has a divisor; only by comparing them will one find an equation having a divisor; but where is the thread that would lead out of the maze?" (cf. [3], p. 290; and also [5], p. vii). At the time, in 1764, Bézout had not yet found the exit out of the maze. Fifteen years later, in his 1779 treatise, Bézout gets rid of this iterative order by reformulating his method in terms of a new concept called the "sum-equation":
We conceive of each given equation as being multiplied by a special polynomial. Adding up all those products together, the result is what we call the sum-equation. This sum-equation will become the final equation through the vanishing of all terms affected by the unknowns to eliminate. 8
In other words, for a system of n equations f (1) = 0, f (2) = 0, ..., f (n) = 0 in n unknowns 9 , Bézout postulates that the final equation resulting from the elimination of (n − 1) unknowns is an equation of smallest degree, of the form φ (1) f (1) + φ (2) f (2) + ... + φ (n) f (n) = 0. The application of the method is thus reduced to the determination of the polynomials φ (i) . First of all, one must find the degree of those polynomials, as well as the degree of this final equation. This is the "node of the difficulty" according to Bézout. Once the degree of the final equation is ascertained, elimination is reduced to the application of the method of undetermined coefficients, and thus to the resolution of a single system of linear equations. Hence the need for Bézout's theorem predicting the degree of the final equation.
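To make the superfluous-factor phenomenon and the notion of eliminand concrete, here is a minimal computational sketch in Python (using sympy). The system below is a hypothetical illustration chosen for this purpose, not Bézout's own example (whose equations are not reproduced above): pairwise elimination by resultants produces a spurious square factor, whereas the elimination ideal yields the eliminand itself.

    from sympy import symbols, resultant, groebner

    x, y, z = symbols('x y z')
    f1 = x + y + z
    f2 = x*y + y*z + z*x
    f3 = x*y*z - 1

    # Pairwise ("iterative") elimination, in the spirit of Bézout's 1764 method:
    r1 = resultant(f1, f2, z)      # eliminate z between the first and second equations
    r2 = resultant(f1, f3, z)      # eliminate z between the first and third equations
    final = resultant(r1, r2, y)   # then eliminate y
    print(final)                   # x**6 - 2*x**3 + 1, i.e. (x**3 - 1)**2: a superfluous square

    # The elimination ideal gives the true eliminand:
    G = groebner([f1, f2, f3], z, y, x, order='lex')
    print([g for g in G.exprs if g.free_symbols <= {x}])   # [x**3 - 1]

Here the spurious factor comes from the pairwise comparisons; the example also shows that for a non-generic system the eliminand may have degree (here 3) below the product of the degrees (here 1·2·3 = 6), which is precisely the situation addressed by Bézout's classification.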
Although this idea of "sum-equation" seems a conceptual breakthrough reminding us of the XIXth century theory of ideals, the immediate effect of this evolution is the rather complicated structure of Bézout's treatise! Perhaps for didactical reasons, he introduces his concept of "sum-equation" only in the second part of his treatise ("livre second"). In the first part of it, his formulation is a compromise with the classical formulation of the principle of substitution. We shall analyse this order of presentation in section 5.
A treatise of "finite algebraic analysis"
In the dedication of his Théorie générale des équations algébriques, Bézout says that his purpose is to "perfect a part of Mathematical Sciences, of which all other parts are awaiting what would now further their progress" 10 . In the introduction to the treatise, Bézout opposes two branches of the mathematics of his days: "finite algebraic analysis" and "infinitesimal analysis". The former is the theory of equations. Historically, it comes first; according to Bézout, infinitesimal analysis has recently drawn all the attention of mathematicians, being more enjoyable, because of its many applications, and also because of the obstacles met with in algebraic analysis. Bézout says: "The former itself [infinitesimal analysis] needs the latter to be perfected. (...) The necessity to perfect this part [algebraic analysis] did not escape the notice of even those to whom infinitesimal analysis is most redeemable." 11 In his view, the logical priority of algebraic analysis thus adds to its historical priority. The composition of his treatise is almost entirely algebraic:
• Bézout only briefly alludes (§ 48) to the geometric interpretation of elimination methods as research of the intersection locus of curves 12 ;
• he never makes any hypothesis about the existence or the arithmetical nature of the roots of algebraic equations 13 .
This position of his treatise as specialized research on algebraic analysis is quite singular for his time 14 . 10 Cf. [5]. 11 Cf. [5], p. ii. 12 Although he knew well Euler's memoir of 1748, Démonstration sur le nombre des points où deux lignes des ordres quelconques peuvent se rencontrer. 13 It seems that the very notion of root never appears anywhere in his demonstrations. The word appears in §§ 48, 117, 280-284, but never crucially. Bézout knew, of course, that the known methods of algebraic resolution of equations do not apply beyond the fourth degree, as Lagrange had explained exhaustively in his Réflexions sur la résolution algébrique des équations. Moreover, at the time, the status of complex numbers and of the fundamental theorem of algebra was still problematic.
14 H. Sinaceur has commented upon the use of the term "analysis" in XVIIIth century ( [44], p. 51): The term "analysis" is a generic concept for the mathematical method rather than a particular branch of it. It was then normal that no clear distinction should exist between algebra and analysis, nor any exclusive specialization. Moreover, the analysis of equations, also called "algebraic analysis", could be considered as a part of a whole named "mathematical analysis".
A classification of equations
In fact, Bézout does not only demonstrate his theorem for generic equations. When the equations are not generic, the degree of the eliminand may be less than the product of the degrees of the equations. Bézout progressively studies larger and larger classes of equations, by asking that some coefficients vanish or verify certain conditions. He thus builds a classification that allows a better bound on the degree of the eliminand according to the species of the equations. The case of generic equations is thus encompassed, as a very special case, in a research of gigantic proportion. In this regard, Bézout says: "Whatever idea our readers might have conceived of the scale of the matter that we are about to study, the idea that he will soon get therefrom, will probably surpass it." 15
An example. In § 62 of his treatise, Bézout proves everything that was known before, in the case n = 2. For two equations with two unknowns x 1 , x 2 , the first of total degree t and of degree at most a 1 in x 1 , the second of total degree t′ and of degree at most a′ 1 in x 1 , where the coefficients A k 1 k 2 and A′ k 1 k 2 are undetermined, the degree of the final equation resulting from the elimination of x 2 is D = t t′ − (t − a 1 )(t′ − a′ 1 ). Cramer in 1750, Euler, then Bézout himself in 1764, had known this result. Specifying a 1 = t and a′ 1 = t′, one obtains the case of two "complete equations", i.e. generic of their degrees: in this case, D = t t′, and the degree of the eliminand is the product of the degrees of the initial equations.
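As a small numerical illustration of this formula (a sketch with a hypothetical pair of equations, not taken from Bézout), one can take t = t′ = 2 and a 1 = a′ 1 = 1, so that D = 2·2 − 1·1 = 3, below the product of the degrees. Computing the resultant with respect to x 2 in sympy:

    from sympy import symbols, resultant, degree

    x1, x2 = symbols('x1 x2')
    # Both equations have total degree 2 but degree at most 1 in x1:
    f = x1*x2 + x2**2 + x1 + 1
    g = 2*x1*x2 + x2**2 + x2 - x1 + 3

    r = resultant(f, g, x2)     # eliminate x2
    print(r)                    # 3*x1**3 + 7*x1**2 - 7*x1 + 5
    print(degree(r, x1))        # 3 = t*t' - (t - a1)*(t' - a1'), whereas t*t' = 4

For generic coefficients with this support one expects the degree D = 3 to be attained, as it is here.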
Orders and species. What Bézout calls a "complete polynomial" is a polynomial, generic of its degree. Non-generic polynomials are called "incomplete". Bézout discriminates between several "orders" of incomplete polynomials. He thus defines the "incomplete polynomials of the first order" as those verifying the following conditions: 16
1° that the total number of unknowns being n, their combinations n by n should be of some degrees, different for each equation;
2° that their combinations n − 1 by n − 1 should be of some degrees, different not only for each equation, but also for each combination;
3° that their combinations n − 2 by n − 2 should be of some degrees, different not only for each equation, but also for each combination; etc. 17
Among polynomials of this order, Bézout distinguishes several "species" (by the way, the two equations already mentioned in the example above are from the "first species of incomplete equations"). He says: "As it is not possible to attack this problem from the forefront (the problem of incomplete polynomials of first order), I took it in the inverse order, first supposing the absence of the highest degrees of the combinations one by one, then the absence of those and of the highest degrees of the combinations two by two, etc., and also supposing some restrictive conditions in order to facilitate the intelligence of the method (...)" We shall soon describe the "restrictive conditions" alluded to.
Bézout's symbolic notations for incomplete polynomials are such: For example, the second line describes a polynomial that we could write today, in a slightly modernized notation but still keeping with Bézout's unusual underscripts: where u, x, y, z, ... are the unknowns. Finally, in section III of book I, Bézout introduces the second, third, fourth orders, etc. of polynomials, represented by this notation: (u a,a,a,... ...n) t,t,t,...
where a ≤ā ≤ā ≤ ... and t ≥t ≥t ≥ .... We could write such a polynomial under the form: k≤ā,...,k+k +k +...≤t The whole book I of Bézout's treatise is thus progressing in an increasing order of generality. Bézout says, several times, that the equations studied earlier are particular cases of the new forms under study 18 .
Four important cases
We don't want to describe in full generality all the cases studied by Bézout. We shall concentrate on the following four large classes of equations:
• complete equations, for all n;
• the first species of incomplete equations, for all n;
• the second species of incomplete equations, for all n;
• the third species of incomplete equations, for n = 3.
Only for those four classes does Bézout give an explicit formula for the degree of the eliminand when each proposed equation is generic within its species. We shall give a detailed summary of Bézout's demonstration for the second species, slightly modernized as regards symbolic notations and algebraic terminology 19 .
To describe our notations, let us consider a system of n equations f (1) = 0, f (2) = 0, ..., f (n) = 0 in n unknowns, where the f (i) are elements of a polynomial ring C = K[x 1 , x 2 , ..., x n ]. Bézout himself is using several kinds of indices: an upper index means the equation number, and lower indices mark the unknown quantities 20 . We shall also use multi-index notations and write k = (k 1 , k 2 , ..., k n ) and x^k = x 1 ^k 1 x 2 ^k 2 ... x n ^k n . The support supp(f (i) ) of a polynomial is the set of points k ∈ Z n such that the monomial x^k has a non-zero coefficient in f (i) . The main breakthrough of Bézout is thus to distinguish cases with respect to the convex envelope of supp(f (i) ).
Let t, a 1 , a 2 , ..., a n be non-negative integers verifying the "restrictive conditions" alluded to in the quotation above. These integers define a convex set E t,a = { k ∈ N^n : k i ≤ a i for each i, and k 1 + ... + k n ≤ t }. For n = 3, such a convex set is the top polyhedron on figure 4 at the end of this article. For any such t and a, we define C ≤t,a , the sub-vector space of C over K of polynomials with support in E t,a . An incomplete equation of the first species is a generic member of C ≤t,a . As for systems of equations, let there be given, for any index i ∈ {1, 2, ..., n}, such a set of integers t (i) , a (i) , and let f (i) be a generic member of C ≤t (i) ,a (i) . (Results for these cases will be derived from the case of the second species; we also provide a more elementary proof in the appendix when n = 3. As for the third species of incomplete equations, we shall say more in section 8, with a demonstration in section 11.) 20 cf. [5] § 62.
Finally, the genericity of f (i) actually means that supp(f (i) ) = E t (i) ,a (i) and that the non-zero coefficients of f (i) are indeterminates adjoined to a base field; for example, let us say K is purely transcendental over Q. The equations f (i) = 0 are what Bézout calls a system of "incomplete equations of the first species".
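Under this reading of E t,a (exponent vectors bounded coordinate-wise by the a i and in total degree by t), the dimension of C ≤t,a is simply the number of lattice points of E t,a , which is easy to enumerate. The following short sketch (an illustration with arbitrarily chosen values, not a computation from the treatise) does exactly that:

    from itertools import product

    def E(t, a):
        """Exponent vectors k with k_i <= a_i for every i and k_1 + ... + k_n <= t."""
        return [k for k in product(*(range(ai + 1) for ai in a)) if sum(k) <= t]

    # dim C_{<= t, a} is the number of such monomials; e.g. with n = 3, t = 4, a = (2, 3, 4):
    print(len(E(4, (2, 3, 4))))    # 30 monomials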
As for the "second species of incomplete equations" 21 , let t, a 1 , a 2 ,..., a n , b be non-negative integers satisfying the following restrictive conditions: These integers define a convex set E t,a,b in Z n : For any such t, a, b, we define the sub-vector space C ≤t,a,b of C over K of polynomials with support in E t,a,b . An incomplete equation of the second species is a generic member of C ≤t,a,b . As for the "third species of incomplete equations" 22 , when n = 3, let t, a 1 , a 2 , a 3 , b 1 , b 2 , b 3 be non-negative integers satisfying the following restrictive conditions: These integers define a convex set E t,a,b in Z 3 : An incomplete equation of the third species is a generic polynomial equation with support in any such convex set. Bézout further subdivides the third species into eight "forms", according to the algebraic signs of the quantities His second, third and seventh forms are exchanged under permutation of the unknowns, as well as his fourth, fifth and eighth forms. The four lower polyhedra on figure 4 at the end of this article represent supports belonging to the first, the third, the fifth and the sixth forms.
Three different views on elimination
The early article by Bézout (1764) was based on the following idea. The elimination process is split into a sequence of operations. Each operation consists in multiplying some of the previously obtained equations by suitable polynomials, and then building the sum of the products. This idea was already known. Yet, it is perfected in book II of Bézout's 1779 treatise. There, Bézout suggests representing directly the final equation resulting from the elimination as such a sum of products of the initial equations by suitable polynomials. This representation could well seem natural to the modern reader used to the concept of "ideal" inherited from end-of-nineteenth-century algebra. Analysing the proofs of different cases of Bézout's theorem as he wrote them in his treatise, we are going to study three different views along the path leading to Bézout's concept of sum-equation.
We shall limit ourselves in this section to the case of three equations f (1) = 0, f (2) = 0, f (3) = 0 with three unknowns, of respective degrees t (1) , t (2) , t (3) . Bézout postulates at first the existence of a unique final equation resulting from the elimination of two unknowns (e.g. x 2 and x 3 ), of minimal degree D. Among the three different views that we are going to expound, the first two consist in multiplying f (1) by a "multiplier-polynomial" φ (1) with undetermined coefficients, of degree (T − t (1) ), where T is a large enough integer, and then making all monomials "vanish" in the product φ (1) f (1) , except the monomials 1, x 1 , x 1 ^2, ..., x 1 ^D. Those monomials build the final equation that we are looking for 23 . This vanishing could be operated in two steps: first use equations (2) and (3) to make as many monomials as possible vanish, and then use the classical method of undetermined coefficients to do the rest. After having used equations (2) and (3), in order to apply the method of undetermined coefficients, there should remain as many vanishable terms (each one of them gives an equation) as undetermined coefficients in φ (1) . But the situation is complicated by the fact that only some of the undetermined coefficients provided by φ (1) could, according to Bézout, serve the purpose of elimination, several of them being "useless". Finally, Bézout ends up with a formula: (number of terms remaining in φ (1) f (1) ) − D = number of useful coefficients in φ (1) . He would then use this formula to calculate D.
As we can see, this method is quite ambiguous, as long as we don't give a more precise meaning to the word "useless". Bézout says that the number of "useless" coefficients in φ (1) is precisely the number of monomials that we could make vanish in φ (1) using (2) and (3). This is, at best, mysterious, as long as we don't give a precise meaning to all this in terms of dimensions of vector spaces. Bézout does not say anything to clarify this situation, as he reserves the effective calculation for book II.
Substituting monomials. We have said that there are three different views on elimination in Bézout's treatise, and we have just expounded the common setting of the first two of them. What differentiates them is the way to count the number of monomials that we could make vanish in a given polynomial, thanks to given equations. In his first proof 24 , Bézout says that [...]; this reminds us of Newton's method of elimination of highest powers of the unknown 25 , generalized to any number of equations and unknowns. It is remarkable that in 1764, after having said that Euler's and Cramer's methods 26 of elimination can only be used on systems of two equations, Bézout adds that Newton's method has the same shortcoming: "In fact, Newton's method does not require to compare equations two by two. Nevertheless, it has no advantage over Euler's and Cramer's method for systems with more than two equations: then, the final equation is mixed with useless factors." 27 In 1779, Bézout does not mention explicitly "Newton's method". He rather speaks of "the principle of substitution"; and the way he uses this idea is quite ambiguous. Although the method itself is described as an effective means of calculating the final equation, it is diverted to the calculation of the degree of the final equation. Bézout never seems to worry about the effective calculation of the final equation. 28 After his proof for the case of "complete equations", he adds:
23 Bézout proceeds in this way for "incomplete equations of the first species" for example, cf. [5], § 59-67. 24 As he does for "complete equations" (complete, i.e. generic of their degrees), cf. 26 When Bézout mentions Euler in [3], he is surely referring to [15]. For a quick overview of Euler's method in [15], let there be two equations of degrees m and n in y: This expression is homogeneous of degree mn in A, B, C, ..., a, b, c, ..., and symmetrical with regard to both sets of roots. We thus could obtain it in terms of the elementary symmetrical functions of the roots, i.e. in terms of the coefficients P, Q, ..., p, q, ...; moreover the degree in x of every coefficient coincides with its degree in the roots, so that the degree in x of the final equation is also mn. 27 cf. [3], p. 290. 28 A simple example could illustrate this problem. Suppose deg f (2) = deg f (3) = 2 and deg φ (1) = 4, and let us make vanish as many terms as possible in φ (1) , thanks to equations (2) and (3). In a naive interpretation of Newton's method, let us start and make vanish the terms divisible by x 3 ^2 in f (2) and φ (1) thanks to (3). If one then makes vanish the terms divisible by x 2 ^2 in φ (1) thanks to (2), other terms divisible by x 3 ^2 will re-appear. A clever use of a "monomial order" would remedy this situation, but this was unknown to Bézout. Today, Groebner basis computations rely upon the idea of ordering monomials. About the history of Groebner bases, cf. [14] p. 337-338.
The idea of substitution is the nearest approximation to the elementary ideas of elimination in systems of equations of the first degree 29 . Although we could apply the same idea to incomplete equations, we are going to present another point of view, that can be applied in a general way, whereas the principle of substitution would need modifications and particular attentions if we should keep up with it. 30 Bézout is announcing now a second view on elimination.
Using multiplier-polynomials. In his second proof 31 , Bézout explains that deleting terms in a given polynomial, thanks to equations (2) and (3), amounts to the use of new multiplier-polynomials: We ask how many terms we could make vanish in a given polynomial, thanks to these equations, without introducing new terms.
Suppose that there is only one equation; if, having multiplied it by a polynomial (...), we add the product (...) to the given polynomial: it is obvious that 1° this addition will not change anything to the value of the given polynomial;
2° supposing the multiplier-polynomial is such as not to introduce new terms, we shall be able to make vanish, in the given polynomial, as many terms as there are in the multiplier-polynomial, because each of them brings one coefficient (...) 32
Thus, in order to make terms vanish in φ (1) f (1) thanks to (2), Bézout is using a polynomial multiplier φ (2) of degree T − t (2) , and he studies the sum φ (1) f (1) + φ (2) f (2) . Then, to make terms vanish thanks to (3), he studies the sum φ (1) f (1) + φ (2) f (2) + φ (3) f (3) . As we can see, the order of proceeding for effective calculation is still imbued with ambiguity.
The sum-equation. The first and second views described above are, at best, heuristic views. They could in no way be seen as rigorous proofs, nor as effective algorithms. The third view on elimination in Bézout's treatise is explained in book II. It is the only one that we shall refer to when summarizing the calculation of the degree of the final equation, in the next section. At some point before or during the writing of his treatise, Bézout must have become aware of the following fact: the set of polynomials that are sums of products of n given polynomials by multiplier-polynomials is closed under this operation. That is to say, the result obtained after iterating several such operations is again a sum of products of the given polynomials by multiplier-polynomials. All steps become, so to speak, united: From now on, we shall study elimination in a way that differs from what precedes, but not essentially.
29 For t (1) = t (2) = t (3) = 1, this method embodies what is now termed "Gaussian elimination". 30 cf. [5], § 54. 31 As he does for "incomplete equations of the first species", cf. [5], § 59-67. 32 cf. [5], § 60.
Let us think that every given equation is multiplied by a specific polynomial, and that we add up all those products. The result is called "sum-equation". The sum-equation becomes the final equation through the vanishing of all terms containing any unknown that we should eliminate.
We shall now 1° settle the form of every multiplier-polynomial; 2° determine how many coefficients, in each multiplier-polynomial, could be considered as useful for the elimination (...) 33
To calculate the degree of the final equation, one thus has to:
• multiply each equation f (α) = 0 by a polynomial multiplier φ (α) with undetermined coefficients, of degree T − t (α) ;
• build the "sum-equation";
• ask that all terms vanish, except 1, x 1 , x 1 ^2, ..., x 1 ^D (a small worked instance is sketched below).
In the event of a single solution, the method of undetermined coefficients implies that: number of equations ≥ (number of undetermined coefficients) − (number of useless coefficients). 33 Cf. [5], § 224.
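The following sketch carries out this recipe on a deliberately tiny, hypothetical pair of equations (not taken from the treatise): the multipliers' coefficients are left undetermined, the terms of the sum-equation containing the unknown to be eliminated are required to vanish, and what remains is the final equation.

    from sympy import symbols, expand, Poly, solve

    x, y, a, b, c, d = symbols('x y a b c d')
    f1 = x + y - 1                 # degree t1 = 1
    f2 = x - y**2                  # degree t2 = 2
    phi1 = a + b*x + c*y           # multiplier of degree T - t1 = 1, with T = 2
    phi2 = d                       # multiplier of degree T - t2 = 0
    sum_eq = expand(phi1*f1 + phi2*f2)

    # Require every term of the sum-equation that contains y to vanish:
    p = Poly(sum_eq, x, y)
    conditions = [coeff for (i, j), coeff in zip(p.monoms(), p.coeffs()) if j > 0]
    sol = solve(conditions, [a, b, c], dict=True)[0]

    print(expand(sum_eq.subs(sol)))    # -d*x**2 + 3*d*x - d, the final equation in x alone

The free scale d reflects the fact that the final equation is only determined up to a constant factor; its degree 2 is the product of the degrees t1·t2, as Bézout's theorem predicts for a generic pair.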
Bézout's demonstration: second species of incomplete equations
We shall now summarize Bézout's demonstration concerning the degree of the eliminand of n incomplete equations of the second species in n unknowns, f (1) = 0, f (2) = 0, ..., f (n) = 0, where, for all i, the polynomial f (i) is a generic member of C ≤t (i) ,a (i) ,b (i) and the integers t (i) , a (i) , b (i) verify the restrictive conditions given on page 10 above.
To start with, Bézout takes it for granted that there exists a unique "final equation" of lowest degree resulting from the elimination of (n − 1) unknowns, i.e. an eliminand, that could be represented as φ (1) f (1) + φ (2) f (2) + ... + φ (n) f (n) = 0, where the φ (i) are conveniently chosen "multiplier-polynomials" 34 . This being said, we are not going to discuss its existence here. The important point is that Bézout is studying the linear map sending (φ (1) , ..., φ (n) ) to φ (1) f (1) + ... + φ (n) f (n) . Doing so, he restricts himself to finite sub-vector spaces by putting an upper bound on the degrees of the φ (i) . For any set of integers T, A 1 , A 2 , ..., A n , B as above, let us call f ≤T,A,B = (f (1) , ..., f (n) ) ≤T,A,B the linear map so defined, with values in C ≤T,A,B . We shall also sometimes omit the indices A and B when we fear no confusion. The total number of undetermined coefficients in the φ (i) polynomials is, Bézout says, the number of "useful coefficients" plus the number of "useless coefficients" 35 ; in other words, it is dim(im f ≤T,A,B ) + dim(ker f ≤T,A,B ). Now im f ≤T,A,B is of special interest, since any eliminand would belong to it for large enough values of T, A and B. Moreover, if there exists an eliminand in im f ≤T,A,B , we are naturally led to calculate dim coker f ≤T,A,B 36 . In § 233, Bézout describes how to count the number of "useless coefficients", i.e. dim ker f ≤T,A,B . He says:
"If one remembers what has been said in Book I, one will understand that the number of useful coefficients in the first multiplier-polynomial of the equations undergoing elimination will always be equal to the number of coefficients in this polynomial, minus the number of terms that could be made to vanish in this polynomial, thanks to the n − 1 other equations, n being the total number of equations; that the number of useful coefficients in the second multiplier-polynomial will be the total number of coefficients of this polynomial, minus the number of terms that could be made to vanish in this polynomial, thanks to the n − 2 last equations; that the number of coefficients useful in the third multiplier-polynomial will equal the number of terms of this polynomial, minus the number of terms that could be made to vanish in this polynomial, thanks to the n − 3 other equations; and so on up to the last one, where the number of useful coefficients will be precisely equal to the number of its terms." 37
The argument is inductive. Let us call (1), ..., (n) the n equations. In order to calculate the number of coefficients that could be made to vanish in φ 1 using equations (2), ..., (n), one should use new multiplier-polynomials. To paraphrase what Bézout tells us:
dim(ker f ≤T,A,B ) = number of useless coefficients = (number of coefficients to vanish in φ 1 thanks to (2), ..., (n)) + (number of coefficients to vanish in φ 2 thanks to (3), ..., (n)) + ... + (number of coefficients to vanish in φ n−1 thanks to (n)).
Alas, proving it lies beyond Bézout's means. He seems to have been aware of the difficulty.
35 Cf. [5] § 224. 36 Let us re-phrase this argument in Bézout's terminology: according to his third view on elimination, the method of undetermined coefficients leads to the coefficients of the final equation, and these coefficients are the solution of a system of linear equations. One should have, in the event of a single solution, the inequality written above. As Bézout never proves the existence of the eliminand, what he actually calculates is the right side of this inequality, i.e. dim coker f ≤T,A,B .
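Although the counting of "useless coefficients" is delicate in general, the objects involved (the map f ≤T , its kernel and its cokernel) are concrete enough to be computed directly in small cases. The sketch below does this for two complete equations in two unknowns, with hypothetical sample polynomials: it builds the matrix of the map (φ (1) , φ (2) ) ↦ φ (1) f (1) + φ (2) f (2) on monomial bases and checks that the dimension of its cokernel equals the product of the degrees.

    from itertools import product
    from sympy import symbols, Poly, Matrix, expand

    x, y = symbols('x y')
    f1 = x**2 + y**2 - 5        # t1 = 2
    f2 = x*y - 2                # t2 = 2
    T = 5

    def monomials(d):
        """Exponent pairs (i, j) with i + j <= d."""
        return [e for e in product(range(d + 1), repeat=2) if sum(e) <= d]

    basis = monomials(T)                             # monomial basis of C_{<= T}
    index = {e: i for i, e in enumerate(basis)}

    def row(g):
        """Coefficient vector of g in the monomial basis of C_{<= T}."""
        p = Poly(expand(g), x, y)
        v = [0] * len(basis)
        for mono, coeff in zip(p.monoms(), p.coeffs()):
            v[index[mono]] = coeff
        return v

    rows = [row(x**i * y**j * f) for f, t in [(f1, 2), (f2, 2)] for (i, j) in monomials(T - t)]
    M = Matrix(rows)
    print(len(basis) - M.rank())    # 4 = t1*t2; here the eliminand is x**4 - 5*x**2 + 4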
The problem could be reduced to proving the following statement. Although Bézout goes to great lengths studying this situation 38 , there is, as far as we could understand, no proof of this statement in his treatise. For the time being, let us admit this statement (see section 11 for the proof). Then we can write two relations, (6) and, by recurrence on the number of equations, (7), as written above. Let us now use the following notation for finite differences 39 of a given polynomial P(T, A, B). From (6) and (7), by gathering terms, one finds a recurrence formula, where we omit the indices A, B for brevity. This recurrence formula is the heart of Bézout's computations. As dim C ≤T,A,B is a polynomial of degree n in T, A, B, after applying n times the operator ∆, one must obtain a constant independent of T, A, B. Eventually, Bézout finds the value of this constant 40 . In 1782, three years after Bézout, Waring takes over this formula in the preface to the second edition of his Meditationes algebraicae 41 . Most importantly, in 1779, Bézout had also noticed that this n-th order finite difference could be written as an alternate sum 42 . Later, Cayley will shed light upon this alternate sum.
38 See [5], § 107-118, where he tries to convince his reader that it is impossible to increase the number of terms vanishing in φ (1) by "fictitious introduction" (introduction fictive) of terms of higher degree. 39 Bézout's own notation for higher order finite differences could be defined by recurrence; the relation between his notation and ours could thus be expressed accordingly. 40 More details of this calculation will be given in the proof of prop. 8, section 11. 41 Waring writes (p. xvii-xx of the preface to the second edition): si sint (h) aequationes (n, m, l, k, &c.) dimensionum respective totidem incognitas quantitates (x, y, z, v, &c.) involventes; et sint p, q, r, s, &c.; p', q', r', &c.; p'', q'', &c.; maximae dimensiones, ad quas ascendunt incognitae quantitates x, y, z, v, &c., in respectivis aequationibus (n, m, l, k, &c.) dimensionum; tum aequationem, cujus radix est x vel y vel z, &c. haud ascendere ad plures quam simul sumptarum haud superent dimensiones a, a', a'', &c. in predictis aequationibus; tum ae-
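The alternate sum mentioned here is not reproduced above, but in the special case of complete equations the corresponding identity is the following: the n-th order finite difference of dim C ≤T (the binomial coefficient "T + n choose n"), taken with steps t (1) , ..., t (n) , equals the product of the degrees. A short numerical check of this identity (my own sketch, with arbitrary degrees):

    from itertools import combinations
    from math import comb, prod

    def dim_le(T, n):
        """Dimension of the space of polynomials of degree <= T in n unknowns."""
        return comb(T + n, n) if T >= 0 else 0

    def alternate_sum(degrees, T):
        """Sum over subsets S of the degrees of (-1)^|S| * dim C_{<= T - sum(S)}."""
        n = len(degrees)
        return sum((-1) ** r * dim_le(T - sum(S), n)
                   for r in range(n + 1) for S in combinations(degrees, r))

    degrees = [2, 3, 4]
    T = sum(degrees) + 5            # any T at least t(1) + ... + t(n) will do
    print(alternate_sum(degrees, T), prod(degrees))    # 24 24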
7 Complete equations and the first species of incomplete equations
As was said above, one could, from the formula for the degree of the eliminand of n incomplete equations of the second species, derive the formulae for complete equations and for the first species of incomplete equations. For n incomplete equations of the first species, i.e. equations in x 1 , x 2 , ..., x n of the form introduced above, Bézout thus obtains an explicit formula for the degree of the final equation resulting from the elimination of (n − 1) unknowns. (Waring forgets to mention the "restrictive conditions", see p. 10 above; with his notations, those conditions require that the corresponding inequalities hold.) Specializing this formula, one obtains the well-known result for the case of n "complete equations", i.e. generic of their degrees: the degree of the final equation is the product t (1) t (2) ... t (n) of the degrees. In fact, for general n, only this case of Bézout's theorem will be the object of rigorous study in the XIXth century (cf. Serret [43], Schmidt [41], Netto [34] vol. 2).
8 Bézout's theorem for three incomplete equations of the third species
As for the "third species of incomplete equations", when n = 3, Bézout is hitting another major problem: dim C ≤T,A,B is no longer a polynomial in T, A, B. It is, so to speak, polynomial by pieces. For each of the eight "forms" corresponding to the different algebraic signs of the three quantities that define them (cf. the description of the eight forms above), dim C ≤T,A,B is a different polynomial in T, A, B; let us call P i (T, A, B) = dim C ≤T,A,B when the values of T, A, B belong to the i-th form. Suppose that the argument developed above for the second species of incomplete equations is also valid for the third species; then one should have, as above, an analogous expression for the degree of the final equation. The rest of the computation could only be done under the assumption that all vector spaces C ≤... actually involved in this expression belong to the same "form". Bézout calculates the corresponding values D 1 , D 2 , ..., D 8 . The actual eight formulae occupy no less than eight full pages of the treatise 44 . He then proposes a test, or rather, "symptoms" 45 to reject some of those eight values. The other values are "admissible"; as such, all of them must be equal to the degree of the eliminand, according to Bézout. In section 11 below, we shall prove that Bézout's choice was right.
44 Cf. [5] § 119-127. 45 See the "symptoms enabling us to recognize, among the different expressions of the value of the degree of the final equation, those that one should choose or reject", in [5] § 117. Here again, Bézout's own justifications lack evidence; they rely upon undemonstrated facts about the sum-equation, as in footnote 38 p. 19 above.
The theory of the resultant in XIXth century
We have just presented Bézout's slowly matured treatise and the fertile historical context of its publication. We are struck by the lack of immediate posterity of this book: sixty years separate the publication of Bézout's treatise from the first revival of what was reckoned, in the XIXth century, as the "theory of elimination". Bézout's treatise was complex; it was clearly perceived as such by one of his early readers, and this fate has followed it until today. Poisson recognized the importance of Bézout's work but immediately pointed to the gap between the strength of the theorem and the "difficulties" of its demonstration: "This important theorem is Bézout's, but the way he proves it is neither direct nor simple; nor is it devoid of any difficulty." 46 Three mathematicians produced, so to speak, a new beginning in elimination theory, between 1839 and 1848: their names are Sylvester, Hesse, Cayley 47 . The main object of study is, rather than the eliminand, the "resultant" of n homogeneous polynomials in n indeterminates.
Before entering into the works of those three scholars, it is to be noted that two special cases progressed in the first half of the XIXth century: linear equations, thanks to the theory of determinants, and the case of two equations in one or two unknowns. When eliminating one unknown between two equations in two unknowns, one obtains the eliminand. When eliminating one unknown between two equations in one unknown (or between two homogeneous equations in two unknowns, which is the same thing), one obtains the resultant. The discriminant is the resultant of a polynomial and its derivative. The interest in the discriminant is motivated by the study of the Euclidean algorithm for polynomials, following Sturm's research on the roots of polynomials over R. The case of two equations also benefits from the methods of analysis, for example in Ossian Bonnet's works culminating in 1847, when he defines the intersection multiplicity of two curves at a point.
46 Cf. [39] p. 199. See also Brill and Noether, saying about Bézout's book that it is "as well-known as lacking readers", and that, by the time of Jacobi, "most of it had fallen into oblivion", cf. [8] p. 143 and 147. 47 Some historians and mathematicians have said that Sylvester and Richelot (1808-1875, a student of Jacobi) had discovered the "dialytic" method of elimination, although this method was not very different from Bézout's method when dealing with only two equations; it is also said that Hesse had re-discovered this method in 1843. Cf. Max Noether, [36], p. 136, about Sylvester [46], [47] and [48], and Richelot [40]. Eberhard Knobloch [28,29] noticed that Leibniz already knew this method; perhaps Leibniz was even closer to Sylvester's ideas than Euler and Bézout. In what follows, we shall only draw a comparison between the works of Sylvester, Richelot and Hesse, and those of Bézout from 1779 concerning the sum-equation, with an emphasis on the case of n equations when n > 2.
Sylvester. Sylvester's researches were stimulated by Sturm's theorem. When applying the Euclidean division algorithm to two polynomials in one indeterminate, the successive remainders are also called "Sturm functions". In 1840, Sylvester gives a formula in terms of determinants to calculate Sturm functions. As was known long before, the last remainder, being of degree 0, can be seen as the result of the elimination of the indeterminate. This is what Sylvester calls the "dialytic method of elimination". There is no demonstration in this short article. Maybe Sylvester did not know, in 1840, of Euler's and Bézout's works about elimination. He does not refer to them; later, in 1877, he himself says that he had discovered the dialytic method while teaching a pupil 48 .
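For two polynomials in one indeterminate, the dialytic idea comes down to the Sylvester matrix, whose determinant is the resultant. As a small sketch (in Python with sympy, not tied to any particular source discussed here), one can build this matrix explicitly and compare its determinant with the built-in resultant:

    from sympy import symbols, Poly, Matrix, resultant

    x = symbols('x')
    f = 2*x**3 + x - 1           # degree m = 3
    g = x**2 - 3*x + 2           # degree n = 2

    def sylvester(f, g, x):
        """Sylvester matrix of f and g with respect to x, of size m + n."""
        pf, pg = Poly(f, x), Poly(g, x)
        m, n = pf.degree(), pg.degree()
        cf, cg = pf.all_coeffs(), pg.all_coeffs()
        rows = [[0]*i + cf + [0]*(n - 1 - i) for i in range(n)]   # n shifted rows built from f
        rows += [[0]*i + cg + [0]*(m - 1 - i) for i in range(m)]  # m shifted rows built from g
        return Matrix(rows)

    S = sylvester(f, g, x)
    print(S.det())               # determinant of the Sylvester matrix
    print(resultant(f, g, x))    # the built-in resultant agrees (nonzero: no common root)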
In 1841, Sylvester develops the dialytic method in the case of three quadratic homogeneous equations in three unknowns x, y, z. The interest in homogeneous equations is crucial to our subject, and it could well be explained by the fusion between projective geometry and algebraic geometry under the influence of Möbius and Plücker. Sylvester develops several versions of his dialytic method. In one of them 49 , one thus obtains 10 equations in the 10 monomials of degree 3. If one considers those monomials as the 10 independent unknowns of a system of 10 linear equations, elimination is reduced to the calculation of a single determinant. As compared to the method of the sum-equation, this method spares the use of undetermined coefficients. Moreover, it is a symbolic method. It uses the ambivalence of the symbolic expression of the monomials: each monomial is both a monomial in the unknowns of the initial system, and an independent unknown of a system of linear equations. This allows the transfer of the symbolic methods of linear algebra (determinants, and soon, matrices) to the algebra of forms of higher degree 50 .
48 "(...) when a very young professor, fresh from the University of Cambridge, in the act of teaching a private pupil the simpler parts of Algebra, I discovered the principle now generally adopted into the higher text books, which goes by the name of the Dialytic Method of Elimination".
Contemporary readers must have been surprised by the use of the jacobian determinant. In general, for equations of higher degree and systems with more unknowns, it still remains to explain the appropriate choice of linear equations. In an article [49] published in the same year, Sylvester gives a general method. We are not going to describe it entirely. Suffice it to say that, after having obtained a first set of equations called augmentatives by multiplying each initial equation by the monomials (like the 9 equations of degree 3 above), Sylvester builds other equations called secondary derivatives, as follows. He writes each of the n initial equations under the form x^α F + y^β G + z^γ H + ..., where x, y, z, ... are the n unknowns to eliminate, F, G, H, ... are polynomials, and α, β, γ, ... is any system of integers allowing such a representation. He thus obtains a system of n equations. 50 Sylvester is probably referring to this in particular when he says that the "great principle of dialysis, originally discovered in the theory of elimination, in one shape or another pervades the whole theory of concomitance and invariants", cf. [50], p. 294.
The following determinant is one of the secondary derivatives: The many choices possible for α, β, γ, etc. allow as many secondary derivatives.
Hence there is no need of augmentatives. The expression obtained by eliminating dialytically between the m equations of degree (m − 1) is none other than the determinant of a matrix that Bézout had already studied [3] in 1764. Bézout's matrix might thus have inspired Sylvester's general method. This matrix also played an important role in his famous later memoir On a theory of syzygetic relations 51 . Still in 1841, Sylvester also considers the possibility of building augmentatives of degree at least t i − n + 1, where t i is the degree of the i-th initial equation. In this case, the augmentatives suffice to build a determinant, and there is no need of secondary derivatives. This is the groundwork for a method developed by Cayley in 1848 (see below).
One must say that Sylvester's elimination method often brings out a superfluous factor; Sylvester doesn't give any means of detecting and isolating this factor. His method leads directly to the resultant in but a few cases, such as the two cases mentioned above for two or three equations. Despite this drawback, Sylvester has clearly circumscribed a domain of research not limited to the resultant or the eliminand. 52
Hesse. Applying elimination to the study of plane cubics in 1844, Hesse goes back to the formalism of Bézout's "sum-equation". He knew of Bézout's treatise, and he does mention it 53 . Let there be three quadratic equations U = 0, V = 0, W = 0 in two unknowns. In order to eliminate the two unknowns, Bézout would have used multiplier-polynomials of degree 2. Following the way of thinking of XIXth century algebraists, let us homogenize the sum-equation of degree 4, thus getting a third unknown z. Then there exist multiplier-polynomials A, B, C such that:
We must keep in mind this sum-equation when studying the other equations derived by Hesse:
• First of all, Hesse translates as a sum-equation the method of Sylvester using the jacobian determinant. If φ is the jacobian determinant of U, V, W, one could obtain a sum-equation of degree 3 thanks to multiplier-polynomials of degree 1, where deg A = deg B = deg C = 1 and δ is a constant 55 .
• Hesse then observes that the jacobian itself can be obtained by a sum-equation with multiplier-polynomials of degree 2: AU + BV + CW = zφ.
52 In 1997, Jean-Pierre Jouanolou gave a complete study of the secondary derivatives, which he calls "formes de Sylvester", in [27], § 3.10. 53 Cf. [22]. Hesse also knew of Richelot and Sylvester, and of the works of Euler. He extends to n > 2 equations the method of Euler-Cramer using symmetrical functions, as Poisson had done in 1802, although he probably didn't know of Poisson's article. 54 It is to be noted that Sylvester had also mentioned this sum-equation, translated in his own dialytic formalism where one would multiply by monomials of degree 2, in a footnote in [48], p. 64-65. 55 Cf. [22], § 8-10.
• He also proves that the partial derivatives of φ could be obtained in the same fashion, under the form AU + BV + CW + δφ = z ∂φ/∂x.
• One is thus allowed to calculate the resultant R with multiplier-polynomials of degree 0, thanks to the partial derivatives of the jacobian determinant 56 :
As modern elimination-theorists would say 57 , Hesse thus studied the resultant, the jacobian, and the partial derivatives of the jacobian, as Trägheitsformen, or inertia forms, of the ideal (U, V, W). We won't describe how Hesse applied these calculations to the case where U, V, W are the partial derivatives of the homogeneous polynomial defining a plane cubic.
57 where m = (x, y, z); this language is still in use today, cf. [26]. 58 Cf. [18] p. ix.
Cayley asks for the number of "arbitrary constants" in Θ. In modern language, this number is dim (U, V, ...) r . Let us follow Cayley's application of the dialytic method. Let us multiply U by each monomial of degree r − m (for large enough r), and V by each monomial of degree r − n, and so on. Let C be the ring of polynomials. We shall use the compact notation: This number is thus the number of linear equations to be solved when using the dialytic method. The dim(C r ) monomials of degree r will be the independent unknowns of this system of linear equations. We shall represent this system by a matrix where each column corresponds to one equation 59 . This matrix (cf. figure 1 at the end of this article) is the matrix of a linear map. Going back to the concept of "sum-equation", or "involution" as Cayley used to say, one sees that f 1 is defined by sending a tuple of multiplier forms to the corresponding sum of products. In order to eliminate dialytically, one needs dim C r independent columns; then, one could extract a square sub-matrix and build the determinant. To check that the elimination is possible, Cayley sets out to calculate a certain number N. The relations of linear dependency between columns are given by families of coefficients that Cayley writes as lines of coefficients under the original matrix (cf. figure 2). In terms of the sum-equation, these relations constitute the kernel of f 1 . Cayley admits without proof 60 that ker f 1 is generated by elements of C r−m of the following form: where M ∈ C r−m−p ...
59 The matrices mentioned here and below, and the solution of this problem of linear algebra, only appeared in a second article [11] published by Cayley in 1848. Moreover, we should rather speak of a "matricial figure" than a "matrix", because this mathematical object was not yet seen as an operator, and its properties were not yet completely developed in the 1840's. Figures 1, 2 and 3, at the end of our article, are loose reproductions of figures in [11].
Hence, it is generated by dim C r−m−n vectors. The height of the second block in the matricial figure is thus dim C r−m−n , and ker f 1 is generated by the image of a second map f 2 . By iterating this procedure, Cayley obtains a figure that we reproduce as figure 3; but he does not explain the following steps. Following the path suggested by Cayley, one should endeavour to write the sequence of linear maps thus obtained, and prove that it is an exact sequence 61 , where im f i+1 = ker f i . Cayley concludes rashly: For large enough values of r, this quantity can be expressed in terms of binomial coefficients. As a matter of fact, N = 0: hence, according to Cayley, dialytic elimination is possible, for any given number of homogeneous equations with as many unknowns. What also distinguishes XIXth century authors from their predecessors, including Bézout, is the endeavour to give explicit formulae for the result of elimination. Cayley is looking for a formula for the resultant.
60 Cf. [10], p. 261, "the number N must be diminished by...". 61 Cf. [11], p. 371. Of course, the fact that the sequence is exact is not at all obvious, and Cayley did not prove it. The first historical proof will be mentioned in section 10 below.
In 1847, he gives the following formula:
where, for all i, Q i is a subdeterminant extracted from the matrix of f i . The choice of these subdeterminants obeys the following rule. There are as many columns in the matrix of f i as there are rows in the matrix of f i+1 . The rule is that the set of columns occurring in Q i must be the complement of the set of rows occurring in Q i+1 . As for the last one, say Q j , it is the determinant of a maximal square submatrix of the matrix of f j , and it must be chosen such that all Q i are nonzero. Several authors (Salmon, Netto) seem to have tried proving Cayley's formula, until Macaulay gave up this task and found another formula, simple but "less general", of the form
Koszul complex
We have seen the role of an exact sequence leading to an alternate sum of dimensions in Cayley's work about the resultant, although Cayley describes only the first two maps of the sequence, and he omits any proof of exactness.
On the other hand, let us look back at Bézout's work on complete equations. Bézout's alternate sum 64 obtained through finite differences is quite similar to Cayley's alternate sum. There is just one more unknown in the equations given, because Bézout is studying the eliminand, whereas Cayley is studying the resultant. Bézout's result could be deduced from Cayley's through homogenizing.
As mentioned previously, Serret and Schmidt gave a rigorous proof of Bézout's theorem in the case of the eliminand of n complete equations in n unknowns. Their method could also apply to the case of the resultant. But Serret and Schmidt bypass the need to study the exact sequence above: their proof is an a posteriori proof of the degree of the eliminand.
Hurwitz. The first in-depth study of the first two maps of the sequence described by Cayley was given by Hurwitz in 1913. Hurwitz follows the ideas of Mertens [33], who had already given a complete but complicated theory of the resultant in 1886. In this paragraph and the next, we shall consider r homogeneous polynomials f 1 , f 2 , ..., f r of degrees t 1 , t 2 , ..., t r in a polynomial ring C = K[a, b, ...][X 0 , ..., X n ] over a number field, where X 0 , ..., X n on one hand, and a, b, ... on the other hand, are indeterminates. The indeterminates a, b, ... will serve the purpose of building "generic" polynomials in X 0 , ..., X n . Let us call a αk the coefficient of X k ^t α in f α . Consider the ideal M = (X 0 , ..., X n ). Hurwitz introduces a new concept: he says that f ∈ C is a Trägheitsform if there exists an integer k ≥ 0 such that M^k f ⊂ (f 1 , ..., f r ).
The smallest integer k such that M^k f ⊂ (f_1, ..., f_r) is called the "rank" of f (Stufe). A Trägheitsform is said to be "proper" if its rank is nonzero. Hurwitz gives a general study of the Trägheitsformen. Let us first consider the case where n = r. Hurwitz proves that the Trägheitsformen of rank σ are of degree ∑ t_α − n − σ + 1. He shows that all Trägheitsformen of rank 1 belong to the ideal (J, f_1, ..., f_n), where J is the Jacobian determinant of f_1, ..., f_n; in rank > 1, he succeeds in giving explicit formulae at least for some of the Trägheitsformen, those that are linear in the coefficients of each of the polynomials f_1, ..., f_n. He proves the existence of a Trägheitsform of degree 0 which generates the ideal of all Trägheitsformen of degree 0: this is the resultant. In the case r < n, Hurwitz succeeds in proving that (f_1, ..., f_r) has no proper Trägheitsform. Consider the following assertions: (I_n) for all r < n, the ideal (f_1, ..., f_r) has no proper Trägheitsform. Hurwitz proves I_n ⇒ II_n ⇒ I_{n+1}. We recognize in II_n the assertion made by Cayley about the exactness of his sequence in degree 1. Finally, without any condition on r, n, Hurwitz also proves, with a similar argument, that (f_1, ..., f_r) has no proper Trägheitsform of degree > ∑ t_α − r.
Koszul's complex
An explicit description of the exact sequence touched upon by Bézout, and later conjectured by Cayley, is to be found in the work of the algebraist Koszul, taking over the tools of differential geometry in the late 1940's. Koszul's complex is an avatar of de Rham's complex of differential forms 66. Let there be given r homogeneous polynomials f_1, f_2, ..., f_r ∈ C = K[X_0, ..., X_n]. Suppose that r ≤ n and that, for all s ≤ r, f_s does not divide zero modulo (f_1, ..., f_{s−1}) 67; let ⊕ Ce_i be the free C-module of rank r. Koszul's complex is the sequence of maps: where each map is defined by the exterior product: Hence, in degree T, the following is an exact sequence of vector spaces: From this one could calculate the dimensions of the vector spaces involved: where we recognize the alternate sum already known to Bézout and Cayley.
66 Retrospective studies on Koszul's work are to be found in Annales de l'Institut Fourier, 37 (1987). See the allocution by H. Cartan [9].
67 Cf. [42], p. 59, where Serre calls M-sequence such a sequence of polynomials. Some authors speak of a regular sequence (cf. [20], II.8, p. 184). See [6] for the main properties of Koszul's complex.
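The displays announced by the colons above have not survived in this copy; the following is a standard rendering of the degree-T strand and of the alternating dimension count it yields, reconstructed by us under the usual grading conventions rather than quoted from the original.

% Degree-T strand of the Koszul complex on f_1,...,f_r (with deg f_i = t_i):
% the k-th exterior power contributes, in degree T, one copy of C_{T-t_{i_1}-...-t_{i_k}}
% for each choice i_1 < ... < i_k; exactness then makes the alternating sum of
% dimensions computable, and this is the sum known to Bézout and Cayley.
\[
  0 \longrightarrow \Lambda^{r}\!\Big(\bigoplus_{i} C e_i\Big)_{T}
    \longrightarrow \cdots \longrightarrow
    \Big(\bigoplus_{i} C e_i\Big)_{T} \longrightarrow C_{T},
\]
\[
  \sum_{k=0}^{r} (-1)^{k}
  \sum_{1 \le i_1 < \cdots < i_k \le r}
  \dim_K C_{\,T - t_{i_1} - \cdots - t_{i_k}},
  \qquad
  \dim_K C_d = \binom{d+n}{n}\ (d \ge 0),\ \ \dim_K C_d = 0 \text{ otherwise}.
\]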
Toric varieties
A theorem by D. N. Bernshtein [1] published in 1975 also gives the degree of the eliminand for systems of equations with support in a convex set. Moreover, this theorem leads to the same kind of alternate sum as was found in Bézout's calculations. Bernshtein uses tools and concepts unknown to Bézout (infinite series in several variables, Minkowski's volume). The following year, an article [30] by Kushnirenko shows how to build a Koszul complex that leads directly to the afore-mentioned alternate sum. It is closely related to the theory of "toric varieties" developed in the 1970's.
We shall now use toric varieties and a Koszul complex; but rather than giving a full account of Kushnirenko's results about equations with support in a convex set, we shall merely give a complete proof of Bézout's theorem for n incomplete equations of the second species (cf. section 6 above), thereby also filling the gap in Bézout's own demonstration.
Let there be r incomplete equations of the second species, with n unknowns: such that supp(f^{(i)}) = E_{t^{(i)},a^{(i)},b^{(i)}} for 1 ≤ i ≤ n, using the notations on p. 10 above. The theory of toric varieties will provide us with an algebraic variety X(∆) which is a compactification of the torus (C^×)^n obtained by glueing affine varieties. This variety will allow a geometric interpretation of the vector spaces of polynomials studied by Bézout, as sets of global sections of some fiber bundles on X(∆). Let us start with a few preliminaries.
Proposition 1
Let P be the convex envelope in R^n of the support E_{t,a,b} of an incomplete equation of the second species. Then E_{t,a,b} = P ∩ Z^n.
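For any support set given explicitly as a list of lattice points, the equality asserted in Proposition 1 can be checked mechanically. The sketch below is ours and assumes the set E is supplied as input (it does not encode Bézout's defining inequalities for E_{t,a,b}, which are not reproduced here); it relies on scipy's Qhull bindings.

# Sketch: verify that a finite set E of lattice points satisfies E = conv(E) ∩ Z^n.
# E is supplied explicitly as a list of integer tuples; the inequalities defining
# Bézout's second-species supports are *not* reproduced here (assumption).
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def equals_hull_lattice_points(E, tol=1e-9):
    pts = np.asarray(sorted(set(map(tuple, E))), dtype=float)
    hull = ConvexHull(pts)          # hull.equations rows [a_1..a_n, b]: a·x + b <= 0 inside
    lo = pts.min(axis=0).astype(int)
    hi = pts.max(axis=0).astype(int)
    inside = []
    for z in itertools.product(*(range(l, h + 1) for l, h in zip(lo, hi))):
        x = np.asarray(z, dtype=float)
        if np.all(hull.equations[:, :-1] @ x + hull.equations[:, -1] <= tol):
            inside.append(z)
    return set(map(tuple, pts.astype(int).tolist())) == set(inside)

# Toy example (lattice points of a 2-simplex: k_i >= 0 and k_1 + k_2 <= 2):
E = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
print(equals_hull_lattice_points(E))   # expected: True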
Proposition 2
Let P and Π be the convex envelopes in R^n of the supports E_{t,a,b} and E_{θ,α,β} of two incomplete equations of the second species. Then E_{t+θ,a+α,b+β} is also the support of an incomplete equation of the second species, and the polytope (P + Π) is its convex envelope in R^n.
Demonstration. By linearity, it is clear that (t + θ, a + α, b + β) satisfy the "restrictive conditions" on p. 10. Let P′ be the convex envelope of E_{t+θ,a+α,b+β} in R^n. The polytope (P + Π) is defined by
Now, using the calculations in the demonstration of prop. 1, one sees 70 that all vertices of P′ belong to P + Π. Hence P′ ⊂ P + Π.
On the other hand, as P, Π and P′ are all defined by sets of linear inequations similar to those on p. 10, it is also obvious that every x + ξ ∈ P + Π belongs to P′. Hence P′ = P + Π. q. e. d.
Example. For n = 3, such polytopes have 8 faces and 12 vertices; cf. an example on figure 5.
Construction of X(∆)
We briefly recall the construction of an algebraic variety over C associated with a fan. The theory of toric varieties associates to every strongly rational convex cone σ an affine algebraic variety U_σ. A strongly rational convex cone is a subset σ of R^n generated over R_+ by a finite family of vectors with rational coordinates, and such that σ ∩ (−σ) = {0}. A fan is a family ∆ of such cones, such that each face of a cone in ∆ is also a cone in ∆, and such that the intersection of two cones in ∆ is a face of each. A fan ∆ gives rise to an algebraic variety X(∆) by glueing together the corresponding affine pieces. We are going to work with a fan ∆ closely related to the polytopes described above.
Description of the fan ∆
Let {e_1, e_2, ..., e_n} be the canonical basis in R^n. The maximal cones of the fan ∆ are simplicial cones generated over R_+ by families of n vectors. There are (n^2 + 2n − 3) such cones, corresponding to the following families of n vectors: The fan ∆ is the set of all cones generated by sub-families of these families.
70 Beware that this crucial fact won't work for the third species of incomplete equations.
Remark. This fan could also be described as the fan of cones over the faces of a polytope dual to the polytope P occurring in the previous propositions, and it has been calculated as such; see [17], p. 26. As a matter of fact, the resulting fan does not depend upon the particular choice of t, a, b. There is a correspondence σ → u(σ) between the maximal cones of ∆ and the vertices of P: When considering several polytopes P^{(i)}, we shall write u^{(i)}(σ) for the vertex of P^{(i)} corresponding to the maximal cone σ. For a polytope Π, we shall use the Greek letter υ(σ).
Example. For n = 3, see a representation of the fan ∆ on figure 6: each triangle represents one maximal cone.
Affine open sets U_σ ⊂ X(∆)
For each cone σ of ∆, one puts
Here χ_1, χ_2, ..., χ_n are the indeterminates over the base field C. If τ ∈ ∆ is a face of σ ∈ ∆, there is a natural mapping U_τ → U_σ embedding U_τ as a principal open subset of U_σ (cf. [17], p. 18), and one can thus build a variety X(∆) by glueing all affine pieces U_{σ_1} and U_{σ_2} along U_{σ_1} ∩ U_{σ_2} = U_{σ_1∩σ_2}. In other words:
Construction of a vector bundle O(D_P) on X(∆)
If P is the convex envelope in R^n of the support of an incomplete equation of the second species, one can define a line bundle O(D_P) on X(∆) as follows. The bundle is trivial on each U_σ, i.e. C × U_σ. On the intersection of two maximal cones σ_1 and σ_2, the transition map is given by: These are isomorphisms because χ^{u(σ_1)−u(σ_2)} is a unit of A_{σ_1∩σ_2}. The compatibility of the transition maps on U_{σ_1∩σ_2∩σ_3} comes from the fact that
In other words, the sheaf of germs of sections of the vector bundle is isomorphic to the ideal sheaf generated over A_σ by χ^{u(σ)} ∈ C[χ_1, χ_1^{−1}, ..., χ_n, χ_n^{−1}] for every maximal cone σ ∈ ∆.
Demonstration. On the one hand, we must prove that χ^u is a regular section of O(D_P) over U_σ, i.e. χ^{u−u(σ)} ∈ A_σ, for every maximal cone σ and every u ∈ P ∩ Z^n. It is easily verified when u is a vertex of P using the description of the vertices on p. 38, and this is enough. On the other hand, if u ∈ Z^n satisfies (∀σ) χ^{u−u(σ)} ∈ A_σ, one must prove that u ∈ P. Suppose it is not the case. The Hahn-Banach theorem implies the existence of a hyperplane separating u from the convex polytope P: there exist v ∈ R^n and a ∈ R such that
For the cone σ containing v, this contradicts (∀σ) χ^{u−u(σ)} ∈ A_σ. Hence it is impossible that u ∉ P.
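The statement that this demonstration establishes has lost its display; for orientation we record below the standard description of global sections that the two inclusions just proved amount to. This is our own formulation, offered as an assumption consistent with the proof, not a quotation.

% Standard description of the global sections of O(D_P) on X(∆),
% consistent with the argument above (our formulation, not a quotation):
\[
  \Gamma\big(X(\Delta),\,\mathcal{O}(D_P)\big)
  \;=\; \bigoplus_{u \,\in\, P \cap \mathbb{Z}^n} \mathbf{C}\,\chi^{u},
\]
% i.e. the monomials \chi^u with exponent in P ∩ Z^n form a basis of the space of
% global sections, the geometric counterpart of Bézout's vector spaces of
% polynomials with prescribed support.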
Proposition 4
Let P^{(1)} and P^{(2)} be the convex envelopes of the supports of two incomplete equations of the second species, with P^{(1)} ∩ Z^n = E_{t^{(1)},a^{(1)},b^{(1)}} and P^{(2)} ∩ Z^n = E_{t^{(2)},a^{(2)},b^{(2)}}. Let f^{(2)} ∈ Γ(X(∆), O(D_{P^{(2)}})). Multiplication by f^{(2)} induces a natural map
Demonstration. This map is defined locally on each affine subset U_σ by: In the following, we shall write f^{(2)}
Koszul complex
One could thus define, locally on every open affine subset U_σ, a whole complex isomorphic to a Koszul complex. Let Π be the convex envelope of the support of any incomplete equation of the second species 71, and f^{(1)}, f^{(2)}, ..., f^{(r)} as above. Let L_σ be the A_σ-module defined by
71 Avoid degenerate cases where some of the vertices υ(σ) coincide.
The Koszul complex gives a sequence of maps of A σ -modules : . . .
Those maps glue with each other on the intersections U_{σ_1} ∩ U_{σ_2} into a sequence of maps of sheaves over X(∆):
Theorem 1
This sequence of maps of sheaves over X(∆) is an exact sequence.
Demonstration. We prove it locally on every open affine subset U_σ. In fact, each f^{(i)} has a non-zero constant term because χ^{u^{(i)}} belongs to the support of f^{(i)}. One can thus use the following trick by Mertens, in order to prove that f^{(1)}, f^{(2)}, ..., f^{(r)} is a regular sequence. One must prove that, for all s ≤ r, f^{(s)} does not divide zero modulo (f^{(1)}, f^{(2)}, ..., f^{(s−1)}). Suppose
Let us recall that our base field is an extension of Q, and that the coefficients of our polynomials f^{(i)} are indeterminates over Q (cf. p. 10). The constant term in f^{(i)} is such an indeterminate, call it c^{(i)}. The base field K could thus be written as k(c^{(1)}, ..., c^{(r)}), where k is an extension of Q. There is an isomorphism 72: k(c^{(1)}, ..., c^{(r)})[χ_1, χ_1^{−1}, ..., χ_n, χ_n^{−1}]
The bottom ring is a polynomial ring; it is an integral domain, and thus
72 This is inspired by Mertens [33], p. 528-529.
Theorem 2
For large enough k, the fibre bundle O(D_{kΠ}) is very ample. As a consequence, X(∆) is a projective variety embedded in P^{|kΠ∩Z^n|−1}(C). For such k, there exists N such that the following sequence is exact:
Demonstration. The existence of k and of a very ample O(D_{kΠ}) is a consequence of the non-degeneracy of Π; see [17], p. 69-70, for a proof. For such k, write O(1) = O(D_{kΠ}). A theorem of Serre states that, if F is a coherent algebraic sheaf on a projective variety, then for N large enough, F ⊗ O(N) has trivial higher cohomology. This implies that, for N large enough, the following sequence is exact:
Corollary (Bézout's theorem for the second species)
For k and N as in theorem 2, when n = r, the dimension of the cokernel of the last map 73 is
It is an upper bound on the degree of the eliminand of f^{(1)}, ..., f^{(n)}.
Demonstration. The alternate sum of dimensions of the vector spaces appearing in the exact sequence above can be expressed as the following finite difference of order n: where T, A, B are the degrees occurring in Λ^n M, i.e.:
73 i.e. the map defined by Λ^{n−1}L_σ → Λ^n L_σ over every U_σ.
Calculating |E T,A,B | is an easy combinatorial problem. One has: Let us recall the well-known formula as well as the finite difference of a product: Using these formulae enables us to calculate the quantity above and prove the result stated.
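The two identities announced here have not survived in this copy; the following display offers the standard facts of this kind that such a computation relies on, as plausible stand-ins of our choosing rather than as the original formulas.

% Stand-ins for the missing displays (assumptions): the basic lattice-point count
% and the discrete Leibniz rule for the backward difference \Delta f(x) = f(x) - f(x-1).
\[
  \#\{\, k \in \mathbb{Z}_{\ge 0}^{\,n} : k_1 + \dots + k_n \le d \,\}
  \;=\; \binom{d+n}{n},
\]
\[
  \Delta (f g)(x) \;=\; f(x)\,\Delta g(x) \;+\; g(x-1)\,\Delta f(x),
\]
% from which iterated differences of products of binomial coefficients,
% such as |E_{T,A,B}|, can be evaluated term by term.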
Proof of the statement on p. 18
For r polynomials with n indeterminates, the map (f^{(1)}, f^{(2)}, ..., f^{(r)})_{≤T,A,B} of section 6 is none other than the last map 74 of the sequence in theorem 2. For values of T, A, B ensuring the non-degeneracy of Π, and for k and N as in theorem 2, the sequence is exact, so the kernel of this map must be the image of the next map: if (φ^{(1)}, ..., φ^{(r)}) is an element of the kernel, then there exists a family of polynomials such that, in particular, q. e. d.
74 i.e. the map defined by Λ^{r−1}L_σ → Λ^r L_σ over every U_σ.
Third species of incomplete equations, n = r = 3
For polynomials f^{(1)}, f^{(2)}, f^{(3)} of the third species in three indeterminates, most of the arguments above are still valid, although there is a major problem with proposition 2. In the demonstration of proposition 2, the coordinates of the vertices
of P′ were linear in t + θ, a + α, b + β and could be written as the sums of the coordinates of the corresponding vertices of P and Π, the three polytopes having the same form; but, as we said in section 8, p. 22, there are eight different forms of polynomials of the third species. The convex envelopes of their supports are polytopes of different forms (cf. figure 4). In order to fix the demonstration of proposition 2, one is going to study a larger class of polytopes, of which the eight forms of polytopes of the third species are only degenerate forms. These polytopes are represented on figure 7. If (t, a, b) belongs to the third species, such a polytope is the convex envelope of a set E_{t,a,b,s} ⊂ Z^3 defined by:
Proposition 5
If (t, a, b) belongs to the third species, put, for 1 ≤ i ≤ 3: where the indices are modulo 3 (for example b_4 = b_1). Then E_{t,a,b,s} = E_{t,a,b}.
Demonstration. It is clear that E_{t,a,b,s} ⊂ E_{t,a,b}. Moreover, if (k_1, k_2, k_3) ∈ E_{t,a,b}, one has: Hence 2k_i + k_{i+1} + k_{i+2} ≤ min(t + a_i, b_{i+1} + b_{i+2}) = s_i, which concludes the proof.
A new fan
One subdivides the fan ∆ of figure 6, using new rays through the following vectors: The new fan is represented on figure 8; it is compatible with the new class of polytopes. Propositions 1 to 4 and theorems 1 and 2 are valid for this fan and these polytopes.
Proposition 6
If P is the convex envelope of E_{T,A,B,S}, then
Demonstration. This combinatorial formula is derived by truncation from any of the eight formulae given in section 8 for polytopes of the third species.
Remark. Conversely, by specializing S_i = min(T + A_i, B_{i+1} + B_{i+2}) in the formula above, one could also derive the eight formulae given in section 8. For example, if T + A_3 > B_1 + B_2, the corresponding term in the expression above is: as an easy computation would reveal. This term appears, as it should, in the formulae for the 2nd, the 4th, the 6th and the 8th forms.
Corollary
Analogously to theorem 2 and its corollary, when k and N are large enough, the dimension of the cokernel of the last map in the Koszul complex gives the following upper bound on the degree of the eliminand:
Demonstration. Use prop. 6 and calculate ∆_{t^{(3)},a^{(3)},b^{(3)},s^{(3)}} ∆_{t^{(2)},a^{(2)},b^{(2)},s^{(2)}} ∆_{t^{(1)},a^{(1)},b^{(1)},s^{(1)}} |E_{T,A,B,S}|.
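The mechanical part of such a computation, iterating the difference Δ_p F(P) = F(P) − F(P − p) once per equation, is easy to automate. The sketch below is ours; it takes the counting function (for instance the closed form of |E_{T,A,B,S}| from prop. 6, not reproduced here) as a black-box argument and illustrates the principle on a one-parameter toy case.

# Sketch: apply Bézout-style finite differences Δ_p F(P) = F(P) - F(P - p)
# iteratively in each of the parameter vectors p^(1), ..., p^(r).
# `count` is a user-supplied function of the parameter tuple (e.g. |E_{T,A,B,S}|);
# its closed form is assumed known separately and is not reproduced here.
from math import comb
import numpy as np

def iterated_difference(count, total_params, increments):
    """Return Δ_{p^(r)} ... Δ_{p^(1)} count, evaluated at total_params."""
    total = np.asarray(total_params, dtype=int)

    def diff(f, p):
        p = np.asarray(p, dtype=int)
        return lambda x: f(x) - f(x - p)

    f = lambda x: count(tuple(x))
    for p in increments:
        f = diff(f, p)
    return f(total)

# Toy check in one variable: with count(d) = C(d+n, n) (monomials of degree <= d
# in n variables), the two-fold difference by degrees t_1 = 2, t_2 = 3 evaluated
# at T = 5 gives t_1 * t_2 = 6, the familiar Bézout number for n = 2.
n = 2
count = lambda x: comb(x[0] + n, n) if x[0] >= 0 else 0
print(iterated_difference(count, (5,), [(2,), (3,)]))   # expected: 6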
Bézout's own formula for the third species
In his treatise, Bézout does not use the truncated polytopes that we have described above. He calculates everything under the hypothesis that all polytopes appearing in the exact sequence belong to one and the same form, among the eight forms pertaining to the third species of incomplete equations. He thus finds eight different formulae, which could be derived from the formula of the previous corollary by specializing the s_i^{(j)} to their corresponding values. One might doubt that any of those eight formulae could apply to the cases where f^{(1)}, f^{(2)} and f^{(3)} belong to distinct forms, because the 9 parameters s_i^{(j)} could each be specialized in two distinct ways (either t^{(j)} + a_i^{(j)} or b_{i+1}^{(j)} + b_{i+2}^{(j)}), which makes 256 possible outcomes. Nevertheless, calculation reveals that, for every i, the last term in square brackets in the sum above, i.e.
only takes two possible values after such specialization. Indeed, if h_i^{(j)} ≤ 0 for two or three among the three possible values of the index j, the term in square brackets vanishes identically; but if h_i^{(j)} ≤ 0 for at most one value of j, then the term in square brackets is equal to h_i. Finally, the upper bound on the degree of the eliminand is: where ε_i = 0 or 1. This coincides with the eight formulae given by Bézout in his treatise [5], § 119-127.
Comparing methods
The geometrical origin of Cayley's researches, unlike Bézout's, might obscure the identity of methods. Both scholars met with the same mechanisms of linear algebra, having to do with the same unsolved problem: the exactness of a sequence of linear maps. Could Cayley, in some way or another, have known of Bézout's treatise? He does not mention it; but he knew of Waring's Meditationes algebraicae, the second edition of which contains a very brief summary of Bézout's ideas. Cayley's method is exactly the same as Bézout's, informed by the theory of determinants, Sylvester's dialytic method, and the newborn matrix symbolism. The alternate sum of dimensions is present in Bézout's, in Waring's, and in Cayley's works.
Despite these similarities, with Sylvester, Hesse and Cayley, elimination theory is on a new track, characterized by: • The important role of projective and algebraic geometry in focusing on homogeneous polynomials.
• The annexation of elimination theory to the growing theory of invariants and the systematic search for invariants.
• The calculatory trend aiming at explicit formulas, mainly depending on determinants and using matrix algebra.
It would be misleading to see the concept of ideal where it is not. Ideals appeared in algebra at the crossroads with number theory, in a research trail starting with the Disquisitiones Arithmeticae of Gauss and running up to Kummer, Dedekind, Weber and Kronecker at the end of the XIXth century. Yet there is novelty in Bézout's treatment of "sum-equations" in 1779. This novelty and the lack of rigour in Bézout's treatise, as well as its refusal of geometry 75, had endangered its reception. These obstacles partially overcome, Hesse's and Cayley's articles definitively gave a posterity to Bézout's treatise.
The peculiar dialectic between general statements and the many generic cases was also present in Bézout's treatise, and it is both a weakness and a strength. For example, the fact that the degree of the eliminand is always less than or equal to the product of the degrees of the given equations is a general statement. A perfectly grounded universal proof of this statement had to wait until the end of the XIXth century (Serret, Schmidt, Hurwitz); but Bézout had already understood that this upper bound is the exact degree of the eliminand in the generic case of n "complete equations". He also knew of other generic cases where the exact degree of the eliminand is less than the product of the degrees and could be precisely ascertained. As proven above, the formulae found by Bézout in many cases are confirmed by the theory of toric varieties and the method of Bernshtein and Kushnirenko.
75 In this respect, Euler had clearly seen the relation between elimination and projection on the axis of a cartesian coordinate system. The problem of particular cases due to points of intersection at infinity could not be solved before the introduction of projective methods in algebraic geometry.
We use the same notations as above.
∑ dim C r−m dim C r | 2016-08-15T17:34:47.000Z | 2016-06-12T00:00:00.000 | {
"year": 2016,
"sha1": "479a339f2871dc300d7ba9847c8f4bd63e61577e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "479a339f2871dc300d7ba9847c8f4bd63e61577e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
258082375 | pes2o/s2orc | v3-fos-license | Characterization of target gene regulation by the two Epstein-Barr virus oncogene LMP1 domains essential for B-cell transformation
ABSTRACT The Epstein-Barr virus (EBV) oncogene latent membrane protein 1 (LMP1) mimics CD40 signaling and is expressed by multiple malignancies. Two LMP1 C-terminal cytoplasmic tail regions, termed transformation essential sites (TES) 1 and 2, are critical for EBV transformation of B lymphocytes into immortalized lymphoblastoid cell lines (LCL). However, TES1 versus TES2 B-cell target genes have remained incompletely characterized, and whether both are required for LCL survival has remained unknown. To define LCL LMP1 target genes, we profiled transcriptome-wide effects of acute LMP1 CRISPR knockout (KO) prior to cell death. To then characterize specific LCL TES1 and TES2 roles, we conditionally expressed wildtype, TES1 null, TES2 null, or double TES1/TES2 null LMP1 alleles upon endogenous LMP1 KO. Unexpectedly, TES1 but not TES2 signaling was critical for LCL survival. The LCL dependency factor cFLIP, which plays obligatory roles in blockade of LCL apoptosis, was highly downmodulated by loss of TES1 signaling. To further characterize TES1 vs TES2 roles, we conditionally expressed wildtype, TES1, and/or TES2 null LMP1 alleles in two Burkitt models. Systematic RNAseq analyses revealed gene clusters that responded more strongly to TES1 vs TES2, that respond strongly to both or that are oppositely regulated. Robust TES1 effects on cFLIP induction were again noted. TES1 and 2 effects on expression of additional LCL dependency factors, including BATF and IRF4, and on EBV super-enhancers were identified. Collectively, these studies suggest a model by which LMP1 TES1 and TES2 jointly remodel the B-cell transcriptome and highlight TES1 as a key therapeutic target. IMPORTANCE Epstein-Barr virus (EBV) causes multiple human cancers, including B-cell lymphomas. In cell culture, EBV converts healthy human B-cells into immortalized ones that grow continuously, which model post-transplant lymphomas. Constitutive signaling from two cytoplasmic tail domains of the EBV oncogene latent membrane protein 1 (LMP1) is required for this transformation, yet there has not been systematic analysis of their host gene targets. We identified that only signaling from the membrane proximal domain is required for survival of these EBV-immortalized cells and that its loss triggers apoptosis. We identified key LMP1 target genes, whose abundance changed significantly with loss of LMP1 signals, or that were instead upregulated in response to switching on signaling by one or both LMP1 domains in an EBV-uninfected human B-cell model. These included major anti-apoptotic factors necessary for EBV-infected B-cell survival. Bioinformatics analyses identified clusters of B-cell genes that respond differently to signaling by either or both domains.
(A) Relative mean + standard deviation (SD) live cell numbers from CellTiter-Glo analysis of n=3 replicates of Cas9+ GM12878 LCLs, transduced with lentiviruses that expressed control or LMP1 sgRNAs and puromycin selected, for 0 versus 2 days.
(B) RT-PCR analysis of CCL22 mRNA abundance in Cas9+ GM12878 post-transduction with lentiviruses that expressed control or LMP1 sgRNA and puromycin selected for 2 days, as in Fig. 1B. Values from cells with control sgRNA were set to 1, and mean fold-change of CCL22 mRNA abundance + SD in cells with LMP1 sgRNA are shown. p-values were determined by one-sided Fisher's exact test from two independent experiments, each with two technical replicates. ***p<0.001. The same RNA used for RNA-seq (Fig. 1B) was used for these qPCR experiments.
(C) RT-PCR analysis of EBI3 mRNA abundance in Cas9+ GM12878 post-transduction with lentiviruses that expressed control or LMP1 sgRNA and puromycin selected for 2 days, as in Fig. 1B. Values from cells with control sgRNA were set to 1, and mean fold-change of EBI3 mRNA abundance + SD in cells with LMP1 sgRNA are shown. p-values were determined by one-sided Fisher's exact test from two independent experiments, each with two technical replicates. ***p<0.001. The same RNA used for RNA-seq (Fig. 1B) was used for these qPCR experiments.
(D) RT-PCR analysis of IRF4 mRNA abundance in Cas9+ GM12878 post-transduction with lentiviruses that expressed control or LMP1 sgRNA and puromycin selected for 2 days, as in Fig. 1B. Values from cells with control sgRNA were set to 1, and mean fold-change of IRF4 mRNA abundance + SD in cells with LMP1 sgRNA are shown. p-values were determined by one-sided Fisher's exact test from two independent experiments, each with two technical replicates. ***p<0.001. The same RNA used for RNA-seq (Fig. 1B) was used for these qPCR experiments.
(E) Scatter plot analysis cross-comparing the significance of changes in LCL dependency factor expression upon GM12878 LMP1 KO versus the CRISPR screen significance score for selection against sgRNAs in LCL vs Burkitt dependency factor analysis (25). Shown on the Y-axis are -log10 transformed P-values from RNAseq analysis of GM12878 LCLs transduced with lentiviruses expressing LMP1 versus control sgRNA (as in Fig. 1F), versus -log10 transformed P-values from CRISPR LCL vs Burkitt cell dependency factor analysis (25) on the X-axis. Higher Y-axis scores indicate more significant differences in expression for the indicated genes in GM12878 with LMP1 vs control sgRNA. Higher X-axis scores indicate a stronger selection against sgRNA targeting the indicated genes in GM12878 LCLs versus P3HR1 Burkitt cells over 21 days of cell culture. Shown are genes with p<0.05 in both analyses.
(F) Volcano plot analysis visualizing KEGG Hodgkin lymphoma pathway gene -Log10 (P-value) on the y-axis versus Log2 transformed fold change in mRNA abundances on the x-axis of GM12878 genes in cells expressing LMP1 versus control sgRNA (as in Fig. 1F). P-value <0.05 and >2-fold change mRNA abundance cutoffs were used.
Mean ± SD of fold change plasma membrane annexin V values from n=3 independent experiments, using GM12878 with the indicated control or LMP1 sgRNA and rescue cDNA expression. Values in GM12878 with control sgRNA and no LMP1 rescue cDNA were set to 1.
(A) Heatmap analysis of CRISPR defined LCL dependency factor gene relative row Z-scores from RNAseq of GM12878 expressing LMP1 sgRNA and the indicated rescue cDNA, as in Fig. 3. The Z-score scale is shown at bottom, where blue and red colors indicate lower versus higher relative expression, respectively. Two-way ANOVA P-value cutoff of <0.05 and >2-fold gene expression cutoffs were used.
(B) Heatmap analysis of KEGG Hodgkin Lymphoma pathway gene relative row Z-scores from RNAseq of GM12878 expressing LMP1 sgRNA and the indicated rescue cDNA, as in Fig. 3. Two-way ANOVA P-value cutoff of <0.05 and >2-fold gene expression cutoffs were used.
(C) RT-PCR analysis of CCL22 mRNA abundance in GM12878 LCLs transduced with lentivirus expressing LMP1 sgRNA and induced for WT, TES1m or TES2m rescue cDNA expression for 6 days, as in Fig. 3A. The same RNA used for RNA-seq in Fig. 3A was used for these qPCR experiments.
(A) Volcano plot analysis of host transcriptome-wide genes differentially expressed in BL-41 cells conditionally induced for WT LMP1 expression for 24h by 250 ng/ml Dox versus in mock-induced cells. Higher X-axis fold changes indicate genes more highly expressed in cells with WT LMP1 expression, whereas lower X-axis fold changes indicate higher expression in cells mock-induced for LMP1. Data are from n=3 RNAseq datasets.
(B) Enrichr analysis of KEGG pathways most highly enriched in RNAseq data as in (A) amongst genes more highly expressed in BL-41 with WT LMP1 (red) vs amongst genes more highly expressed with mock LMP1 induction (blue).
(C) Volcano plot cross-comparison of Log2 transformed fold change of host mRNA levels in BL-41 cells (X-axis) versus Akata cells (Y-axis) uninduced versus induced for WT LMP1 by 250 ng/ml Dox for 24 hours. Selected genes highly WT LMP1 induced in both Burkitt contexts are highlighted in red, whereas selected genes suppressed by LMP1 in both Burkitt contexts are highlighted in blue.
(D) Volcano plot analysis of host transcriptome-wide genes differentially expressed in BL-41 cells conditionally induced for DM versus WT LMP1 expression for 24h by 250 ng/ml Dox. Higher X-axis fold changes indicate genes more highly expressed in cells with WT LMP1 expression, whereas lower X-axis fold changes indicate higher expression in cells induced for DM LMP1. Data are from n=3 RNAseq datasets.
(B) Enrichr analysis of KEGG pathways most highly enriched in RNAseq data as in (D) amongst genes more highly expressed in BL-41 with WT LMP1 (red) vs amongst genes more highly expressed with DM LMP1 induction (blue).
(C) Volcano plot cross-comparison of Log2 transformed fold change of host mRNA levels in BL-41 cells (X-axis) versus Akata cells (Y-axis) induced for DM versus WT LMP1 by 250 ng/ml Dox for 24 hours. Selected genes highly WT LMP1 induced in both Burkitt contexts relative to levels in cells with DM LMP1 expression are highlighted in red, whereas selected genes suppressed by WT LMP1 in both Burkitt contexts are highlighted in blue.
(A) Volcano plot analysis of host transcriptome-wide genes differentially expressed in BL-41 cells conditionally induced for TES1m vs WT LMP1 expression for 24h by 250 ng/ml Dox. Higher X-axis fold changes indicate genes more highly expressed in cells with WT LMP1 expression, whereas lower X-axis fold changes indicate higher expression induced for TES1m LMP1. Data are from n=3 RNAseq datasets.
(B) Enrichr analysis of KEGG pathways most highly enriched in RNAseq data as in (A) amongst genes more highly expressed in BL-41 with WT LMP1 (red) vs amongst genes more highly expressed with TES1m LMP1 induction (blue).
(C) Volcano plot analysis of host transcriptome-wide genes differentially expressed in BL-41 cells conditionally induced for TES2m vs WT LMP1 expression for 24h by 250 ng/ml Dox. Higher X-axis fold changes indicate genes more highly expressed in cells with WT LMP1 expression, whereas lower X-axis fold changes indicate higher expression induced for TES2m LMP1. Data are from n=3 RNAseq datasets.
(D) Enrichr analysis of KEGG pathways most highly enriched in RNAseq data as in (A) amongst genes more highly expressed in BL-41 with WT LMP1 (red) vs amongst genes more highly expressed with TES2m LMP1 induction (blue).
(E) Volcano plot analysis of host transcriptome-wide genes differentially expressed in Akata cells conditionally induced for TES1m vs TES2m LMP1 expression for 24h by 250 ng/ml Dox. Higher X-axis fold changes indicate genes more highly expressed in cells with TES1m LMP1 expression, whereas lower X-axis fold changes indicate higher expression induced for TES2m LMP1. Data are from n=3 RNAseq datasets.
(F) Enrichr analysis of KEGG pathways most highly enriched in RNAseq data as in (E) amongst genes more highly expressed in Akata with TES1m LMP1 (red) vs amongst genes more highly expressed with TES2m LMP1 induction (blue).
(G) Volcano plot analysis of host transcriptome-wide genes differentially expressed in BL-41 cells conditionally induced for TES1m vs TES2m LMP1 expression for 24h by 250 ng/ml Dox. Higher X-axis fold changes indicate genes more highly expressed in cells with TES1m LMP1 expression, whereas lower X-axis fold changes indicate higher expression induced for TES2m LMP1. Data are from n=3 RNAseq datasets.
(B) Enrichr analysis of KEGG pathways most highly enriched in RNAseq data as in (G) amongst genes more highly expressed in BL-41 with TES1m LMP1 (red) vs amongst genes more highly expressed with TES2m LMP1 induction (blue).
(A) Volcano plot analysis of host genes differentially expressed upon WT LMP1 induction in Akata (X-axis) versus upon LMP1 KO in GM12878 (Y-axis). Shown are Log2 transformed mRNA fold change values for Akata cells mock induced versus induced for LMP1 WT expression for 24 hours (X-axis) versus upon expression of LMP1 vs control sgRNA in GM12878 for 48 hours. Genes more highly expressed in mock-induced Akata have higher x-axis values, whereas genes more highly expressed in Akata induced for WT LMP1 have lower x-axis values. Likewise, genes with higher expression with control sgRNA expression have higher y-axis values, whereas genes with lower expression in GM12878 have lower y-axis values.
Figure S3. Characterization of TES1 vs TES2 LCL dependency factor and Hodgkin lymphoma pathway targets.
Figure S9. GM LMP1 KO and Akata Uninduced vs WT.
Figure S10. (A) K-means heatmap analysis of RNAseq datasets from n=3 replicates generated in EBV-BL-41 Burkitt cells with conditional LMP1 WT, TES1m, TES2m or DM expression induced by 250 ng/ml doxycycline for 24 hours. The heatmap visualizes host gene Log2 fold change across the four conditions, divided into six clusters. A two-way ANOVA P value cutoff of <0.01 and >2-fold gene expression were used. The number of genes in each cluster is indicated at right.
Genes more highly expressed in Akata with TES1m than WT LMP1 expression have higher x-axis values, whereas genes more highly expressed in Akata induced for WT LMP1 than TES1m have lower x-axis values. Likewise, GM12878 genes with higher expression with TES1m rescue have higher y-axis values, whereas genes with lower expression in GM12878 with TES1m than WT rescue have lower Y-axis values. P value <0.05 and >2-fold gene expression cutoffs were used.
(C) Volcano plot analysis of host genes differentially expressed upon TES2m vs WT LMP1 induction in Akata (X-axis) versus upon rescue of LMP1 KO GM12878 with TES2m versus WT LMP1 (Y-axis). Shown are Log2 transformed mRNA fold change values for Akata cells induced for TES2m versus WT LMP1 expression for 24 hours (X-axis) versus upon rescue of GM12878 LMP1 KO with TES2m vs WT LMP1 cDNA, as in Fig. 3. Genes more highly expressed in Akata with TES2m than WT LMP1 expression have higher x-axis values, whereas genes more highly expressed in Akata induced for WT LMP1 than TES2m have lower x-axis values. Likewise, GM12878 genes with higher expression with TES2m rescue have higher y-axis values, whereas genes with lower expression in GM12878 with TES2m than WT rescue have lower Y-axis values. P value <0.05 and >2-fold gene expression cutoffs were used. | 2023-04-13T13:11:40.359Z | 2023-04-10T00:00:00.000 | {
"year": 2023,
"sha1": "2b131e060253bf076d6ee5da9dea9bf237444163",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/mbio.02338-23",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac71bd10e58003e976f87b4825727f4b4eacb504",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
266586398 | pes2o/s2orc | v3-fos-license | A rare case of radiation-induced breast angiosarcoma: a case report
Abstract We describe a rare case of a 77-year-old woman with radiation-induced breast angiosarcoma (RIAS) in whom radical surgery with negative margins, without having to resort to adjuvant chemotherapy, resulted in no evidence of either local or systemic recurrence at 14 months of follow-up.
Introduction
Radiation-induced breast angiosarcoma occurs as a rare, severe and late complication in fewer than 0.3% of patients who underwent breast-conserving surgery associated with radiotherapy [1]. The literature reports a 5-year survival rate lower than 22.5% for secondary angiosarcoma [2].
The initial management of patients affected by radiation-induced breast angiosarcoma (RIAS) is complex: the disease usually presents as multifocal reddish-purple papular skin lesions that are underestimated by clinicians because of their benign appearance, and the skin changes are easily attributed to radiation, even though RIAS is frequently associated with a poor prognosis [3-5]. An incisional biopsy of the skin and underlying mass is necessary, and the treatment is surgical resection; however, the role of chemotherapy has not yet been clearly defined. Angiosarcomas are aggressive, malignant blood vessel cancers with a poor prognosis which originate from endothelial cells; they can arise spontaneously or in association with many factors [6].
Sporadic angiosarcoma of the breast is extremely rare; in contrast, radiogenic angiosarcoma of the breast is much more common among women with a history of breast irradiation, with an incidence of about 1% [7]. Spontaneous (primary) and radiogenic (secondary) angiosarcomas are morphologically indistinguishable, but there are notable pathogenetic differences. For example, Lae et al. [8] compared c-myc amplification on chromosome 8q24.21 in 32 radiogenic and 15 sporadic angiosarcoma specimens: amplification (5- to 20-fold) of the c-myc oncogene was found in all radiogenic angiosarcoma cases but in only one sporadic angiosarcoma, demonstrating a specific oncogenic pathway for radiogenic angiosarcoma. Secondary angiosarcoma can also occur in the event of chronic lymphedema after breast surgery and lymphadenectomy (Stewart-Treves syndrome), although this condition has significantly decreased in frequency owing to improved surgical techniques [9,10].
This study aims to offer details on the multidisciplinary management of the rare tumor in question, in a particular case in which adjuvant treatment was not proposed to the patient.
Case report
A 77-year-old patient was diagnosed with a left breast carcinoma in 2008. There was a family history of breast cancer: her mother was diagnosed with left invasive ductal breast carcinoma at 80 years of age, and both her sisters were diagnosed with right in situ ductal carcinoma. The patient's medical history included pharmacologically treated arterial hypertension, senile arthrosis and hiatal hernia. The patient had undergone several surgeries: cholecystectomy, hystero-adnexectomy, bladder plasty, and in 2008 she received a quadrantectomy of the upper-outer left breast quadrant with radical lymphadenectomy of the left axilla, with invasive ductal carcinoma, pT2N2G3, ER 40%, PR negative, Ki-67 12%, HER2 positive (3+), as the final diagnosis. She underwent 3 cycles of adjuvant chemotherapy with docetaxel associated with trastuzumab and adjuvant radiotherapy (50 Gy), followed by regular oncological follow-up with clinical examinations every 6 months, a chest X-ray, a bilateral ultrasound check of the breast and axilla, an abdominal ultrasound check, a bone scintigraphy and dosing of tumor markers (αFP, CEA, CA125, CA 15.3, TPA). There were no signs of either local or systemic recurrence until today. From October 2021 the patient reported a single periareolar purple cutaneous dyschromia, about 2 centimeters in diameter, in the inner quadrant transition (IQT) of the left breast; physical examination described sclerotic breast skin resulting from radiotherapy, a retracted nipple, and an ecchymotic lesion of about 2 cm in the periareolar IQT area, under which a nodular, ligneous lesion of about 2 cm in diameter, adherent to the underlying planes, could be appreciated. In December 2021 the patient underwent a mammographic exam, which described a predominantly fibroadipose mammary structure (BI-RADS: B); neither accumulations of a suspicious nature nor microcalcifications with evolutionary characteristics were appreciated, and the finding appeared superimposable on the previous exam of 2020. At the end of January 2022 the patient underwent a bilateral ultrasound check of the breast and axilla, which reported the absence of solid lesions and of axillary involvement; there was no ultrasound correspondence with the clinical findings. All that considered, the team decided to perform a punch biopsy of the lesions, whose histological examination reported the absence of epidermal involvement but the presence in the dermis of fissures and anastomosed vascular channels lined by endothelium with atypical, enlarged and hyperchromatic nuclei, positive for ERG, negative for GATA3, smooth muscle actin and E-cadherin. The morphological and immunohistochemical findings were compatible with angiosarcoma. The patient at the same time underwent an MRI of both breasts with and without contrast, in which no significant signal alterations of the right breast were detected and no evident adenopathies were described in the axillary and bilateral internal mammary chains. The Breast Unit team also decided to perform a chest and abdomen CT exam with and without contrast. The abdominal CT revealed the absence of suspicious lesions; no detectable metastatic disease was found. In the left breast there was a slight inhomogeneous fixation, especially in the lower and external quadrants, without evidence of clear hyperaccumulation of a focal nature. After evaluating the patient's characteristics and considering the outcomes presented in the literature, the Breast Unit multidisciplinary team proposed to the patient a left mastectomy with primary surgical wound closure, which was performed in February
2022. No complications occurred. The patient's general condition was good and five days later she was discharged. Radical lymphadenectomy of the left axilla had already been performed in 2008. The pathological specimen reported a 3-cm cutaneous breast angiosarcoma with negative surgical margins. Considering the complete surgical excision of the lesion, the histological characterization, the absence of systemic involvement and the patient's older age, adjuvant treatment was not proposed. At present there is no evidence of recurrence at 14 months of follow-up after surgery. The patient is continuing follow-up every 3 months for the next two years, with total-body CT, ultrasound of the soft tissues and abdomen, and an oncological medical examination.
Discussion
Radiation-induced angiosarcoma (RIAS) represents a diagnostic challenge because its initial presentation and cutaneous changes are frequently attributed to radiation [11]. For these reasons it is important to consider RIAS in the differential diagnosis of any cutaneous change of the breast in patients who underwent radiotherapy [12]. In the literature RIAS is usually described as occurring about 10 years after breast irradiation [13,14]. Early diagnosis and radical surgical treatment are potentially curative but, owing to the rarity of breast sarcomas, there are no prospective randomized trials to guide therapy [15]. Radical surgery of the tumor, either by local resection or mastectomy, is the most commonly cited treatment [16,17], and complete tumor resection is associated with an improved prognosis. In contrast to the well-established role of surgery, the value of re-irradiation and systemic chemotherapy is less clear [18,19]. It is therefore important to increase knowledge regarding the presenting symptoms and to underline the importance of following up patients who have undergone breast-conserving cancer therapy for many years afterwards. RIAS frequently has a nonspecific mammographic and sonographic appearance [20]; 33% of patients with angiosarcoma have a normal mammographic pattern [21], and for these reasons MRI plays an important role in the assessment of the extent of RIAS [22]. An incisional biopsy of the skin and underlying mass is the most accurate and fastest way to obtain a diagnosis [23]. The differential diagnosis of RIAS is primarily with radiotherapy-associated atypical vascular lesions, which present as clinically harmless flesh-colored erythematous papules or plaques. Atypical vascular lesions within an irradiation site are suggested to lie on a morphologic continuum which may progress to more aggressive malignant angiosarcoma. An important criterion for the differential diagnosis is represented by the evaluation of the proliferation index with Ki-67, which is greater than 10% in angiosarcoma. Histologically, angiosarcoma shows multilayered nuclei, atypia and mitoses. For these reasons, immunohistochemical staining for MYC can be used to aid categorization, since MYC is usually amplified in RIAS but not in atypical vascular lesions [24]. In our case, the macroscopic presentation of the tumor was a purple plaque whose cut surface showed multiple hemorrhagic areas extending into the superficial dermis (Figures 1 and 2). The microscopic exam reported the absence of epidermal involvement but the presence in the dermis of fissures and anastomosed vascular channels lined by endothelium with atypical, enlarged and hyperchromatic nuclei, positive for ERG, negative for GATA3, smooth muscle actin and E-cadherin. Hematoxylin-eosin staining revealed irregular, staghorn-branched blood vessels in the tumor lined by typically plump, atypical endothelial cells (Figure 3). Ki-67 immunostaining showed strong nuclear positivity of neoplastic cells (>20%) and positivity of epidermal basal cells (Figure 4). ERG immunostaining of the angiosarcoma revealed strong nuclear marker expression (Figure 5). The most cited treatment for breast angiosarcomas in the literature is surgical resection (mastectomy) aiming to obtain negative margins [25]. Prognosis has historically been poor, especially for RIAS, with a median survival of 25 months [26]. All that considered, RIAS still represents a challenge to the Breast Unit team in deciding the best therapeutic strategy. In our patient, only the prompt and correct diagnosis, in a multidisciplinary context
where the lesion was quickly identified and investigated, associated with a radical surgical approach to the breast affected by RIAS, as suggested by the latest evidence in the literature [27], allowed an improvement in the prognosis of our patient without having to resort to adjuvant chemotherapy, thereby avoiding all the possible complications of this treatment, which are more serious in elderly patients.
Conclusion
This case report offers details on the multidisciplinary management of the rare tumor in question, in a particular case in which adjuvant treatment was not proposed to the patient. In our patient, radical surgery with negative margins resulted in no evidence of either local or systemic recurrence at 14 months of follow-up. The literature is still poor in guidelines for managing patients affected by this rare and severe tumor; for these reasons our contribution could represent a particular case for larger studies to develop more robust data.
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent
Any patient, service user, or participant (or that person's parent or legal guardian) in any type of qualitative or quantitative research, has given informed consent to participate in the research.
Authors confirm that they have obtained written informed consent to publish the details from the affected individual.
Figure 1 .
Figure 1. Gross examination shows a small purple skin lesion with irregular margins in the periareolar region of the left breast inner quadrant transition, with no evidence of skin ulceration.
Figure 2 .
Figure 2. Gross examination shows a purple plaque with a hemorrhagic cut surface extending into the superficial dermis. At the cut surface the lesion comes close to the epidermis, with no evidence of epidermal invasion.
Figure 4 .
Figure 4. Ki-67 immunostain (40x): this picture shows strong nuclear positivity of neoplastic cells (>20%). The Ki-67 antigen is a nuclear protein associated with cellular proliferation and ribosomal RNA transcription. The Ki-67 percentage score is defined as the percentage of positively stained tumor cells among the total number of malignant cells assessed.
Figure 5 .
Figure 5. ERG immunostain (40x): strong nuclear expression of neoplastic cells. The ERG transcriptional regulator is a protein encoded by ERG (ETS family transcription factor). ERG is a highly sensitive immunohistochemical marker for vascular differentiation. | 2023-12-29T16:21:27.280Z | 2023-12-24T00:00:00.000 |
"year": 2023,
"sha1": "020369b9a8ada96d65e0bc01f5cf83faed2203cd",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23320885.2023.2296697?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7adc41d5c002d5fc6c45afcd284db7953a29bcfc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
55463215 | pes2o/s2orc | v3-fos-license | Decay data evaluation project : Evaluation of 52 Mn and 52 mMn nuclear decay data
All nuclear decay data within the 52Fe-52m,52Mn-52Cr decay chain have been evaluated at IFINHH, Romania, as part of an IAEA coordinated research project (F41029) and incorporated into the Decay Data Evaluation Project (DDEP). Both 52Fe and daughter 52Mn are two potentially promising radionuclides to be incorporated into suitable radiopharmaceuticals for PET and SPECT imaging. The decay data evaluation of 52Fe has previously been published and reported to the IAEA Nuclear Data Section. Equivalent DDEP evaluations for 52Mn and 52mMn have also been completed recently, and are presented in summary form below. These improved decay data sets have also been reported to the IAEA in detail, and are highly suitable in dose rate calculations for their application in nuclear medicine.
Introduction
Established in 1995, the Decay Data Evaluation Project (DDEP) is an international initiative with the main objective "to provide carefully produced recommended data for applied research and detector calibrations", www.nucleide.org/DDEP.htm. This is achieved by performing peer-review evaluations for the following nuclear decay data of the radionuclides of interest: half-life, decay energy, decay modes and branching fractions, and radiation energies and emission probabilities. The complete DDEP database is hosted by the Laboratoire National Henri Becquerel (CEA/LNE-LNHB) in France. 52Mn and 52m Mn are two radionuclides with potential to be used in PET and SPECT medical applications. While 52m Mn has been used to monitor myocardial perfusion by PET, 52 Mn can also be applied to dual-modality manganese-enhanced magnetic resonance imaging (MEMRI) applications, such as neural tractography and stem-cell tracking [1,2]. The aim of the present work was a complete characterization of the 52 Fe-52m,52 Mn-52 Cr decay chain. The presently reported results complement the evaluation of the 52 Fe nuclear decay data, which was recently published [3]. The work was done under the IAEA Coordinated Research Project "Nuclear Data for Charged-particle Monitor Reactions and Medical Isotope Production".
Evaluation steps and tools
The most recent ENSDF evaluation for the mass chain A=52 was studied [4]. The energies of the nuclear levels and the spin and parity values were adopted from this evaluation, while the total decay energies are from the atomic mass evaluation [5]. All relevant published experimental data for 52m Mn and 52 Mn were identified and copies stored for subsequent use (eight and seventeen references, respectively). The cut-off dates for the references were January and March 2016, respectively. Both the compilation and evaluation of the nuclear decay data sets were then undertaken. Finally, the decay scheme parameters were analyzed and tested.
Results
52m Mn decays 98.295(42)% by electron capture and β+ to excited levels of 52 Cr, and by isomeric transition (IT) to the ground state of 52 Mn (1.705(42)%) (Fig. 1). 52 Mn decays 100% by electron capture and β+ to excited levels of 52 Cr. Both radionuclides have important emissions of positrons (allowed transitions) and gamma rays over a wide energy range. The half-life values adopted represent the weighted averages of three and seven experimental values for 52m Mn and 52 Mn, respectively.
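The adopted half-lives are uncertainty-weighted means of the published measurements. A minimal sketch of the usual inverse-variance weighting is given below; the numerical values are placeholders for illustration only, not the evaluated input data.

# Sketch: inverse-variance weighted mean and its internal uncertainty, as commonly
# used when adopting a half-life from several measurements.  A full evaluation also
# compares internal and external uncertainties (e.g. via the Birge ratio), omitted here.
# The numbers below are placeholders, NOT the evaluated 52Mn / 52mMn data.
import numpy as np

def weighted_mean(values, uncertainties):
    w = 1.0 / np.square(uncertainties)        # inverse-variance weights
    mean = np.sum(w * values) / np.sum(w)
    internal_unc = 1.0 / np.sqrt(np.sum(w))   # internal (statistical) uncertainty
    return mean, internal_unc

values = np.array([5.591, 5.601, 5.587])      # illustrative half-lives (days)
uncs   = np.array([0.003, 0.005, 0.004])
print(weighted_mean(values, uncs))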
Recommended decay data
The results obtained for the main decay data of 52m Mn and 52 Mn are presented in Table 1: half-life, decay energy, and energies and emission probabilities of the different transitions. Other evaluated data: fluorescence yields: ω_K(52 Mn) = 0.289(5). The conversion electrons have very low emission probabilities, of the order of 10^−4 (K shell) and less.
Normalization factor for the γ -ray transitions
Table 1. Evaluated decay data for 52m Mn and 52 Mn.
The normalization factor (F) is used to calculate the absolute γ-ray emission probabilities from the adopted γ-ray relative intensities. The normalization factor was calculated by imposing the following two conditions. 52m Mn: 98.295(42)% of the transitions (all, with the exception of the IT transition) populate the ground state of the 52 Cr daughter: F = 0.98254(42). For 52 Mn: 100% of the transitions (with the exception of the IT decay) populate the ground state of the 52 Cr daughter: F = 0.999866(2).
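As a schematic of the arithmetic only: F is chosen so that the summed absolute intensity of the γ-ray transitions feeding the 52 Cr ground state matches the required total feeding. In the sketch below, only that imposed total (98.295% for 52m Mn) comes from the text; the relative intensities are invented placeholders.

# Schematic of how a γ-ray normalization factor F converts relative intensities into
# absolute emission probabilities (in %).  Relative intensities below are invented
# placeholders; only the imposed ground-state feeding comes from the evaluation text.
required_ground_state_feeding = 98.295          # % of decays feeding the 52Cr ground state

# Hypothetical relative intensities (strongest line set to 100) of the transitions
# that feed the ground state directly:
relative_intensities_to_gs = [100.0, 0.05, 0.02]

F = required_ground_state_feeding / sum(relative_intensities_to_gs)
absolute_emission_probabilities = [F * i for i in relative_intensities_to_gs]
print(F, absolute_emission_probabilities)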
Testing the consistency of decay schemes
According to the SAISINUC testing tools, the sum of all the energies involved in the 52m Mn decay (EC, γ , etc.), with the exception of the gamma-ray isomeric transition, is 5089.1 (36) keV, which is in very good agreement with the Q EC value: 5088.9 (19) keV. | 2018-12-13T01:53:25.392Z | 2017-09-01T00:00:00.000 | {
"year": 2017,
"sha1": "9003f7d505f88cad68aba106562cae2ccda04fbf",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2017/15/epjconf-nd2016_08003.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9003f7d505f88cad68aba106562cae2ccda04fbf",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
213573674 | pes2o/s2orc | v3-fos-license | Targeting Subsets of Mammalian Neurons
Functional dissection of mammalian neuronal circuits depends on accurate targeting of constituent cell classes. Transgenic mice offer precise and predictable access to genetically defined cell populations, but there is the pressing need to target neuronal assemblies in species less amenable to genomic manipulations, such as the primate, which is an important animal model for human perception, cognition, and action. We have developed several virus-based methods for accessing all forebrain inhibitory interneurons as well as the major excitatory and inhibitory neuron subclasses. These methods rely on the wealth of emerging single-cell transcriptome data and harness gene expression variations to refine neuron targeting. Our approach enables nuanced functional studies, including in vivo imaging and manipulation, of the diverse cell populations of the mammalian neocortex, and it represents a timely blueprint for transgenics-independent interrogation of functionally significant cell classes.
Introduction
A cell targeting strategy is shaped by how a neuronal population is defined. For us, the principal goal is to access functionally homologous neuronal circuit elements. This choice tends to restrict other neuron attributes: intrinsic properties, connectivity, and the complement of expressed genes. In other words, we do not expect or aim to capture all cells within traditional, but functionally diverse cell classes often represented by a single neurochemical marker, such as parvalbumin or somatostatin. Our methods for accessing such cell populations are based on single-cell transcriptome data and rely on interdependent adeno-associated viruses (AAVs) with different expression specificities to refine targeting. 1 An important benefit of using AAVs is that they are not pathogenic and can infect cells of many species, including nonhuman primates (NHPs).
Our AAVs are engineered with short promoters that support different patterns of gene expression. Identifying such promoters has been a major undertaking because the AAV payload is quite limited, few neuron class-specific promoters and enhancers have been described, and the relationship between a specific promoter sequence and the resulting gene expression pattern is poorly understood. As a result, promoter design is an iterative undertaking requiring extensive empirical testing.
To reduce the trial-and-error aspect of vector engineering, we have recently developed powerful methods for identifying and testing candidate regulatory elements. Along the way, we have made several important observations: (a) co-expressed genes offer multiple equally effective solutions to achieve expression specificity, (b) more genetic content is not necessarily better-a short regulatory domain can target subsets of neurons that share other key characteristics, (c) sequences conserved between species often function similarly, (d) promoter specificity can vary across brain regions as does the function of ostensibly similar neurons, (e) protein expression variability can be harnessed using intersectional approaches to refine neuron targeting, and (f ) promoter strength must be suited to neuron type and intended application.
As we often use multiple viruses in the same preparation, we have also worked hard to achieve uniform and reproducible infections. Variables, such as titer and serotype, are discussed in the following sections.
Intersectional Techniques
Functional studies have revealed that excitatory or inhibitory neurons rarely fall within neat neurochemical boundaries. 2,3 Likewise, neuronal gene expression is both promiscuous and variable, 4,5 making it difficult to match single molecular markers to functionally distinct neuron classes. We overcome this obstacle by using interdependent viruses whose promoters support different gene expression patterns. In an example of a set intersection strategy, one promoter may be active in classes A and B and another in classes B and C. Neither promoter alone is sufficient to access class B, but an intersectional strategy that relies on both promoters will successfully isolate class B. In an alternative set difference strategy, the first promoter is active in classes A and B and the second promoter is active only in class B. We can then subtract the expression pattern of the second promoter from that of the first to access only class A neurons. In these examples, overlapping endogenous gene expression, normally a hindrance to cellular marker-based genetic targeting, is instead harnessed to refine neuron targeting.
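Read abstractly, the two strategies are simple set operations on promoter activity patterns. The toy sketch below restates the logic in code; the class labels and promoter names are illustrative only, not actual reagents.

# Toy sketch of the intersectional logic: each promoter is modelled by the set of
# neuron classes in which it is active.  Labels are illustrative only.
promoter_1 = {"A", "B"}     # active in classes A and B
promoter_2 = {"B", "C"}     # active in classes B and C
promoter_3 = {"B"}          # active only in class B

# Set-intersection strategy: require activity of both promoters -> isolates class B.
targeted_by_intersection = promoter_1 & promoter_2    # {'B'}

# Set-difference strategy: express from promoter_1, subtract promoter_3 cells -> class A.
targeted_by_difference = promoter_1 - promoter_3      # {'A'}

print(targeted_by_intersection, targeted_by_difference)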
Regulatory Motifs and Domains
Short promoter DNA motifs (~10 base pairs) are known to bind transcription factors and have been implicated in the regulation of eukaryotic gene expression, 7,8 but which motifs are needed for specific expression patterns is largely unknown. We therefore set out to develop an algorithm that can mine single-cell transcriptome data to identify candidate cell type-specific DNA regulatory sequences.
Gene expression variability is usually quantified as a continuous score (fold-change, test statistic, P value) comparing biological classes. Unlike existing approaches, our de novo strategy, termed Suffix Array Kernel Smoothing (SArKS), 9 applies nonparametric kernel smoothing 10 to uncover promoter motifs that correlate with elevated differential expression scores. SArKS detects motifs by smoothing sequence scores over sequence similarity. A second round of spatial proximity smoothing extends and merges motifs to reveal multi-motif domains (MMDs) hundreds of base pairs long. The juxtaposition of such MMDs has allowed us to explore combinatorial aspects of promoter organization.
Importantly, we do not screen for the top motifs nor for the most abundant transcripts; all sequences are scored based on expression differences across chosen cell classes. In addition, SArKS neither relies on nor generates consensus sequences, so that biologically relevant sequence variations and motif context are preserved, enabling nuanced comparisons. When a particular MMD is demonstrated experimentally to improve or hinder cell type-specific targeting, its sequence is incorporated iteratively into the SArKS search algorithm to refine subsequent rounds of motif selection. The ability to assign valence to MMDs-the bias in favor of inclusion or exclusion in a particular expression pattern-is also our starting point for rational promoter design, including to achieve layer-specific expression.
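The toy Python sketch below illustrates the general idea behind this kind of suffix-array score smoothing; it is a simplified illustration rather than the published SArKS implementation, and the promoter sequences, scores, window width, and motif length are all invented for demonstration. Substrings shared by high-scoring promoters end up adjacent in the suffix array, so their smoothed scores rise above the background.

```python
# Toy sketch of suffix-array-based score smoothing, in the spirit of SArKS.
# Sequences and scores are hypothetical; the real method also includes
# significance testing and a second spatial-smoothing pass not shown here.
import numpy as np

promoters = {"geneA": "TTGACGTCATTGACGTCA",   # high differential-expression score
             "geneB": "ACGTTTTTCCCCGGGAAA",   # low score
             "geneC": "GGTTGACGTCAGGGTTTT"}   # high score
scores = {"geneA": 2.0, "geneB": 0.1, "geneC": 1.8}

# Concatenate promoters; remember which gene each position came from.
concat, origin = "", []
for gene, seq in promoters.items():
    concat += seq + "$"                      # '$' separates sequences
    origin += [gene] * (len(seq) + 1)

# Naive suffix array: suffix start positions in lexicographic order.
suffix_array = sorted(range(len(concat)), key=lambda i: concat[i:])

# Each suffix inherits the score of its source sequence.
suffix_scores = np.array([scores[origin[i]] for i in suffix_array])

# Kernel smoothing over lexicographically adjacent suffixes, so shared
# substrings from high-scoring promoters accumulate high smoothed values.
half_w = 3
kernel = np.ones(2 * half_w + 1) / (2 * half_w + 1)
smoothed = np.convolve(suffix_scores, kernel, mode="same")

# Report the suffix prefixes (candidate motifs) with the highest smoothed scores.
for rank in np.argsort(smoothed)[::-1][:5]:
    pos = suffix_array[rank]
    print(f"{smoothed[rank]:.2f}  {concat[pos:pos + 8]}")
```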
Conservation of Noncoding DNA
SArKS examines differences in gene expression across cell classes based on cell-specific transcriptome data. Such data have now been collected from genetically defined cell classes in rodents, 11,12 but not from primates. Indeed, this chicken-and-egg problem (needing cell-specific transcriptome data to be able to define and access cell classes) represents a significant hurdle in engineering vectors for NHP research. Fortunately, comparisons of distantly related vertebrate genomes have demonstrated that conserved noncoding DNA, especially in the vicinity of developmentally important genes, can support shared regulatory regimes. [13][14][15] To circumvent the lack of primate cell-specific data, we have used SArKS to identify candidate mouse regulatory domains and have then examined these domains for elevated rodent-primate sequence conservation. Our strategy is supported by the promiscuity of transcription factors, which are known to tolerate subtle sequence variations, 16,17 and it has helped us uncover human regulatory regions for accessing GABAergic and parvalbumin-expressing forebrain neurons in both rodent and primate. 1 While we and others are striving to collect transcriptome data from primate cells to aid the search for species-specific regulatory domains, we anticipate that the presence of cross-species sequence conservation within putative promoters will continue to be an important parameter when engineering viral vectors that are active in multiple species. One practical benefit of such conservation is that we can pre-screen many candidate promoters in mice.
Chromatin Accessibility
One important parameter that we consider when selecting differentially expressed genes for SArKS analysis is whether or not the chromatin is accessible in their vicinity, where cell-specific transcription factors must bind. From an experimental perspective, genomic DNA may appear inaccessible because it is epigenetically modified, blocking transcription factor binding; alternatively, a bound transcription factor can render chromatin inaccessible while enabling transcription. We filter out promoter regions that are not accessible in every cell population being compared because we wish to harness differential gene expression mechanisms supported entirely by cell-specific transcription factors. 18 Variable gene expression where the binding of a ubiquitous transcription factor is epigenetically regulated is at odds with our sequence-based strategy and cannot be reproduced when using viral vectors whose genomes are not similarly modified. However, a screen for inaccessible chromatin in the cells of interest may be a useful strategy when examining the effects of distal sequences, such as enhancers, on gene expression. 19 There, differential accessibility may indeed result from cell-specific transcription factor binding, 20 which can foster cell-specific expression. 21,22
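As a minimal sketch of this filtering step (the gene names, populations, and accessibility calls below are hypothetical, not data from the article), a candidate gene is retained only when its promoter is called accessible in every population being compared:

```python
# Sketch of the accessibility filter described above (hypothetical data).
# A candidate differentially expressed gene is kept only if its promoter
# is called accessible (e.g., from ATAC-seq peaks) in EVERY population
# compared, so that expression differences can be attributed to
# cell-specific transcription factors rather than epigenetic silencing.
accessibility = {
    # gene: {population: promoter accessible?}
    "Gad1":  {"interneurons": True,  "pyramidal": True},
    "Pvalb": {"interneurons": True,  "pyramidal": True},
    "GeneX": {"interneurons": True,  "pyramidal": False},  # silenced -> excluded
}

def accessible_in_all(gene, populations, calls=accessibility):
    return all(calls[gene][p] for p in populations)

candidates = [g for g in accessibility
              if accessible_in_all(g, ["interneurons", "pyramidal"])]
print(candidates)  # ['Gad1', 'Pvalb']
```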
Features of AAVs
In addition to gene and promoter, a common distinction among viral vectors is the serotype, the capsid that is selected during virus assembly. Because the sequence of events leading to AAV infection is not fully understood, the effect of a specific serotype on infectivity is hard to define. However, the lack of mechanistic insight has not dampened investigators' convictions about the importance of serotype. As much of this sentiment is based on personal experience, the best we can do is to offer our own.
The method for purifying a virus seems to be as important as a specific serotype. This may be because the coat proteins engaged in nuclear entry are especially sensitive to their environment as they undergo functional transformations. As a result, we generally distrust anecdotal claims about serotype potency: a serotype 9 AAV made in one laboratory may be neither comparable to a serotype 9 nor better than another serotype made elsewhere. The confusion is compounded by the multitude of promoters and protein variants the viruses encode.
When we compare serotypes, the contents are identical as is the purification scheme. Under these conditions, serotypes 1, 5, 8, and 9 injected into mouse forebrain through a craniotomy lead to similar levels of fluorophore expression within 10-14 days. Serotypes 2 and 7 are weaker. These relationships hold in NHPs, although the onset of expression is delayed. In primary cultures of rat hippocampal neurons, serotype 1 is best: serotype 5 first strongly labels glia and then neurons; serotypes 8 and 9 label neurons well, but nonuniformly-some neurons remain unlabeled, suggesting a bias that we prefer to avoid. Consequently, nearly all of our vectors are serotype 1. Subcortically in mice, serotype 1 has occasionally failed to label all neurons; in these rare instances, we have successfully used a mix of serotypes 1 + 2 or serotype 9 AAVs.
Our advice to colleagues is that dogma is much less important than experimental observations: if a particular reagent works, do not switch. Conversely, do not give up if a reagent does not work as expected-change the source or capsid or promoter. Viral vectors are not standardized and there is much we do not know about how they work, so testing several variants is often unavoidable.
Infectivity Limitations
In many instances, we achieve cell selectivity using mixes of two or more viruses. One concern is that AAVs may perform differently when used singly versus in a cocktail. To date, we have seen no evidence of altered selectivity or potency when using mixes irrespective of constituent serotypes or promoters. We do, however, address vector dilution by maximizing the vector that encodes the activity reporter or cell actuator at the expense of recombinase-bearing vectors that provide targeting specificity. Surprisingly, we have also seen little evidence of reduced selectivity at injection site edges, even with the set difference strategy. 1 We hypothesize that neurons infected by single virus particles appear unlabeled; the neurons we can score must therefore be infected by multiple viruses, preserving the regime of expression specificity.
We have also observed that a single region of NHP cortex can be re-infected repeatedly with the same or different AAV with no diminution of vector efficacy (unpublished observations and Seidemann et al 23 ). This is contrary to reports that implicate the immune response in re-infection failures. While we have not tested injected animals for the appearance of neutralizing antibodies, our findings are consistent with the abundance and low immunogenicity of naturally occurring AAVs in many mammals, including humans. Environmental contaminants and impurities associated with some AAV purification techniques, such as cell debris, leached column matrices, and salts, as well as prostheses for neuron imaging and manipulation, can irritate and injure injected tissues, causing experiments to fail.
The use of engineered retrograde AAVs that can infect axon fibers and terminals represents an anatomical restriction that can complement promoter-based cell targeting. In the one published example, the capsid amino acid modifications were identified through selection in mouse brain, but the mechanism of virus entry was not elucidated. 24 As is often the case, the resulting reagent does not generalize well outside the scope of the initial selection: whichever mouse receptor the engineered virus binds is clearly absent from many forebrain neurons, such as the hippocampal CA3 neurons, or is present only in neuron subsets, such as in the entorhinal cortex. At this early stage, it is difficult to predict how retrograde AAVs will perform in any particular system, and labeled neurons will have to be examined for evidence of bias imposed by this labeling strategy. Nonetheless, even if additional screens will be needed to improve its retrograde capability and to port it to primates, this class of AAVs adds a key functional component to cell-specific targeting.
Conclusions
Our efforts demonstrate that single rAAVs can access forebrain GABAergic neurons broadly and that interdependent viruses can be used to restrict access to specific excitatory and inhibitory subpopulations. The multi-virus techniques provide ample protein expression for nuanced functional studies of the diverse forebrain cell classes, including for in vivo imaging and manipulation studies in NHPs. The general strategies of identifying DNA sequences that are conserved between rodents and primates and of relying on combinatorial methods to refine genetic targeting offer a timely blueprint applicable to many neuron classes and species for the transgenics-independent brain-wide interrogations of functionally significant cell populations.
"year": 2020,
"sha1": "15c0caf37ce0ac972b83c6a6f46f0f7badc81fb5",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/2633105520908537",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "292fba27d92e02febde21cc3faeba9b3ea043fb2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
229174863 | pes2o/s2orc | v3-fos-license | Assimilation of Cholesterol by Monascus purpureus
Monascus purpureus, a filamentous fungus known for its fermentation of red yeast rice, produces the metabolite monacolin K used in statin drugs to inhibit cholesterol biosynthesis. In this study, we show that active cultures of M. purpureus CBS 109.07, independent of secondary metabolites, use the mechanism of cholesterol assimilation to lower cholesterol in vitro. We describe collection, extraction, and gas chromatography-flame ionization detection (GC-FID) methods to quantify the levels of cholesterol remaining after incubation of M. purpureus CBS 109.07 with exogenous cholesterol. Our findings demonstrate that actively growing M. purpureus CBS 109.07 can assimilate cholesterol, removing 36.38% of cholesterol after 48 h of incubation at 37 °C. The removal of cholesterol by resting or dead M. purpureus CBS 109.07 was not significant, with cholesterol reduction ranging from 2.75–9.27% throughout a 72 h incubation. Cholesterol was also not shown to be catabolized as a carbon source. Resting cultures transferred from buffer to growth media were able to reactivate, and increases in cholesterol assimilation and growth were observed. In growing and resting phases at 24 and 72 h, the production of the mycotoxin citrinin was quantified via high-performance liquid chromatography-ultraviolet (HPLC-UV) and found to be below the limit of detection. The results indicate that M. purpureus CBS 109.07 can reduce cholesterol content in vitro and may have a potential application in probiotics.
Introduction
Monascus purpureus is a filamentous fungus that produces a variety of secondary metabolites, including pigments, lipids, and monacolins. M. purpureus is most widely known for the fermentation of white rice to produce a deep red rice known as angkak or beni koji [1][2][3]. More commonly, the fermented product is called "red yeast rice", though Monascus species are more accurately molds. M. purpureus fermented rice is used in food preparation for flavoring, coloring, and preservation, and is also consumed in traditional Chinese medicine to improve ailments of circulation and heart health [2,[4][5][6][7]. Modern research explored the health claims, and found a plausible cause: M. purpureus can synthesize monacolins, naturally occurring compounds capable of decreasing cholesterol levels by inhibiting HMG-CoA reductase, the rate-limiting step in cholesterol biosynthesis [7,8]. The most potent of the Monascus monacolins, monacolin K, was isolated and patented as lovastatin and is widely prescribed to treat hypercholesterolemia [8]. Statins are effective treatments for high cholesterol; however, side effects, low tolerance, and the cost of the drug have led patients to pursue alternative options to lower their cholesterol levels [9]. As lyophilized red yeast rice (RYR) supplements emerged as a naturopathic alternative in the U.S., the Food and Drug Administration (FDA) restricted the applications [6,47,48].

M. purpureus was grown in malt extract media (MEA) containing 2% soluble Bacto malt extract (BD Bioscience), 2% glucose, and 1% peptone, adjusted to pH 7. Plate media contained 2% agar. Phosphate-buffered saline (PBS) solution contained 8.0 g NaCl, 0.2 g KCl, 1.44 g Na2HPO4, and 0.24 g KH2PO4 for every 1.0 L of solution and was adjusted to pH 7. Where indicated, PBS was supplemented with 6.72 g/L of yeast nitrogen base without amino acids (BD Difco) or 5 g/L ammonium sulfate, and pH was adjusted to 7 before sterilization. Bile salt supplemented media contained 0.3% (w/v) oxgall (BD Difco).
Submerged Culture Preparation
A sterilized 5 mm cork-borer was used to remove an agar plug of M. purpureus CBS 109.07 grown on MEA plate media. The agar plug was subcultured into 4 mL of liquid MEA media and incubated at 30 °C. After a four-day incubation at 30 °C and 150 rpm, a colorless, spherical fungal pellet was formed. The pellet was transferred into new media with 0.3% oxgall at 37 °C and 60 rpm for the growth curve, cholesterol assimilation, and citrinin production experiments.
Cholesterol Reagents
A stock solution of cholesterol (Lipids Cholesterol Rich from adult bovine serum; Sigma-Aldrich, St. Louis, MO, USA) at 10 mg/mL was used to prepare cholesterol assimilation assays and to prepare a 6-point calibration curve as described in Section 2.3.6 [37]. A stock solution of 5-α-cholestane (Sigma-Aldrich, St. Louis, MO, USA) at 2.5 mg/mL was used as an internal standard in the lipid extractions.
Culture Preparation for Growing, Resting, Dead, and Control Conditions
Growing, resting, and M. purpureus control conditions contained M. purpureus CBS 109.07 pellets that were homogenized using a sterilized glass douncer in a sterile 50 mL conical tube and divided into replicates. Dead culture conditions contained M. purpureus CBS 109.07 pellets that were autoclaved at 121 °C for 20 min under 15 psi pressure and transferred to fresh media. Growing and dead cultures contained 10 mL MEA media in sterile 50 mL borosilicate tubes; resting cultures contained 10 mL of PBS in sterile 50 mL borosilicate tubes; resting cultures supplemented with nitrogen sources contained 10 mL of PBS with ammonium sulfate or 10 mL of PBS with yeast nitrogen base. Cholesterol assimilation and dry weight experiments were supplemented with 0.3% oxgall and incubated at 37 °C and 60 rpm. With the exception of the M. purpureus control, all conditions were incubated with 120 µg/mL cholesterol. Media control without M. purpureus contained 120 µg/mL cholesterol and 0.3% (w/v) oxgall in 10 mL MEA.
Cholesterol Assimilation and Dry Weight Growth Curve
Cholesterol assimilation and dry weight experiments for each growth condition were prepared in triplicate, where three independent sets of cultures were homogenized and divided into six sterile 50 mL borosilicate tubes to account for six timepoints (0, 24, 36, 48, 60, and 72 h). At each designated time point, a 1.0 mL aliquot of culture supernatant was collected, centrifuged at 2000× g for 15 min, and stored in a 50 mL borosilicate glass tube with a PTFE-lined cap at −20 °C. The remaining 9.0 mL of the culture was then harvested and filtered via vacuum flask and a pre-weighed Whatman filter #1. The contents were allowed to air dry for five days and dry weight was measured on an analytical balance.
M. purpureus Dormancy Experiment
Cholesterol assimilation and dry weight experiments were prepared in triplicate, where three independent sets of cultures were homogenized and divided into five sterile 50 mL borosilicate tubes to account for five timepoints (0, 24, 48, 72, and 96 h). M. purpureus was incubated in PBS buffer, pH 7 with 0.3% oxgall and 120 µg/mL cholesterol. At designated time points, a 1.0 mL aliquot of culture supernatant was collected, centrifuged at 2000× g for 15 min, and stored in a 50 mL borosilicate glass tube with a PTFE-lined cap at −20 °C. M. purpureus samples were then washed twice under sterile conditions with 10 mL of PBS + 0.3% oxgall, and transferred to 10 mL MEA media with 0.3% oxgall and 120 µg/mL cholesterol. A 1.0 mL aliquot of culture supernatant was collected at the starting point (t = 0 h), and also at day 4 and day 7 of incubation in MEA media. At day 7, the culture was harvested and filtered via vacuum flask and a pre-weighed Whatman filter #1. The contents were allowed to air dry for five days and dry weight was measured on an analytical balance.
Resting M. purpureus Supplemented with Nitrogen Sources Experiment
Cholesterol assimilation and dry weight experiments were prepared in duplicate, where two independent sets of cultures were homogenized and divided into three sterile 50 mL borosilicate tubes to account for three timepoints (0, 24, and 72 h). M. purpureus was incubated in PBS buffer supplemented with either ammonium sulfate or yeast nitrogen base without amino acids, pH 7 with 0.3% oxgall and 120 µg/mL cholesterol. At designated time points, a 1.0 mL aliquot of culture supernatant was collected, centrifuged at 2000× g for 15 min, and stored in a 50 mL borosilicate glass tube with a PTFE-lined cap at −20 °C. M. purpureus samples were then washed twice under sterile conditions with 10 mL of PBS + 0.3% oxgall, and transferred to 10 mL MEA media with 0.3% oxgall. At day 4, the culture was harvested and filtered via vacuum flask and a pre-weighed Whatman filter #1. The contents were allowed to air dry for five days and dry weight was measured on an analytical balance.
Cholesterol Extraction
Cholesterol assimilation samples were thawed and a stock solution of cholesterol (10 mg/mL) was used to prepare cholesterol standards ranging from 10 to 150 µg/mL in borosilicate glass tubes. To each sample and standard, 20 µL of 2.5 mg/mL internal standard 5-α-cholestane was added. Direct saponification was carried out on all samples and standards based on the method described by Fletouris et al. [49]. Four milliliters of methanolic 0.5 M KOH solution was added to each tube, which was then capped and vortexed for 15 s. The samples and standards were heated for a total of 15 min in an 80 °C water bath, and removed every 5 min to vortex for 10 s. After cooling to room temperature, 4 mL of hexane was added for lipid extraction and vortexed for 1 min. After incubating at room temperature for 10 min to permit phase separation, the entire hexane layer of each sample was transferred to a clean test tube. The hexane layer was evaporated using speed vacuum at −109 °C. Dried samples and standards were resuspended in 0.6 mL of hexane and transferred to autosampler vials for gas chromatography (GC) analysis.
Gas Chromatography Methods
Cholesterol was determined using gas chromatography (Shimadzu GC-2014, Kyoto, Japan) with a flame ionization detector (FID) and an autosampler [49]. The separation was completed using an SPB-1 column (15 m × 0.32 mm i.d.; film thickness 1.0 µm) (Supelco Inc., Bellefonte, PA, USA) using helium as a carrier gas at a flow rate of 2 mL/min. The oven temperature was set at 285 °C, the injection port temperature at 300 °C, and the flame ionization detector temperature at 300 °C. The injection volume was 1 µL with a split ratio of 20:1. Matrix effects were addressed by the addition of an internal standard of 5-α-cholestane to all samples and standards. In addition, extracting the standards for each set of experiments using the same process as for the samples helped account for any errors in the preparation process. This allowed for the determination of the limits of detection (LOD) and quantitation (LOQ) for the experimental conditions of 8.31 µg/mL and 27.71 µg/mL, respectively.
Calculations for Cholesterol Assimilation
The integrated peak areas for cholesterol and the internal standard 5-α-cholestane were used to determine a 6-point calibration curve for cholesterol and used to extrapolate cholesterol recovered.
The experimental calibration curve to determine LOD and LOQ was created by combining the calibration curves from five experiments to generate a linear calibration curve with R² = 0.9832. Cholesterol assimilated and % cholesterol assimilated were calculated as follows, where Cholesterol_i represents cholesterol content recovered at t = 0 and Cholesterol_f represents cholesterol content recovered at a given time point:
Cholesterol assimilated (µg/mL) = Cholesterol_i − Cholesterol_f (1)
% cholesterol assimilated = [(Cholesterol_i − Cholesterol_f)/Cholesterol_i] × 100 (2)
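A minimal sketch of this quantitation workflow is shown below, assuming a linear calibration of cholesterol concentration against the cholesterol/5-α-cholestane peak-area ratio; all peak areas and concentrations in the example are illustrative, not values from the study.

```python
# Minimal sketch of the GC-FID quantitation workflow described above.
# Peak-area ratios and standard concentrations below are illustrative only.
import numpy as np

# 6-point calibration: cholesterol standards (ug/mL) vs. peak-area ratio
# (cholesterol area / 5-alpha-cholestane internal-standard area).
std_conc   = np.array([10, 25, 50, 75, 100, 150], dtype=float)
area_ratio = np.array([0.11, 0.27, 0.52, 0.78, 1.01, 1.54])

slope, intercept = np.polyfit(area_ratio, std_conc, 1)   # linear fit

def conc_from_ratio(ratio):
    """Interpolate cholesterol concentration (ug/mL) from a peak-area ratio."""
    return slope * ratio + intercept

def percent_assimilated(chol_initial, chol_final):
    """Equation (2): percent assimilation relative to the t = 0 content."""
    return (chol_initial - chol_final) / chol_initial * 100.0

c0  = conc_from_ratio(1.20)   # sample at t = 0 h
c48 = conc_from_ratio(0.76)   # sample at t = 48 h
print(f"assimilated: {c0 - c48:.1f} ug/mL ({percent_assimilated(c0, c48):.1f} %)")
```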
Citrinin Reagents
Citrinin, high-performance liquid chromatography (HPLC)-grade, was purchased from Sigma-Aldrich. A stock solution of citrinin at 100 ug/mL was prepared in HPLC-grade methanol (J.T. Baker) and used to construct a 7-point calibration curve as described in Section 2.4.3. Acetonitrile and water used in chromatography were HPLC grade (J.T. Baker), and trifluoroacetic acid (TFA) (Sigma-Aldrich, St. Louis, MO, USA) was analytical grade.
Culture Preparation and Extraction for Citrinin Production
A 5 mm agar plug of M. purpureus was pre-cultured at 150 rpm and 30 °C for 4 days, and transferred to 10 mL of MEA + 0.3% oxgall or PBS + 0.3% oxgall and grown at 60 rpm and 37 °C, as described in the culture preparations for cholesterol assimilation assays in Section 2.3.2. At 24 h, 72 h, and 14 days, cultures were extracted for citrinin as described in Liu and Xu, with some modifications [50]. Briefly, 10 mL cultures were dounced and extracted with 10 mL of ethanol (1:1). Samples were then vortexed for 5 min and sonicated for 20 min. Samples were spun down at 4200× g for 10 min. Supernatant was collected, dried down, and resuspended in 1 mL HPLC-grade methanol. The extraction method was validated with recovery controls, where M. purpureus grown in MEA + 0.3% oxgall at 24 h and 72 h was spiked with citrinin at 10 µg/mL and extracted as described previously.
High-Performance Liquid Chromatography Methods
Citrinin was determined using high-performance liquid chromatography, HPLC (Agilent 1100 liquid chromatograph) with a diode array detector (DAD) and an autosampler. The separation was completed using a Discovery C18 column (5 µm, 150 × 4.6 mm column) (Supelco Inc., Bellefonte, PA, USA) and an isocratic elution. The mobile phase consisted of acetonitrile:water containing 0.05% TFA and the volume ratio was 35:65 [50,51]. All samples, standards, and solvents were filtered through 0.22 µm membrane filters prior to HPLC analysis. The flow rate was 1 mL/min, and 20 µL sample was injected. The UV-DAD detection was monitored at 254 nm and 334 nm. The integrated peak areas at 334 nm for standard citrinin were used to determine a 7-point calibration curve and used to extrapolate citrinin recovered. The LOD and LOQ for the experimental conditions were 1.11 µg/mL and 3.70 µg/mL, respectively. Recovery of citrinin was determined by dividing citrinin concentration recovered by known citrinin concentration injected.
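The spike-recovery check described above reduces to a simple ratio; the sketch below uses illustrative numbers (chosen so that a 7.29 µg/mL measurement from a 10 µg/mL spike reproduces a recovery of about 72.9%) and also flags whether a measured value exceeds the reported LOQ.

```python
# Sketch of the citrinin spike-recovery check (illustrative values only).
def percent_recovery(measured_conc, spiked_conc):
    """Recovered citrinin divided by the known spiked concentration, in percent."""
    return measured_conc / spiked_conc * 100.0

lod, loq = 1.11, 3.70          # ug/mL, experimental LOD and LOQ from the calibration
measured = 7.29                # ug/mL recovered from a 10 ug/mL spike (example)
rec = percent_recovery(measured, 10.0)
print(f"recovery: {rec:.1f} %; above LOQ: {measured >= loq}")
```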
Statistical Analysis
For cholesterol assimilation and growth curves, growing, resting, and dead M. purpureus cultures and controls were conducted in triplicate for each time point. For citrinin assays, growing and resting M. purpureus cultures were conducted in duplicate for each time point. All GC-FID and high-performance liquid chromatography-ultraviolet (HPLC-UV) samples were measured in duplicate. Two-way ANOVA was carried out to examine the effect of M. purpureus × incubation time interaction on growing, resting, or dead conditions. Tukey's test was used to compare means. Significance was defined at p < 0.05 or p < 0.01. Standard deviation is calculated as either absolute error, or percent error through propagation of uncertainty from Equation (2) of Section 2.3.8. All statistical analyses were carried out using GraphPad Prism 8.0.
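The study performed these tests in GraphPad Prism; the sketch below shows an equivalent two-way ANOVA (growth phase × incubation time) followed by Tukey's comparison in Python with statsmodels, run on a small made-up long-format table rather than the actual measurements.

```python
# Sketch of the statistical workflow described above, on illustrative data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical replicate measurements of % cholesterol assimilated.
df = pd.DataFrame({
    "phase":       ["growing"] * 6 + ["resting"] * 6 + ["dead"] * 6,
    "time_h":      [24, 24, 24, 72, 72, 72] * 3,
    "assimilated": [10, 12, 11, 68, 71, 70,     # growing
                    2, 3, 2, 5, 6, 4,           # resting
                    3, 2, 4, 6, 7, 5],          # dead
})

# Two-way ANOVA: growth phase x incubation time interaction.
model = smf.ols("assimilated ~ C(phase) * C(time_h)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD comparison of means across growth phases.
print(pairwise_tukeyhsd(df["assimilated"], df["phase"], alpha=0.05))
```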
Cholesterol Assimilation
The in vitro removal of cholesterol by the filamentous fungus M. purpureus CBS 109.07 (hereon referred to in the Results section as M. purpureus) was analyzed at 37 °C in media containing 0.3% (w/v) oxgall and 120 µg/mL cholesterol. Three growth phases were assessed: growing, where the culture is active in MEA media; resting, where the culture is dormant in PBS buffer; and dead, where the culture has been heat-killed and incubated in MEA media. At the indicated time points, an aliquot of spent media was collected from three independent replicates.
After 36 h, M. purpureus removed 18.69 µg/mL or 18.78% of the cholesterol in spent media, which is a significant decrease from the initial concentration (Table 1, Figure 1, p < 0.01). The rate of cholesterol removal was most dramatic from 36 to 60 h, and cholesterol removed increased from 18.78% to 50.27% (p < 0.05). At 72 h, 69.65% of cholesterol was removed. Table 1. Cholesterol assimilated in M. purpureus CBS 109.07 at different growth phases. All cultures were incubated at 37 °C with 120 µg/mL cholesterol and 0.3% (w/v) oxgall bile salts. Cholesterol assimilated was calculated from initial cholesterol, and determined from three independent trials conducted for each growth phase at each time point, and measured in duplicate via gas chromatography-flame ionization detection (GC-FID). Standard deviation calculated is absolute error (µg/mL) and percent error (%). a, b, c Means within a row are significantly different (p < 0.01). + Means significantly different from the initial value at t = 0 (p < 0.01).
After an aliquot of spent media was collected from three independent replicates, the remaining culture was harvested to measure dry weight (Table 2, Figure 2). The growing, resting, and dead growth phases were assessed, and conditions included 0.3% oxgall and 120 µg/mL cholesterol. The M. purpureus control was grown in MEA media without cholesterol. The dry weight of growing M. purpureus was significantly different from that of resting or dead M. purpureus (p < 0.05). The presence of cholesterol did not significantly enhance or inhibit the growth of M. purpureus [62] (p > 0.05). In both resting and dead conditions, M. purpureus had no significant growth (p > 0.05).
Reactivating Dormant M. purpureus
M. purpureus incubated in PBS buffer, pH 7 with 0.3% oxgall and 120 µg/mL cholesterol does not assimilate cholesterol during a 96 h incubation (Table 3). To examine if resting conditions correspond to a dormant M. purpureus, cultures incubated in PBS were washed and transferred to MEA media with 0.3% oxgall and 120 µg/mL cholesterol. After four days of incubation in MEA, cholesterol assimilation was measured, but was not significantly different from the MEA media control (p > 0.05). After seven days of incubation, cholesterol assimilation was initiated and cholesterol content was comparable to growing phase M. purpureus (Table S1). Dry weight of rescued M. purpureus was increased compared with resting M. purpureus (Table 2). The length of time incubated in PBS before the transfer to MEA did not have a significant effect on the ability to assimilate cholesterol (p < 0.05). Table 3. Cholesterol content (µg/mL) and dry weight (mg) of M. purpureus CBS 109.07 incubated in phosphate-buffered saline (PBS) and rescued in MEA. All cultures were incubated at 37 °C with 120 µg/mL cholesterol and 0.3% (w/v) oxgall bile salts. Resting M. purpureus was washed in PBS + 0.3% oxgall and then transferred to MEA with 0.3% oxgall and 120 µg/mL cholesterol, and samples were collected after 4 and 7 days. Data were determined from three independent trials conducted at each time point. Cholesterol content was measured in duplicate via GC-FID. Standard deviation in cholesterol content is absolute error. To determine if cholesterol is catabolized by M. purpureus as a carbon source, resting phase cultures in PBS were incubated with a nitrogen source, either ammonium sulfate or yeast nitrogen base without amino acids, at concentrations found in minimal media. We observed that resting M. purpureus in PBS supplemented with nitrogen sources had no significant cholesterol assimilation throughout a 24 and 72 h incubation (Table 4, p > 0.05). Table 4. Cholesterol content (µg/mL) of M. purpureus CBS 109.07 incubated in PBS with nitrogen sources. All cultures were incubated at 37 °C with 120 µg/mL cholesterol and 0.3% (w/v) oxgall bile salts. Cholesterol content was determined from two independent trials conducted for each growth phase at each time point, and measured in duplicate via GC-FID. All means were not significantly different from the initial value at t = 0 (p < 0.05). Standard deviation in cholesterol content is absolute error. Fungal samples were then washed twice and transferred to MEA + 0.3% oxgall. The dry weight at day 4 in MEA of M. purpureus previously incubated in yeast nitrogen base was significantly increased from the initial weight before rescue (Table 5, p < 0.05), supporting the observation that dormant M. purpureus can be reactivated. Time corresponds to duration incubated in PBS prior to rescue in MEA media. * Initial weight at t = 0. + Means significantly different from the initial weight at t = 0 (p < 0.05).
Citrinin Production in M. purpureus
The production of the mycotoxin citrinin was measured in M. purpureus grown under the conditions used in the cholesterol assimilation assays, where a 5 mm agar plug of M. purpureus was pre-cultured in MEA at 150 rpm and 30 °C and then transferred to fresh media with oxgall and incubated at 60 rpm and 37 °C (Table 6). Samples collected at 24 h and 72 h did not have citrinin production above the limit of detection. The culture broth in these samples was also colorless. After 14 days, M. purpureus grown in MEA + 0.3% oxgall began to produce red pigment and the culture broth turned reddish. M. purpureus in resting phase did not become pigmented. Though 14 days is outside the incubation period for this study's cholesterol assimilation experiments, we extracted the red cultures, and measured 6.77 ± 1.02 µg citrinin per mL of culture broth. To validate the extraction methods and show effectiveness, a set of M. purpureus cultures grown in MEA + 0.3% oxgall at 24 h and 72 h were spiked with citrinin at 10 µg/mL and extracted. Recovery of citrinin was 72.9 ± 3.8%. Table 6. Citrinin production under experimental conditions. Citrinin concentration (µg/mL) of M. purpureus CBS 109.07 under experimental conditions of growing and resting phases was measured at 24 and 72 h via HPLC-UV. Two independent trials were conducted for each time point. Standard deviation in citrinin content is absolute error.
Media Conditions | After 24 h 1 | After 72 h 1 | After 14 Days
MEA + 0.3% oxgall | ND | ND | 6.77 ± 1.02
PBS + 0.3% oxgall | ND | ND |
1 ND: not detected, with a peak below the limit of detection (LOD) of 1.11 µg/mL.
Discussion
As a natural source for monacolins, fermentation products of Monascus purpureus are widely used as alternative treatments for hypercholesterolemia. To the best of our knowledge, our data are the first to show that a strain of M. purpureus is capable of a cholesterol-lowering mechanism separate from its ability to produce monacolins and other secondary metabolites. We observed that actively growing M. purpureus CBS 109.07 can assimilate cholesterol in vitro, and after 48 h of incubation at 37 °C under high bile salt conditions, 36.38% of cholesterol content was removed. The removal of cholesterol by resting or dead M. purpureus CBS 109.07 was not statistically significant, and cholesterol was not catabolized as a carbon source. When resting cultures were washed and transferred to MEA media, M. purpureus CBS 109.07 became active and cholesterol assimilation and growth were observed. Citrinin production of M. purpureus CBS 109.07 incubated in growing or resting phase conditions at 24 h and 72 h was below the limit of detection, and we note that CBS 109.07 produced citrinin under our experimental conditions at day 14, when red pigment production was observed.
The ability of microorganisms to remove cholesterol in vitro from growth media is an indicator of probiotic potential, and the range of reduction percentage is wide and dependent on strain. We note that, as with any therapeutic dosage, the concentration of microorganisms present will play a major role in cholesterol assimilation percentage. Miremadi et al. tested strains of Lactobacilli and Bifidobacteria and found 14 strains capable of removing cholesterol with a range of 34-65% assimilation after 24 h [41]. Eukaryotes capable of lowering cholesterol include strains of S. boulardii, S. cerevisiae, and I. orientalis, which after 48 h of incubation were observed to assimilate 90.6%, 96.8%, and 88.1% of cholesterol, respectively [37]. Strains of P. kudriazevii, Galactomyces sp., and Y. lipolytica were observed to assimilate 45.7%, 36.3%, and 30.9% of cholesterol, respectively, after 48 h of incubation in Chen et al. [45]. In the same study, the commercially available yeast probiotic S. boulardii lowered cholesterol by 36.5% at 48 h, and 41.5% at 72 h. In this study, actively growing M. purpureus CBS 109.07 was comparable to S. boulardii and was able to lower cholesterol from the media by 36.38% at 48 h, and 69.65% at 72 h (Table 1). We note that at higher aeration and agitation, M. purpureus CBS 109.07 was able to assimilate a higher percentage of cholesterol (Table S2). When M. purpureus CBS 109.07 is resting or dead, cholesterol removal is not significant (Figure 1) and ranged from 2.75-9.29% removal after 72 h of incubation (Table 1). Other studies observed similar trends where resting and heat-killed cultures did not significantly reduce cholesterol [41,[63][64][65], suggesting a mechanism where actively growing strains are more efficient at removing cholesterol.
To eliminate the possibility that cholesterol assimilation by M. purpureus was an artifact of starvation and the uptake of available carbon sources, we allowed cultures to incubate undisturbed until the entire culture was collected at the designated time point. This procedure differed from other cholesterol assimilation studies, where one-tenth of the culture volume was removed at each time point and, thus, could significantly impact nutrient availability [38,39,41,63,[65][66][67]. We note that in our methods, the presence of cholesterol at 120 µg/mL did not enhance or inhibit the growth of M. purpureus CBS 109.07 as measured by dry weight (Figure 2).
We investigated the ability of M. purpureus CBS 109.07 to transition out of microbial dormancy after one to four days of incubation in PBS. Resting phase cultures incubated in PBS with 120 µg/mL cholesterol did not show significant cholesterol assimilation (Table 3). When washed and transferred to MEA media with 120 µg/mL cholesterol, previously resting phase M. purpureus CBS 109.07 cultures were able to restore cholesterol assimilation and growth at day 7 of rescue (Table 3) at levels comparable to growing cultures (Table S1, Table 2). There was no significant difference in reactivation of cholesterol assimilation between cultures that were incubated for one day or four days in PBS. We also observed that M. purpureus CBS 109.07 incubated in PBS with cholesterol and supplemented with nitrogen sources showed insignificant cholesterol assimilation between 24 to 72 h (Table 4), and were able to grow after transfer into MEA media for four days (Table 5). These results on resting phase reveal that M. purpureus CBS 109.07 incubated in PBS is indeed dormant and that cholesterol is not significantly taken up as a carbon source during dormancy. Additionally, the absence of cholesterol assimilation in PBS with nitrogen sources supports the assertion that M. purpureus CBS 109.07 does not metabolize cholesterol (Table 4). We posit that such an absence of cholesterol removal may reflect a mechanism where growing M. purpureus CBS 109.07 is more efficient at assimilating cholesterol.
Microorganisms can utilize the cholesterol-lowering mechanisms of active assimilation and passive adhesion to decrease host absorption of intestinal cholesterol [21,36,38]. Our results suggest that M. purpureus CBS 109.07 is capable of an in vitro active assimilation mechanism by growing cells. The dense pellet morphology of M. purpureus CBS 109.07 has made it difficult to measure the cholesterol content of the cell membrane, as similarly noted in biosorbent studies on other filamentous fungi such as Aspergillus niger and Penicillium sp. L1 strains [55][56][57]. Follow-up experiments will be conducted to lyse the Monascus membrane and examine membrane cholesterol content. Other cholesterol-lowering mechanisms by probiotic microorganisms include modulation of lipid metabolism and deconjugation of bile salts [36]. M. purpureus is capable of directly modulating lipid metabolism, as it synthesizes monacolins that directly inhibit HMG-CoA reductase, the committed step of cholesterol biosynthesis in the liver [7,8]. In future studies, we will assay M. purpureus CBS 109.07 for bile salt hydrolase (BSH) activity, the enzyme responsible for the deconjugation of bile salts found in many probiotic strains [68,69].
Like many strains within the Monascus, Aspergillus, and Penicillium genera, M. purpureus CBS 109.07 can biosynthesize citrinin, with levels highly dependent on the growth conditions and amount of microorganisms used. In this study, the citrinin production in M. purpureus CBS 109.07 under growing and resting phase conditions replicated from our cholesterol assimilation experiments was below our limit of detection of 1.11 µg/mL (Table 6). Our results at 24 h and 72 h are consistent with published studies on other M. purpureus strains which measured citrinin production in different growth conditions over time. These studies observed delays in citrinin production, with detection of citrinin beginning as early as day 5 or as late as day 10 [70][71][72]. Notably, the commencement of citrinin production corresponded to the commencement of red pigment production, and increases in agitation and aeration increased citrinin production [72,73]. Studies on M. purpureus CBS 109.07 in particular did not measure citrinin at early time points. However, using thin-layer chromatography (TLC), they reported the citrinin level to be 5 µg/mL after a 14-day incubation in glucose media and unspecified agitation [30], and 65 µg/mL after a 7-day incubation in ethanol media and 220 rpm agitation [74,75]. We used HPLC-UV to quantify citrinin production after a 14-day incubation in MEA + 0.3% oxgall at 60 rpm. At day 14 under our conditions, the culture broth began to turn reddish, and we measured a citrinin concentration of 6.77 µg/mL [30,[74][75][76]. The differences in citrinin production between CBS 109.07 studies highlight how critical growth conditions are to the control of citrinin levels in Monascus strains [77,78]. In future studies, we can target the citrinin issue, as many studies have successfully eliminated or reduced the levels of citrinin by disrupting the citrinin biosynthetic genes pksCT or ctnA in M. purpureus [70,[79][80][81].
To be beneficial for human health, probiotic microorganisms must be capable of surviving transit through the human gastrointestinal tract. Absorption of dietary cholesterol into the bloodstream occurs predominantly in the duodenum of the small intestine, where the pH varies from pH 6 to 7 and bile salts excreted from the bile duct assist in solubilizing cholesterol [82,83]. M. purpureus CBS 109.07 was cultured in media at physiological temperature and pH, and with a high bile salt concentration and low aeration and agitation. However we recognize the limitations of an in vitro study in reproducing gastrointestinal conditions. Additionally, the clinical safety of M. purpureus CBS 109.07 needs to be established-a potentially complicated issue if the restrictive regulations on Monascus pigments and red yeast rice supplements by the FDA and European Food Safety Authority are any indication [4,5,31,84]. In the current study, we are only beginning to raise the possibility of an application for M. purpureus CBS 109.07 in probiotics; we recognize that additional safety and gastrointestinal survival experiments are required, and that such advancement in understanding Monascus biochemistry may improve the restrictions on their usage in the US and EU [85]. We also recognize that other candidate strains of M. purpureus or other Monascus species may be found [86], and that CBS 109.07 may not be unique or exemplary. However, we note that the human consumption of M. purpureus CBS 109.07 has precedents, as several food-grade studies have considered CBS 109.07 an edible filamentous fungus and used it as the representative Monascus strain in human and animal food applications of mycoprotein [6,47,48].
Conclusions
Our findings demonstrate that M. purpureus CBS 109.07, which can biosynthesize statin-like monacolins, can also reduce cholesterol content in vitro via a mechanism of cholesterol assimilation at 37 °C with a high concentration of bile salts. The most effective removal of cholesterol occurred in growing M. purpureus CBS 109.07 cultures, while non-growing M. purpureus CBS 109.07 minimally adhered to cholesterol and did not metabolize cholesterol. Dormant cultures, once transferred from buffer to nutrient-rich media, were able to resume cholesterol assimilation at levels observed in active cultures. Citrinin production under our experimental conditions was not detected. Our results show that it is valuable to continue examining the cholesterol-lowering potential of active M. purpureus CBS 109.07 cultures, as further research may provide insight into the treatment of hypercholesterolemia and will draw attention to the significance of filamentous fungi in human health and nutrition.
"year": 2020,
"sha1": "2f23d02507459558b8a2c633882daddb811b9a06",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2309-608X/6/4/352/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8bcdd711c901c8c4b2247acad81e771e0c3ad147",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
214710661 | pes2o/s2orc | v3-fos-license | Design Thinking in STEM Education: A Review
Design thinking plays an important role in cultivating creative and innovative ideas. Design and design thinking are regarded as vital traits in STEM education. Design ability is important in shaping students' mindsets for future challenges and for advancements in technology. STEM education strengthens learning and helps to foster creativity, critical thinking, design thinking, problem solving, and innovative skills. In this review, we focus on the factors and theories regarding design thinking and on the role of STEM education in fostering design thinking and innovative ideas. Design is not confined to just a few academic and professional subjects; design thinking is useful in nearly every area of life. Different research studies and their findings regarding the importance of design thinking in STEM education are discussed.
Introduction
Design thinking is not an isolated skill; rather, it draws on multiple factors. Design is usually associated with a few professional and engineering branches such as architecture, technology, engineering, and fashion. These professions rest on extensive experience, years of training, and accumulated practice, and this training and practice lead to useful end products. Most school subjects such as science and mathematics are not viewed as design subjects; design is instead associated with subjects such as art and vocational training. Traditionally, it is believed that the methods and mechanisms of science and mathematics are vital for acquiring skills and knowledge (Banilower et al., 2013). Unfortunately, little or no attention is given to design thinking and design activities in most traditional school systems. Design activities help to create new procedures, structures, and mechanisms, which are mostly associated with professionals rather than students. For engineering and technology-based fields, designing is the key activity (Dasgupta, 2019; De Vries, 2018), and it is important to learn design and a design mindset in these fields. Therefore, the inclusion of technology and engineering subjects in school education opens new and untapped prospects for what a student can learn. Design activities and the development of a design-based mindset can benefit students in a number of ways (National Research, 2009). Design thinking nurtures existing traits in students, such as problem solving and crisis management, and helps in the development of new ingenious, creative, and innovative skills. The prominence of design thinking has been endorsed by school education systems due to recent developments emphasizing the importance of STEM education (Bybee, 2014).
Traditionally, more importance has been given to science and mathematics in existing education systems, as identified by the International Association for the Evaluation of Educational Achievement (IEA). In fact, science and mathematics are the major subjects of the K-12 education movement. It is believed in some educational circles that science and mathematics are not the proper subjects through which to develop design and design thinking abilities in students; responsibility for such skills is therefore left to engineering and technology subjects. Such beliefs need to change. It is expected that the inclusion of technology and engineering in K-12 education, along with the integration of science and mathematics, will bring students new skills. These new skills will include design and design thinking abilities, owing to the successful merger of science, technology, engineering, and mathematics (STEM) education. STEM education has been widely advocated to prepare students for technological challenges and to cope with rapid technological advancements in almost every field of life. The aim is to train students for future challenges by building new skill sets through STEM education (Li et al., 2019; MacIsaac, 2019). Basic school-level education is an important milestone for children, who are open to numerous ideas and hold huge potential. STEM education has what it takes to nurture these fresh minds for the diversity and technological advancements of the modern era. Nurturing and encouraging children through STEM education will help them to take an active part in shaping future technologies and overcoming future challenges. For the successful integration of STEM education, teachers must have the essential tools to assist children in developing useful skills and bolstering their learning abilities. Incorporation of STEM education into the curriculum, cultural activities, teaching techniques, and the daily routine of the school is essential to impart these useful skills (Kennedy & Odell, 2014).
Design thinking is not limited to academia and professional fields; it can be helpful in many aspects of life. We all incorporate design thinking into our daily routines, whether in travel planning, styling, fashion, or home decoration, and in academia when designing research studies, experiments, curricula, and instructional manuals. Designing also means pinpointing an issue and devising a solution through problem-solving skills. Experts note that children have a natural ability and eagerness to design, to make things, and then to take them apart to see how they work, which suggests that everyone is creative and that everyone can design and engineer things at a basic level (Cunningham, 2009; Cunningham & Lachapelle, 2014). Therefore, it is important to pay close attention to children's design abilities and ideas in order to support and stimulate their creative ideas and design thinking. This will strengthen learning and help foster creativity, design thinking, critical thinking, problem solving, and innovative skills (Cohen & Waite-Stupiansky, 2019). This review includes different studies and theories related to the successful integration of STEM education for developing design thinking across school disciplines and the curriculum.
Design Thinking Models
Design thinking is not a new concept in fields such as engineering, but different fields interpret the concept differently (Dym, Agogino, Eris, Frey, & Leifer, 2005). For example, in business and management studies, design thinking is associated with critical thinking and careful planning for creative and innovative ideas. In engineering, design is perceived as a routine matter. In the educational sector, design-related activities are treated as theoretical activities expected to produce documentable results. Vague conceptions of design and design thinking have added confusion and difficulty to efforts to operationalize the concept in curricula and educational activities, and these confusions have created problems even in the field of engineering (Cobb, Confrey, DiSessa, Lehrer, & Schauble, 2003; Wrigley & Straker, 2017). In engineering studies, trends have shifted from Simon-influenced engineering models (Simon, 2019) to project-based learning and various technical courses (Dym & Brown, 2012). To evaluate and understand the concept of design thinking in engineering and other fields of study, various methods and approaches have been proposed. These approaches include design process modeling (Dym & Brown, 2012), comparing different expert approaches (Ahmed, Wallace, & Blessing, 2003), pinpointing design thinking skills and planning tactics (Wendell, Wright, & Paugh, 2017), considering different cognitive skills and features (Sweller, van Merriënboer, & Paas, 2019), and evaluating design team processes and strategies (Hu, Du, Bryan-Kinns, & Guo, 2019). Previous research on design thinking has provided detailed evaluations focusing on multiple aspects. Razzouk and Shute (2012) explained that experts in design thinking exhibited increased efficiency due to experience-based knowledge and a solution-focused approach to the underlying problem. Such results help in developing design thinking expertise and in guiding learners through the various stages of design thinking.
School education differs from professional studies such as art, fashion, engineering, and architecture, as these professional studies impart very specific skill sets. Design thinking is identified as a vital shaping ability and learning tool (McFadden & Roehrig, 2019; Wrigley & Straker, 2017). It also aids in the selection of an efficient framework for school education, along with the proper integration of STEM education (T. R. Kelley & Knowles, 2016). Design thinking is not only attributed to professional engineering and to the design of school education frameworks; it is also regarded as a generalized cognitive process involving creativity, experimentation, feedback, and redesign, spanning different fields of study (Ahmed et al., 2003; Strimel, Kim, Grubbs, & Huffman, 2019).
In order to train students in design thinking, the concept needs to be explored through various formal and informal educational and design activities. In the past, design thinking has been studied through different professional design courses and practices. Johansson-Sköldberg, Woodilla, and Çetinkaya (2013) termed this thinking process 'designerly thinking'. They summed up the associated studies and theoretical explanations into five categories related to design and designerly thinking: the initiation of scientific investigation, reflexive practices, problem-solving activities, a pathway for reasoning about different aspects, and meaning creation. They reserved the phrase "design thinking" for design practices beyond the profession, used by people lacking background knowledge in design, such as people working in management. Design thinking is therefore regarded as a much simpler form of 'designerly thinking', and it is practicable for students and educational activities in schools (Johansson-Sköldberg, Woodilla, & Çetinkaya, 2013). To identify the main features of design thinking for detailed study, Razzouk and Shute (2012) defined design thinking as an analytic and creative tool that gives a person the liberty and opportunity to experiment, create models, obtain feedback, and redesign. These insights provide valuable information regarding design thinking that transcends the boundaries and limitations set by the various design disciplines (Razzouk & Shute, 2012). Li et al. (2019) explained that design thinking is an effective way to develop thinking models and prototypes in educational activities, which will contribute to preparing students for modern challenges and problems.
Design thinking is a fairly new concept for educational systems, and its importance is gaining recognition across the globe. Still, there are many unanswered questions and challenges, such as how to effectively understand students' design thinking processes and development in order to design curricula that help students in this thinking process more efficiently (Wrigley & Straker, 2017). This is also an opportunity for research to gain detailed, in-depth knowledge of the factors contributing to design thinking ability. Experimental studies can contribute significantly to understanding various aspects of students' design thinking capabilities and development (Dasgupta, 2019).
STEM Education in Development of Design Thinking
How planning and design thinking can and should be taught or applied has been a significant issue in various technical and professional fields, and several models have been proposed for purposes such as teaching and evaluation (Kretzschmar, 2003; Wright & Wrigley, 2019). Wrigley and Straker (2017) proposed a detailed instructional plan and a stepping stone for investigating what course content should be taught and how it should contribute to the design thinking process through evaluation and new learning strategies. They gathered and evaluated 51 design thinking-related courses in business, art, management, engineering, innovation, and other fields from around 28 universities globally. This thorough evaluation led them to suggest five pedagogical stages for progress in design thinking, covering every level from lower- to higher-order thinking and skills; these stages are termed the foundation, product, business, and professional stages (Wrigley & Straker, 2017). Going a step further, they mapped these stages onto Biggs' Structure of Observed Learning Outcome (SOLO) taxonomy (Biggs, 1996): comprehension of knowledge, application, detailed analysis, synthesis, and final assessment. The model was proposed after a detailed analysis of professional design thinking courses at these universities, and its stages are intended to support the different phases of design thinking and to further enhance students' design thinking ability.
Role of Engineering for Design Thinking in STEM Education
Studies have shown that during school education, students can learn design thinking and refine their design thinking process through the introduction of STEM education. The introduction of engineering subjects and courses into school curricula has enabled educational experts and researchers to gain deeper insight into the effects of engineering on shaping students' design thinking. Engineering subjects can captivate students and further help them learn and develop design thinking within STEM education (English & King, 2015; Kelly & Cunningham, 2019; McFadden & Roehrig, 2019). Kelley and Sung (2017) investigated how engineering subjects help grade 5 students learn science. Students were found to spend more time on computational thinking in the protocol sessions; on average, each student spent 34% more time on computational thinking when presented with a math-embedded design problem. Pre- and post-session tests showed significant improvement on science-related problems. Most students were able to understand the concepts used in the science problems, although a few struggled to apply them to similar new situations. The results showed that students can learn problem-solving and design decision skills through the successful integration of engineering courses into STEM education; teachers should therefore be encouraged to use engineering design as a tool to improve students' thinking in science and to extend this reasoning and problem solving beyond the designed tasks (T. Kelley & Sung, 2017). Kelly and Cunningham (2019) explained how engineering design thinking provides useful tools for collective sense making, reasoning with facts and evidence, and learning. Including engineering subjects aids the learning of science concepts because science-based concepts are used frequently in engineering problems. Physical, symbolic, and discursive artifacts for learning are among the identifiable epistemic tools drawn from the engineering curriculum. These epistemic tools support the development of models and prototypes, trade-offs between constraints and criteria in engineering design problems, and communication through conventional verbal, written, and symbolic models, and they were found to shape students' learning and design thinking. The importance of these tools was explained by comparing and connecting what these practices aim to accomplish in terms of knowledge learning (Kelly & Cunningham, 2019). Moreover, the epistemic tools institutionalized in engineering were highlighted for use in school education; there is potential to identify, investigate, and compare further epistemic tools from the various STEM disciplines, and new tools can further enhance students' learning and design thinking development. STEM education is not culturally neutral, and the role of culture in the design thinking process and in various epistemic practices cannot be ignored; culture can further strengthen the design thinking process and increase learning in STEM education in diverse classroom environments (Early Childhood, 2017).
Rather than focusing solely on the use of design in engineering practice, some researchers have aimed to develop generic design practices in teaching to help students learn in STEM and STEAM (including Art) education (Chen & Lo, 2019; English, 2019). English (2019) reported a four-year longitudinal study of a 4th-grade class that included a design thinking and problem-solving activity spanning all four STEM disciplines. Focusing on a shoe design task, students used initial problem data to examine and analyze information about shoe types, fabrics, size ranges, and foot lengths. Scientific knowledge from the science curriculum was used to gain insight into the materials being used, and further information about shoe design and manufacturing was gathered. Groups of students were then given shoe design tasks. The experiment illustrated students' ability to learn from basic research on a topic and then to use that knowledge as beginner designers in initial design, redesign, and final design selection (English, 2019). Comparable frameworks, processes, and levels of education design for the design development process are well reported in the research literature (Wrigley & Straker, 2017). Along with describing the design development stages and process, English also described students' awareness of STEM knowledge and of how to use it when needed; students were able to make knowledge-based decisions and give corresponding explanations. The design activities reported in these studies are well structured, with specific goals and teaching support in mind. To obtain positive results for design-related activities in STEM education, appropriate teaching and instructional methods must be employed, and more work is required to overcome certain mechanistic and teaching limitations so as to better support students in developing design thinking.
Design Thinking and Integrated STEM Education
Research studies have explained the importance and mutual benefits of design thinking and integrated STEM knowledge, as well as students' ability to learn and develop design thinking through the successful integration of STEM curriculum design practices (English, 2019; Fan & Yu, 2017). Fan and Yu (2017) conducted an experimental study comparing the learning outcomes of high school student groups in an engineering-oriented STEM module and a technology education module. With careful control over course content and other important aspects, students' abilities were analyzed over a period of 10 weeks. The STEM engineering students outperformed the technology education students in conceptual knowledge, higher-order design thinking, and engineering design project activities. Further analysis revealed the main differences in how design thinking practices were applied in the two modules (Fan & Yu, 2017). This study also demonstrated the positive and practical effects of integrated STEM knowledge in high school education. While English's study briefly described the benefits of integrated STEM education for curriculum and teaching, Fan and Yu went a step further by designing an engineering design experiment comparing education modules across groups of students, and similar benefits for students' learning and development of design thinking were reported (English, 2019).
Design Thinking in Mathematics and Science
Design thinking can be learned not only through engineering and technical knowledge but also through subjects such as mathematics and science, which form the basis of engineering practice; indeed, design is not a new term in the education sector (Burkhardt & Schoenfeld, 2003; Cobb et al., 2003). Students should be engaged in the design thinking process, idea generation, and critical thinking rather than simply being presented with facts and known procedures. Mathematics is often perceived as different from, and less experimental than, the other STEM subjects (English, 2019), so views, instructional methods, and learning processes need to be adjusted in its case. These changes can be made by applying project-based learning (PBL), as used in STEM education, to mathematics as well. Several studies focus entirely on elementary mathematics teaching, with detailed analysis through investigations, projects, and activities. Orona, Carter, and Kindall (2017) investigated how standard measuring units can be integrated into design thinking activities. Their study applied a multiphase engineering design problem to problem-solving activities in 2nd-grade mathematics. Students engaged in question answering, brainstorming, improving, learning, and sharing, and were asked to relate hand measurements to different body and facial features. The process efficiently integrated engineering design practices with standard measuring units (Orona, Carter, & Kindall, 2017).
Conclusion
From the above findings, it is clear that design thinking must not be confined to a few professional or academic fields. The importance of design thinking, and the role of STEM in nurturing it, cannot be denied. Introducing school children to science and technology helps them develop design thinking ability and design ideas, and school education systems should change the way science and mathematics are taught and learned in line with design thinking. Numerous studies show how STEM education can provide students with the tools and opportunities to learn design thinking and to face future technological challenges and rapid advancement. STEM education can further provide students with opportunities in diverse environments covering all professional disciplines, and it employs professional design ideas to nurture and prepare young minds for future innovation. Design thinking nurtures existing traits such as problem solving and crisis management in students and helps develop new ingenious, creative, and innovative skills.
This review paper explains design thinking and its different models through a number of studies relevant to school education. Although the existing studies are mostly limited to professional disciplines and engineering, design thinking is now gaining due recognition in school education and in shaping young minds. Design thinking, together with the integration of STEM education, can provide a sound foundation for developing new structures and models in the current education system. The implementation of STEM education from the perspective of design thinking has been discussed in detail in this review. Educational institutions have started to accept the importance of design thinking and STEM education; however, more work is required to overcome certain mechanistic and teaching limitations to better assist students in the design thinking development process.
A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5
Wildfire is one of the most significant dangers and the most serious natural catastrophe, endangering forest resources, animal life, and the human economy. Recent years have witnessed a rise in wildfire incidents. The two main factors are persistent human interference with the natural environment and global warming. Early detection of fire ignition from initial smoke can help firefighters react to such blazes before they become difficult to handle. Previous deep-learning approaches for wildfire smoke detection have been hampered by small or untrustworthy datasets, making it challenging to extrapolate the performances to real-world scenarios. In this study, we propose an early wildfire smoke detection system using unmanned aerial vehicle (UAV) images based on an improved YOLOv5. First, we curated a 6000-image wildfire dataset using existing UAV images. Second, we optimized the anchor box clustering using the K-means++ technique to reduce classification errors. Then, we improved the network's backbone using a spatial pyramid pooling fast-plus layer to concentrate on small-sized wildfire smoke regions. Third, a bidirectional feature pyramid network was applied to obtain easier and faster multi-scale feature fusion. Finally, network pruning and transfer learning approaches were implemented to refine the network architecture and detection speed, and to correctly identify small-scale wildfire smoke areas. The experimental results proved that the proposed method achieved an average precision of 73.6% and outperformed other one- and two-stage object detectors on a custom image dataset.
Introduction
When wildfires occur, the first thing observed in the air is a massive column of smoke. A reliable smoke alarm is essential for preventing fire-related losses. Rapidly spreading wildfires, exacerbated by climate change, can have far-reaching consequences on human communities, ecosystems, and economies, if not detected or extinguished quickly [1,2]. Smoke and flame detection are both applicable to wildfire monitoring. Smoke is the first visible indicator of a wildfire. Therefore, an early warning wildfire detection system must be able to detect smoke in natural environments. Smoke from wildfires has the following three primary properties: it is physically present, visually distinct, and dynamic. For a sensor to collect representative samples of smoke, it must be within close proximity of the smoke to detect its physical characteristics. In this study, we primarily focus on the other two properties, visually distinct and dynamic, which are perceptible to a camera. Initially, this section describes the economic issues and the reasons for the wildfires. Second, we analyze the current sensors and methods to detect wildfire smoke. Following this, a deep learning (DL) approach for detecting wildfire smoke is proposed.
Recently, many wild areas and forests have been burned or destroyed. Every year, wildfires destroy millions of acres of land, resulting in enormous losses to human life, vegetation canopies, and forest resources [3,4]. Wildfires are uncontrollable natural disasters that seriously threaten national economies. In addition, dry soil and crop destruction caused by fires have a negative effect on agricultural activities and crop productivity in areas closest to the fires. Recent advancements in computing power, sensing devices, and development software have allowed the use of UAVs for wildfire smoke detection using sophisticated DL-based computer vision algorithms. UAVs can quickly locate an issue, pinpoint its precise location, and alert the appropriate authorities. Hu et al. [36] proposed the MVMNet model for the accurate detection of targets in wildfire smoke. To extract color feature information from wildfire images, Guan et al. proposed the FireColorNet model based on color attention [37]. Fan et al. [38] proposed a lightweight network architecture for wildfire detection; YOLOv4-Light replaces YOLOv4's core feature extraction network with MobileNet and the regular convolutions of the path aggregation network (PANet) with depthwise separable convolutions. Federico et al. [39] established a faster R-CNN model for object detection and developed a DL model for detecting forest fires using transfer learning from a pre-trained RetinaNet. However, in UAV images captured from above, target wildfires appear tiny, with color and shape features that are not immediately visible, making early detection difficult. Thus, the aforementioned color- and shape-feature-based wildfire detection models cannot be directly applied to UAV images. Research on UAV-based methods for the early detection of wildfires therefore faces significant challenges. Furthermore, the precision of wildfire detection based on UAV images is negatively affected by the deficiency of labeled UAV fire image samples; introducing DL methods into UAV image recognition is challenging because such methods require large amounts of high-quality annotation data to achieve satisfactory detection results.
Constructing a deep network model is required to extract more abstract features from images; however, training a deep neural network is a time-consuming and challenging task, and a large number of labeled samples are required. Consequently, this has become a significant bottleneck in identifying wildfires from UAV images, although recent advances in transfer learning offer hope for a solution. Transfer learning [40,41] describes the application of a model trained on one task to a different task, adjusting its parameters to better suit the new context. Overfitting caused by an insufficient number of labeled samples can also be mitigated with transfer learning.
We propose a wildfire smoke detection and notification system based on the enhanced YOLOv5m model [42] and UAV images, which can help overcome the above-mentioned limitations. To identify smoke from wildfires, a core framework that was pre-trained using the common objects in context (COCO) dataset was used. The original network was enhanced by optimizing the network structure parameters, and the pre-trained weights were used as initialization parameters for the backbone network. Noxious gases can be accurately identified by applying the optimized network to the wildfire smoke dataset. Our previous findings [43] inspired this study. As described in Sections 3 and 4, we enhanced the performance of the traditional YOLOv5m network to facilitate rapid wildfire smoke detection and tested our findings on an artificial intelligence (AI) server. The main contributions of this study are as follows:
• A fully automated wildfire smoke detection and notification system was developed to reduce natural catastrophes and the loss of forest resources;
• A large wildfire smoke image dataset was collected using UAV and wildland images of wildfire smoke scenes to improve the accuracy of the deep CNN model;
• Anchor-box clustering of the backbone was improved using the K-means++ technique to reduce the classification error;
• The spatial pyramid pooling fast (SPPF) layer of the backbone part was optimized to focus on small wildfire smoke;
• The neck part was adjusted using a bidirectional feature pyramid network (Bi-FPN) module to balance multi-scale feature fusion;
• Finally, network pruning and transfer learning techniques were used during training to improve the network architecture, detection accuracy, and speed.
The remainder of this paper is organized as follows: Section 2 describes the literature on UAV- and DL-based wildfire smoke detection methods. The experimental dataset is presented in Section 3, along with a comprehensive analysis of the structure of the YOLOv5 model. Section 4 provides an in-depth discussion and an analysis of the experimental findings. An analysis of wildfire smoke detection based on various systems is presented in Section 5. The shortcomings of the proposed system are addressed, and future directions are outlined, in Section 6. Finally, Section 7 summarizes this work.
Related Works
Several vision-based methods have been proposed for wildfire smoke detection, the most prominent of which are based on image color and on deep CNNs. In addition to the recent successes of DL in natural language processing [44] and image classification [45], significant progress has also been made in DL-based wildfire detection methods.
Conventional Image-Based Methods
Wildfire smoke is the most prominent characteristic targeted by early-stage fire-detection systems, and early smoke detection has been a focus of interest for several researchers. Smoke color information was extracted using an energy function by Calderara et al. [46], allowing for wildfire smoke detection; their camera-based system was adequate for detecting fires over an area of 80 m² during the day and night. Nevertheless, color-based smoke detection is not robust because certain smoke colors, such as black, gray, and white, are similar to background elements, including clouds, dust, and mountains. Some researchers have combined color, texture, and dynamic features to enhance smoke detection [47]. Ye et al. [48] employed a pyramid approach to conduct multi-scale decomposition of smoke images and merged the resulting features using a support vector machine (SVM) to achieve smoke identification. This method improves wildfire detection precision because it simultaneously considers the spatial and temporal information of the image sequences. Ye et al. [49] utilized the common motion characteristics of fire and smoke to identify the smoke. At the same time, Islam et al. [50] used color and motion to identify smoke using a combination of Gaussian mixture model (GMM)-based adaptive moving object detection and an SVM classifier. Their method achieved 97.3% accuracy but did not help detect accidental fires beyond the range of surveillance cameras. Previous color-based fire detection methods require extensive parameter tuning, which negatively affects detection stability. To lessen the importance of fine-tuning, Khalil et al. proposed a new fire detection method using the RGB and CIE L*a*b* color models by integrating movement detection with tracking of fire regions [51]. Their technique relied on a GMM that utilized segmented images of fire to identify only the motion of objects that matched the color of the fire. This step detects only moving fire pixels and ignores the other moving pixels.
Deep Learning and UAV-Based Wildfire Smoke Detection
In recent years, there has been an increase in the use of UAVs for a wide range of forestry tasks, such as exploration and rescue operations, forest scouting, forest firefighting, and forest resource surveys. Thus, they represent one of the most promising novel approaches for addressing the issue of wildfire smoke detection. Owing to their high flexibility, low price, ease of use, and ability to fly at various heights, UAV systems are preferred over other available technologies, and developments in hardware and software now make it feasible to process intensive visual data directly from UAVs. Interest in using deep learning-based computer vision techniques for detecting fire and smoke in forests and wildland areas has recently increased. UAVs can employ deep-learning algorithms to autonomously identify the origins of wildfires based on the following two key visual features: smoke and fire, which are the most useful visual cues for quickly and accurately detecting wildfires. Some researchers have concentrated on wildfire detection through fire [52,53], whereas other studies have focused on wildfire detection using smoke features [54,55], which appear more appropriate for early wildfire detection because the fire in its early stage can be concealed, particularly in overgrown forests [56,57]. Several recent studies have aimed to detect smoke and fire simultaneously. Early wildfire detection using UAVs and deep learning methods can be achieved in the following three main ways: wildfire image classification, wildfire detection based on object detection approaches, and wildfire detection using semantic segmentation [58]. However, training these methods requires a significant visual dataset and robust computational resources, both hardware and software. Care is also needed when selecting an appropriate network architecture and training it on an appropriate dataset. In Section 3, we explain and present the proposed early wildfire smoke detection system using deep learning techniques.
Materials and Methods
In this section, we explain the deep learning model used for wildfire smoke detection, the dataset used for training, and the evaluation metrics employed in this study. Before the detection task begins, the navigation procedure, the selection of suitable models and algorithms, and the execution of the system must be completed. As shown in Figure 1, the UAV camera is used to capture photos or videos, and the computer performs preprocessing, feature extraction, and smoke and fire detection, and generates the prediction results.
Overview of the UAV-Based Wildfire Detection System
This study used UAV images, computer vision, and deep learning models to enhance the precision of early wildfire smoke detection in cloudy, hazy, and sunny weather conditions. We propose a wildfire smoke detection and notification system based on an optimized YOLOv5 model and UAV images. Typically, UAVs are equipped with cameras that send data to a ground control station, where they are analyzed using an AI system to detect the presence of smoke or fire. The proposed system employs deep CNNs to detect smoke regions with high accuracy and a powerful processor to execute fast real-time image processing. Figure 1 shows the overall framework of the UAV-based wildfire smoke detection system. Section 3.2 gives a more detailed explanation of the proposed system, Section 3.3 describes data collection and preprocessing, and Sections 3.4 and 3.5 explain transfer learning and the evaluation metrics. In this study, we focused on developing an AI system for early wildfire smoke detection and compared its performance with that of the YOLO models and other state-of-the-art methods.
When working with a UAV, it is essential to control it and receive image and video data remotely. Therefore, line-of-sight, 4G/LTE, and SATCOM communication methods were used to secure operation under various circumstances and at long distances from the ground control station, given the size of the forest area. A typical transmission setup connects the UAV to a line-of-sight ground control station over a radio link. It includes two datalinks (a primary link for image, video, and telemetry exchange within a 180+ km range, and a backup link for telemetry only), with automatic hopping between them in case of Global Navigation Satellite System (GNSS) or signal loss, and advanced encryption standard (AES-256) encryption. Secure VPN technologies, including TLS, IPSec, L2TP, and PPTP, are used for data transport. This approach allows the ground control station to connect with the UAV regardless of range restrictions and to rely on cellular service: the modem concurrently registers itself in the networks of two distinct cellular operators and then chooses the more reliable one. Line-of-sight communications have some disadvantages, considering the range and the possibility of weather interference. SATCOM has historically been considered a Beyond Line of Sight (BLOS) communication system that guarantees a constant connection and reliable data transmission at predetermined distances. A highly directional L-band antenna ensures a small radio signature, and the link is compatible with encryption and secure communication standards such as BRENT, STU-IIIB, TACLANE, STE, and KIV-7. The AI server computer is located at the ground control station to process the image and video data received from the UAVs.
The framework presented in Figure 1 is the fundamental procedure for detecting smoke from wildfires. The deep learning methods applied in this procedure have significantly facilitated the operations of feature extraction and detection by substituting traditional approaches [59].
After acquiring the image and performing the necessary optimization procedures during preprocessing, it is necessary to isolate the pixels that describe the object of interest from the rest of the image. Smoke and fire feature extraction was performed on images taken at different times of day and under different lighting conditions. Motion, color, corners, edges, brightness levels, and intensities are the image characteristics considered in the feature extraction process. Feature extraction is applied to the segmented image to perform an in-depth analysis and locate the essential points of interest. The resulting features are then fed into a trained model to locate patterns that either confirm or rule out the presence of smoke. Figure 2 depicts the detailed procedure of the proposed approach. In the subsequent step, if the AI model produces a positive result, the system sends an alarm via the UAV or the ground control station to the personnel responsible for fire protection so that they can take the necessary steps.
Proposed Wildfire Smoke Detection Method
In this section, we discuss the procedures of computer vision techniques based on deep learning that are executed on an AI server to detect smoke from wildfire images. Our strategy consisted of developing several computer vision techniques that use deep learning to accomplish our purposes.
Original YOLOv5 Model
A state-of-the-art real-time one-stage object detector, YOLOv5, is well suited to our needs, owing to its shorter inference time and higher detection accuracy. Because of its developers' commitment to improving the system, YOLO has become one of the most effective methods for detecting objects in both Microsoft COCO datasets and Pascal VOC (visual object classes). YOLOv5 consists of the following four main models: the extended YOLOv5x, benchmark YOLOv5l, and simplified preset models YOLOv5s and YOLOv5m. The primary distinction between these types of networks lies in the number of feature extraction modules and convolution kernels present at various nodes in the network, with a reduction in resulting model sizes and parameter counts.
Figure 3 illustrates the overall network architecture of the YOLOv5 system. The YOLOv5 model can be broken down into the following three primary parts: backbone, neck, and head. First, Cross-Stage-Partial (CSP) 1 and CSP2 were derived from CSPNet [60] and have two distinct bottleneck CSP structures. The goal is to decrease the amount of duplicated information; consequently, the model parameters and the number of floating-point operations per second (FLOPS) are scaled back. This has the dual effect of accelerating the inference process while simultaneously improving its precision, leading to a smaller model size. Among them, CSP1 is applied in the backbone and CSP2 in the neck, where it is used for feature fusion; both processes are described below. Second, in addition to CSP1, the backbone features Convolution Layer + Batch Normalization + Sigmoid Linear Unit (CBS) and spatial pyramid pooling fast modules. The spatial pyramid pooling fast module chains three 5 × 5 MaxPool layers together, iteratively processes the input through each layer, and finally performs a Concat on the combined output of the MaxPools before applying a CBS operation. Spatial pyramid pooling fast is faster than spatial pyramid pooling while producing the same results. Third, the neck employs a PANet [61]. Using an improved bottom-up path structure, PANet employs a new feature pyramid network (FPN) to transfer feature information at the lowest possible level.
Furthermore, adaptive feature pooling, which connects feature grids to all feature levels, directly reproduces valuable data within each feature level in the following layer. Thus, PANet can use precise localization signals to enhance the precision with which objects are located in the lower layers. Finally, the head enables the model to predict the size of objects across various scales through the generation of three feature maps, one each for small, medium, and large objects.
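To make the building blocks above concrete, the following is a minimal PyTorch sketch of a CBS unit (Convolution + Batch Normalization + SiLU), the basic block referenced throughout the backbone and neck descriptions. The channel counts and input size in the usage example are illustrative assumptions, not the exact YOLOv5m configuration.

```python
import torch
import torch.nn as nn

class CBS(nn.Module):
    """Convolution + Batch Normalization + SiLU activation block."""
    def __init__(self, in_channels, out_channels, kernel_size=1, stride=1):
        super().__init__()
        padding = kernel_size // 2  # 'same' padding for odd kernel sizes
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride, padding, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# Example: a 640x640 RGB input mapped to a 64-channel feature map (illustrative sizes)
x = torch.randn(1, 3, 640, 640)
y = CBS(3, 64, kernel_size=3, stride=2)(x)   # -> (1, 64, 320, 320)
```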
K-Means++ Clustering Technique for Determining Anchor Boxes
In object-detection approaches, high-precision detection requires appropriate anchor boxes. Anchor boxes are a set of initial regions that share fixed dimensions and aspect ratios. The closer the anchor boxes are to the actual bounding boxes, the easier the model is to train and the better the predicted boxes align with the ground truth. Consequently, the anchor parameters of the original YOLOv5 model must be adapted to the specific dataset during training. K-means clustering has been widely used owing to its simplicity and efficiency, and it is used in the YOLOv5 model to obtain the k initial anchor boxes. However, this method requires the initial clustering centers to be set manually, which can lead to noticeable differences in the final clustering output; the main drawback of the K-means algorithm is that it requires inputs such as the initial clustering centers and the number of clusters, k, and specifying suitable initial centers in advance is notoriously problematic. In this study, the K-means++ algorithm was used to obtain the k initial anchor boxes, which fixes these problems of the original K-means algorithm. To obtain anchor box sizes better suited to detecting small objects, K-means++ optimizes the initial point selection and can thus significantly reduce the classification error rate.
The following is a detailed explanation of how K-means++ was used to find the anchor boxes:
(1) Select a random central coordinate c1 from the given dataset X;
(2) Determine the squared Euclidean distance D(x) between each sampling point and its nearest existing center;
(3) Compute the probability P(x) of each sampling point serving as a new cluster center; the sampling point with the maximum probability is chosen as the center of the new cluster. The probability was calculated using the following formula:

$P(x) = \frac{D(x)^2}{\sum_{x \in X} D(x)^2}$

where D(x) is the Euclidean distance from each data point in the dataset to its nearest cluster center, and P(x) is the chance of each sample point becoming the next cluster center;
(4) Once the centers of the first k clusters have been selected, repeat Steps (2) and (3). We define Ci as the set of points closest to center i and update the center of mass of each set Ci for i ∈ {1, 2, 3, ..., k};
(5) Repeat Step (4) until the value of C changes by no more than a threshold, or the maximum number of iterations is reached.
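As an illustration of how anchor priors can be derived with K-means++, the sketch below clusters the labeled box sizes (width, height pairs) of a dataset. It uses scikit-learn's K-means with the "k-means++" initializer rather than the authors' own implementation; the file paths, the choice of nine anchors, and the 640-pixel image size are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def load_box_sizes(label_files):
    """Collect (width, height) of every labeled box from YOLO-format .txt files.

    Each line is: class x_center y_center width height (all normalized to [0, 1]).
    """
    sizes = []
    for path in label_files:
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 5:
                    _, _, _, w, h = map(float, parts)
                    sizes.append((w, h))
    return np.array(sizes)

def kmeanspp_anchors(box_sizes, num_anchors=9, img_size=640):
    """Cluster normalized box sizes with K-means++ initialization and
    return anchors in pixels, sorted by area (small to large)."""
    km = KMeans(n_clusters=num_anchors, init="k-means++", n_init=10, random_state=0)
    km.fit(box_sizes)
    anchors = km.cluster_centers_ * img_size
    return anchors[np.argsort(anchors.prod(axis=1))]

# Hypothetical usage:
# sizes = load_box_sizes(glob.glob("labels/train/*.txt"))
# print(kmeanspp_anchors(sizes))
```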
Spatial Pyramid Pooling Fast
The newest version of YOLOv5, 6.1, includes the spatial pyramid pooling fast (SPPF) layer as the final module of the backbone. The SPPF module comprises three 5 × 5 MaxPool layers through which inputs are iterated; the combined output of the layers is then concatenated before the CBS operation is performed. Figure 4 shows a flowchart of SPPF. The image can learn features at multiple scales with the help of maximum pooling and skip connections, and then combine global and local features to increase the representativeness of the feature map. Maximum pooling is a method that uses a rectangular mask to extract the maximum value from a set of image regions. Although maximum pooling can help reduce irrelevant data, it often results in the discarding of useful feature data.
This study enhances the concept of feature reuse while improving SPPF by utilizing a dense link construction similar to that of DenseNet [62]. In this way, we obtain the SPPF+ module, which helps minimize the feature information lost owing to maximum pooling. The SPPF+ module effectively retains global information on fires affecting small target forest areas. A flowchart of SPPF+ is shown in Figure 5.
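The following is a minimal PyTorch sketch of the SPPF idea described above: three chained 5 × 5 max-pooling layers whose outputs are concatenated with the input and fused by a 1 × 1 convolution. It follows the public YOLOv5 formulation; the dense-link SPPF+ variant proposed by the authors is not reproduced here, and the channel sizes are illustrative.

```python
import torch
import torch.nn as nn

class SPPF(nn.Module):
    """Spatial Pyramid Pooling - Fast: chained 5x5 max-pools, concatenated."""
    def __init__(self, in_channels, out_channels, pool_size=5):
        super().__init__()
        hidden = in_channels // 2
        self.cv1 = nn.Sequential(nn.Conv2d(in_channels, hidden, 1, bias=False),
                                 nn.BatchNorm2d(hidden), nn.SiLU())
        self.cv2 = nn.Sequential(nn.Conv2d(hidden * 4, out_channels, 1, bias=False),
                                 nn.BatchNorm2d(out_channels), nn.SiLU())
        self.pool = nn.MaxPool2d(kernel_size=pool_size, stride=1,
                                 padding=pool_size // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)        # receptive field equivalent to one 5x5 pool
        y2 = self.pool(y1)       # equivalent to a 9x9 pool
        y3 = self.pool(y2)       # equivalent to a 13x13 pool
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))

# Example: 512-channel feature map from the backbone (illustrative size)
out = SPPF(512, 512)(torch.randn(1, 512, 20, 20))   # -> (1, 512, 20, 20)
```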
Bi-Directional Feature Pyramid Network
The goal of multiscale feature fusion is to combine features from various scales [63].
Typically, given a list of multiscale features $P^{in} = (P^{in}_{l_1}, P^{in}_{l_2}, \ldots)$, where $P^{in}_{l_i}$ denotes the feature at level $l_i$, the purpose is to discover a transformation $f$ that effectively aggregates the different features and produces a list of new features, $P^{out} = f(P^{in})$. Figure 6a shows a concrete illustration of a conventional top-down FPN [64]. The input features $P^{in}$ are obtained from the backbone network, which is CSPDarknet, as shown in Figure 2. The FPN accepts level 3-7 input features $P^{in} = (P^{in}_3, \ldots, P^{in}_7)$, where $P^{in}_i$ describes a feature level with a resolution of $1/2^i$ of the input image. For example, if the input size is 640 × 640, then $P^{in}_3$ corresponds to feature level 3 ($640/2^3 = 80$) with a size of 80 × 80, whereas $P^{in}_7$ corresponds to feature level 7 with a size of 5 × 5. The traditional FPN aggregates multiscale features in a top-down manner as follows:

$P^{out}_7 = \mathrm{Conv}(P^{in}_7)$
$P^{out}_6 = \mathrm{Conv}(P^{in}_6 + \mathrm{Resize}(P^{out}_7))$
$\cdots$
$P^{out}_3 = \mathrm{Conv}(P^{in}_3 + \mathrm{Resize}(P^{out}_4))$

where Resize is typically an upsampling or downsampling operation for matching the size, and Conv is a convolutional operation for feature extraction. The purely top-down data flow naturally restricts the traditional FPN's multiscale feature fusion; Figure 6b illustrates how PANet [61] addresses this issue by adding a bottom-up aggregation path.
When using the Bi-FPN as the feature network, top-down and bottom-up bidirectional feature fusion is repeatedly applied to the level 3-7 features (P3, P4, P5, P6, P7) obtained from the backbone network. The box and class networks receive these combined features as input and output predictions for the object class and bounding box. As described in [65], the weights of the box and class networks are shared across all feature levels. Figure 6b,c show the feature network designs of PANet and Bi-FPN, respectively.
The use of Bi-FPN to enhance YOLOv5's neck facilitates easier and faster multiscale feature fusion. Bi-FPN can also repeatedly apply top-down and bottom-up multiscale feature fusion owing to the introduction of learnable weights. Compared with YOLOv5's PANet neck, Bi-FPN performs better with fewer parameters and FLOPS, without sacrificing accuracy. This enhances the capability of identifying wildfire smoke in real time.
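To illustrate the learnable-weight fusion that distinguishes Bi-FPN from a plain FPN or PANet, the sketch below implements fast normalized fusion of feature maps of equal shape. The number of inputs, the channel sizes, and the upsampling choice in the example are illustrative assumptions; resizing of neighboring levels is left to the caller.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fast normalized fusion: out = sum_i(w_i * x_i) / (sum_j w_j + eps),
    with the weights w_i kept non-negative via ReLU and learned during training."""
    def __init__(self, num_inputs, eps=1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, inputs):
        w = torch.relu(self.weights)
        w = w / (w.sum() + self.eps)
        return sum(w[i] * x for i, x in enumerate(inputs))

# Example: fuse a top-down feature with the backbone feature at the same level.
p4_in = torch.randn(1, 256, 40, 40)                                # backbone level-4 feature
p5_up = nn.Upsample(scale_factor=2)(torch.randn(1, 256, 20, 20))   # resized level-5 feature
p4_td = WeightedFusion(num_inputs=2)([p4_in, p5_up])               # intermediate top-down node
```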
Network Pruning
There are the following two primary types of pruning techniques: unstructured and structured. To obtain a particular trade-off between the model's performance and the number of parameters, unstructured pruning employs techniques such as kernel-level pruning, vector-level pruning, and fine-grained pruning. This type of pruning suffers from the need for a dedicated algorithm to support it whenever the network topology changes. Structured pruning, by contrast, primarily modifies the number of feature channels and filter banks of the network and can successfully prune an entire network layer without requiring a custom-designed algorithm. In this study, structured pruning was employed to refine the architecture of the improved YOLOv5m network.
The YOLOv5 network architecture comprises three detection heads formed by a cascade of three different forms at the neck. The three detection heads have different output feature scales (19 × 19, 38 × 38, and 76 × 76) and are used to detect small, medium, and large objects in the images, respectively. There is a need to increase both the efficiency and precision of the detection of small-sized smoke. However, because of its time-consuming nature, the 76 × 76 detection head is not well suited to boosting the inference speed. Thus, a structural pruning technique was applied to the YOLOv5 network's neck part, which involved removing the large object detection heads (76 × 76) and leaving only the medium and small object detection heads in place (38 × 38 and 19 × 19).
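As a hedged illustration of structured pruning, the sketch below uses PyTorch's built-in pruning utilities to zero out a fraction of whole output channels (filters) of a convolution layer according to their L2 norm. Removing an entire detection head, as done in this paper, is instead a matter of editing the model definition/configuration and is not shown here; the layer sizes and pruning ratio are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative convolution layer standing in for one layer of the detection network.
conv = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, padding=1)

# Structured pruning: zero out 30% of the output channels (dim=0 = filters),
# ranked by their L2 norm. This keeps the tensor shape but removes whole filters.
prune.ln_structured(conv, name="weight", amount=0.3, n=2, dim=0)

# Make the pruning permanent (folds the mask into the weight tensor).
prune.remove(conv, "weight")

# Count how many filters are now entirely zero.
zero_filters = int((conv.weight.detach().abs().sum(dim=(1, 2, 3)) == 0).sum())
print(f"{zero_filters} of {conv.out_channels} filters pruned")
```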
Wildfire Smoke Detection Dataset
The accuracy of the deep learning model was highly dependent on the datasets used during the training and testing stages. Our analysis of wildfire smoke detection datasets revealed that the datasets created for vision-based wildfire smoke detection systems were deficient and that existing open-access datasets had their own set of problems. Existing wildfire UAV images [66], a Korean tourist spot database [67] for non-wildfire mountain images, and Kaggle, Bing, Google, and Flickr images were used to address these concerns. Both datasets were crawled from images or videos obtained using drones, as the early wildfire smoke detection model is intended for drone and UAV monitoring applications. The collected images mainly include aerial pictures of wildfire smoke and aerial images of forest backgrounds, with resolutions varying between 2048 × 3072 and 512 × 512 pixels. Recent wildfires in Australia and California are depicted in these images, along with images from Alberta, British Columbia, Colorado, North and South Carolina, and Indonesia, among others. A dataset sample is shown in Figure 7. The dataset consisted of 3285 wildfire smoke images and 2715 non-wildfire smoke images, which were resized to 640 × 640 resolution for network input, as presented in Table 1.
The success of any deep learning model relies heavily on the availability of a large amount of labeled training data. Nevertheless, reliable results for wildfire smoke detection are difficult to achieve in practice using this dataset alone; this could be because of insufficient data, class imbalance, or overfitting, and an overfitted model cannot capture visual patterns accurately. We used image data augmentation (modifying and reusing images) to increase the predictive ability of the model, because insufficient data can also cause underfitting. After reviewing the literature [68,69], we found that geometric transformations, such as flipping and rotation, are the most effective techniques for image data augmentation [70,71], and the efficacy of CNN models depends on the size and resolution of the image datasets used to train them. Consequently, we applied data augmentations such as rotation, horizontal flip, and the mosaic method to increase the number of images in the wildfire smoke detection dataset. As shown in Figure 8, we performed the following transformations on each original fire image: 60° and 120° counterclockwise rotations and a horizontal flip. In this way, we augmented the preexisting training images to make them more general, enabling the model to acquire knowledge from a wider variety of scenarios.
The time required to manually rotate and label every dataset image would have been substantial, so we developed software that uses the OpenCV library to automatically rotate and flip images and simplify the image transformation procedure. The smoke coordinates shift when the labeled images are rotated counterclockwise at specific angles; our software reads the images in a folder, rotates them, and updates their labels so that we did not have to relabel them manually. All wildfire smoke in the images was labeled using the LabelImg tool in the YOLOv5 training annotation format, and the location of the smoke was recorded in a text file within the tag folder. These annotations were then used in the CNN training process. To reduce the number of false positives, we also included training images that did not depict smoke but were visually similar, including wildlands, fog, and clouds, as shown in Figure 7.
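A minimal sketch of the augmentation step described above is given below, assuming one image and its YOLO-format label file; it performs a horizontal flip (for which the label update is exact) with OpenCV. Rotations by 60° and 120° additionally require recomputing the axis-aligned boxes around the rotated corners, which is omitted here for brevity; all file names are hypothetical.

```python
import cv2

def hflip_with_labels(image_path, label_path, out_image_path, out_label_path):
    """Horizontally flip an image and update its YOLO-format labels.

    Each label line is: class x_center y_center width height (normalized).
    A horizontal flip maps x_center -> 1 - x_center; y, width, height are unchanged.
    """
    image = cv2.imread(image_path)
    flipped = cv2.flip(image, 1)           # 1 = flip around the vertical axis
    cv2.imwrite(out_image_path, flipped)

    new_lines = []
    with open(label_path) as f:
        for line in f:
            cls, x, y, w, h = line.split()
            x = 1.0 - float(x)
            new_lines.append(f"{cls} {x:.6f} {y} {w} {h}")
    with open(out_label_path, "w") as f:
        f.write("\n".join(new_lines))

# Hypothetical usage:
# hflip_with_labels("images/smoke_0001.jpg", "labels/smoke_0001.txt",
#                   "images/smoke_0001_flip.jpg", "labels/smoke_0001_flip.txt")
```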
The 6000 images used for wildfire smoke detection were split into training and test sets, with 80% (4800) being used for training. After applying the data augmentation techniques to the training set only, we expanded the dataset images five times. According to Table 2, the total number of images for wildfire smoke detection expanded to 30,000: the wildfire smoke class accounts for 2628 original training images, 5256 and 7884 images in the augmented training sets, and 657 test images, while the non-smoke class accounts for 2172, 4344, and 6516 training images and 543 test images; in total, there are 4800, 9600, and 14,400 training images and 1200 test images.
Transfer Learning
A deep neural model requires a large number of samples to be trained effectively. It is challenging to obtain good detection results by training from scratch because the initial wildfire smoke dataset is relatively small. In contrast to fine-tuning, which entails using a portion of a network that has already been trained on a known dataset to train on a new unseen dataset, transfer learning involves applying previously acquired knowledge from one domain to a new unseen domain. The trained model served as a baseline for training on the target dataset.
In this study, we trained a model to detect small-scale wildfire smoke using a transfer-learning approach to increase the precision of the model. The wildfire smoke detection model was first obtained by training on the full wildfire smoke dataset; that model was then further trained on the small-sized wildfire smoke training set to obtain the small-sized wildfire smoke detection model. The training of the small-sized wildfire smoke detection model using transfer learning is illustrated in Figure 9.
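As a concrete illustration of this two-stage procedure, the snippet below uses the train.run() entry point from the public YOLOv5 repository. The dataset YAML names, checkpoint path, and second-stage epoch count are placeholders rather than the exact settings used in this work.

```python
# Two-stage transfer learning with the YOLOv5 repository's training entry point.
import train  # train.py from the ultralytics/yolov5 repository

# Stage 1: train the wildfire smoke detector, starting from COCO-pretrained YOLOv5m weights
train.run(data="wildfire_smoke.yaml", weights="yolov5m.pt",
          imgsz=640, batch_size=32, epochs=600)

# Stage 2: fine-tune the stage-1 model on the small-sized smoke training set
train.run(data="small_smoke.yaml", weights="runs/train/exp/weights/best.pt",
          imgsz=640, batch_size=32, epochs=300)
```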
Evaluation Metrics
Based on our previous studies [41,72-74], we conducted quantitative experiments using Microsoft COCO benchmarks (Table 3), which are commonly used in object detection tasks, and analyzed the results. Precision measures how many of the objects identified by the classifier are correct, whereas recall quantifies the ability of the model to recognize relevant cases, i.e., the rate of correct predictions relative to the total number of ground truths. A good model has high recall (it correctly identifies most ground-truth objects) while recognizing only the objects of interest (it shows high precision); if the recall and precision of a model are both 1, the model is perfect and the number of false negatives is zero. The precision and recall rates were calculated by comparing the results of the proposed method with the ground-truth images at the pixel level, using the following equations:
Precision = TP / (TP + FP), (5)
Recall = TP / (TP + FN), (6)
where TP represents the number of smoke regions that were correctly identified, FP represents the number of false positives that occurred when non-smoke regions were misidentified as smoke, and FN represents the number of false negatives that occurred when an actual smoke region was incorrectly identified as a non-smoke region. From precision and recall, the average precision (AP) is obtained as the area under the precision-recall curve:
AP = ∫₀¹ p(r) dr. (7)
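For reference, the three quantities can be computed as in the following sketch; the simple step-wise integration of the precision-recall curve stands in for the interpolated procedure used by the COCO evaluation tools.

```python
def precision_recall(tp, fp, fn):
    # Equations (5) and (6): fraction of predicted smoke regions that are correct,
    # and fraction of ground-truth smoke regions that were found.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def average_precision(recalls, precisions):
    # Equation (7): area under the precision-recall curve, approximated here
    # by summing precision over successive recall increments.
    ap, prev_r = 0.0, 0.0
    for r, p in sorted(zip(recalls, precisions)):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```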
Experimental Results
This section explains the experiments conducted and the results of the AI server smoke-detection models for wildfires. The enhanced YOLOv5m model was trained on a personal computer equipped with an 8-core 3.70 GHz CPU, Nvidia GeForce 1080 Ti GPUs, and 32 GB of RAM [41]. A wildfire smoke dataset was used for training and testing. The input image size was 640 × 640 pixels, the number of epochs was 600, the subdivision was 8, the learning rate was 0.001, and the batch size was 32. To enhance the model's performance, we tuned two particularly crucial parameters: the batch size and the learning rate.
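The key training settings reported above can be collected into a single configuration, as in this sketch; the device string and dictionary name are illustrative only.

```python
# Training configuration for the improved YOLOv5m model, as reported in the text.
train_cfg = {
    "img_size": 640,        # network input resolution (width x height)
    "epochs": 600,
    "subdivisions": 8,
    "learning_rate": 0.001,
    "batch_size": 32,
    "device": "cuda:0",     # Nvidia GeForce 1080 Ti GPU
}
```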
A high-performance AI server was selected over embedded systems to improve the energy storage activity of UAV systems and guarantee real-time system implementation. The effectiveness of the proposed system in detecting smoke from wildfires depends on the efficiency of the AI server. There is a high demand for AI server processing power to train deep-learning models for smoke detection and notification systems in the event of wildfires. Table 4 shows the results of our experiments, in which we evaluated the implementation of the proposed system with the help of a high-power AI server [41].
Qualitative Evaluation
First, we conducted a qualitative analysis of the proposed method for detecting smoke from wildfires. In the test set of our wildfire smoke dataset, we randomly selected four images for large-sized smoke and another four images for small-sized smoke. The improved YOLOv5m model yielded qualitatively similar results for images of both large-(a) and small-(b) sized smoke, as shown in Figure 10. These eight pictures show different scenes and situations as well as smoke blowing in various directions.
As shown in Figure 10, the proposed wildfire smoke detection method uses the improved YOLOv5m model, which can detect smoke in a wide range of forest scenes. We also experimented with both large- and small-sized smoke images to ensure the stability of the proposed method. Early detection of smoke is essential for wildfire prevention and suppression. If not controlled in time, even a small amount of smoke can lead to a devastating wildfire that poses a threat to human life, forest resources, and the environment. The proposed method for detecting wildfire smoke can also accurately detect relatively small regions of smoke in images.
According to experiments, the suggested method has been shown to be effective in decreasing false detections and allowing for early suppression and rapid response times, regardless of the size, direction, or shape of wildfire smoke. In most cases, when a small amount of smoke has the same color and pixel intensity values as the background, traditional visual fire detectors falsely detect it.
Quantitative Evaluation
As a second step, we conducted quantitative experiments using Microsoft COCO metrics, such as precision, recall, and AP (as determined by Equations (5)-(7)).
In our dataset, images have various smoke sizes, such as large and small, as well as close and long distances. To determine the most effective model, we conducted experiments with several different members of the YOLOv5 network family. YOLOv5n is a cutting-edge platform that is a reliable solution for embedded systems, the Internet of Things (IoT), and edge devices. YOLOv5s can be used in both desktop and mobile software. YOLOv5m is the best model for a wide range of datasets and training scenarios because it achieves a good balance between speed and accuracy. For datasets where it is necessary to discover smaller objects, the YOLOv5l large model is the best option. The largest of the five models, YOLOv5x, is the most accurate. As can be seen in Table 5, larger models, such as YOLOv5l and YOLOv5x, take longer to run and have more parameters, but they also yield superior results in practice. Owing to the scope of our dataset and the nature of our research, we selected YOLOv5m. The ability to quickly detect and report wildfire smoke is crucial for limiting the destruction of forest ecosystems and saving lives, and YOLOv5m can assess smoke from wildfires of varying sizes and directions. Table 5 presents a more detailed comparison of all the models, including the inference speed on the CPU and GPU and the number of parameters for a 640 × 640 image size [43]. Subsequently, we tested the enhanced YOLOv5m model on the original dataset of 6000 images and on the full augmented dataset of 30,000 images to see how well it performed. As shown in Table 6, the improved YOLOv5m model performed better when using the complete augmented dataset than when using the original dataset, with average precisions of 82.7% and 75.6%, respectively. The variation in model weight size in Table 6 does not depend on data augmentation; as far as we know, the main reason is that the training ended before the final epoch. In addition to the FP16 model, training checkpoints include an FP16 EMA and an FP32 optimizer (each model parameter has its FP32 gradient saved within the optimizer). After the last training epoch, the EMA and optimizer are removed from the final checkpoint, leaving only the FP16 model. Initially, the number of epochs was set to 600, but the training process was manually ended at 300 epochs because the predefined learning rate and accuracy had been reached. We found that if the training process ended before the final epoch, the weight size could be up to four times larger than the original model's weight size.
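The comparison across the YOLOv5 family can be reproduced by loading each published variant from the official hub, as sketched below; the sample image path is a placeholder, and the fine-tuned smoke-detection weights would be loaded separately.

```python
import torch

# Load each member of the YOLOv5 family and run a quick sanity check on one image.
for variant in ("yolov5n", "yolov5s", "yolov5m", "yolov5l", "yolov5x"):
    model = torch.hub.load("ultralytics/yolov5", variant, pretrained=True)
    results = model("sample_smoke.jpg")  # inference on a single test image
    results.print()                      # detections plus per-image speed
```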
We evaluated the precision of the proposed method by deploying various YOLO variants on the original wildfire smoke dataset (6000 images) and comparing the resulting precisions (Table 7). As can be seen in Table 7, the improved YOLOv5m has an average precision of 75.6% with the training set, whereas the average precisions of YOLOv5m, YOLOv4, and YOLOv3 were 73.5%, 71.3%, and 65.6%, respectively. We also compared the average precision results of various YOLO implementations on the augmented fire dataset (30,000 images) to measure the performance of the proposed approach. The enhanced YOLOv5m model achieved the top results in both the training and testing phases (82.7% and 79.3%, respectively; Table 8). Compared with the enhanced YOLOv5m model, which achieved an average precision of 79.1% in the test set, YOLOv5m's result was 75.4% (a difference of 3.9%). YOLOv4 and YOLOv3 were trained to average precisions of 73.5% and 78.1%, respectively. These models were trained longer than those in the earlier experiments because of the larger number of images in the augmented dataset. With the help of data augmentation techniques, we were able to boost the overall precision on our training dataset from 75.6% to 82.7% (7.1 percentage points) and the precision on our test dataset from 72.4% to 79.3% (6.9 percentage points).
Although the average precision of the test set was 79.3%, we have researched and evaluated several recently presented methods to improve this result. Most methods proposed for detecting smoke from small wildfires in images have failed [50]. Therefore, to broaden our dataset and improve the precision of wildfire smoke detection, we collected images of small-sized smoke from wildfires. Images of smoke of relatively small sizes are shown in Figure 11. Based on [8], we combined a large-scale feature map with a feature map from a previous layer to detect small moving objects while preserving fine-grained features. Smoke pixels of varying sizes can be detected using this comprehensive feature map by combining location data from lower layers with more complex characteristics from higher ones.
To comprehensively explore the performance of the proposed method, we compared it with two-stage methods such as Fast R-CNN, Faster R-CNN+++, Mask R-CNN, Cascade R-CNN, CoupleNet, and DeNet, and one-stage methods such as RFBNet, SSD, RefineDet, DeepSmoke, EfficientDet, YOLO, YOLOv2, YOLOv3, YOLOv4, and YOLOv5 object detectors. Table 9 shows a performance comparison between the improved YOLOv5m model and the other six two-stage object detectors using the wildfire smoke dataset. To compare and assess the performance of the object detector models, we utilized the same training and testing images of smoke from the custom wildfire smoke dataset. Table 10 shows a performance comparison between the improved YOLOv5m model and the other six one-stage object detectors using the wildfire smoke dataset.
Figure 11. Examples of images of small-sized smoke for the wildfire smoke detection dataset.
As we can observe, the improved YOLOv5m model achieved the best smoke detection performance on our wildfire smoke dataset in terms of the AP, AP50, AP75, APS, APM, and APL evaluation metrics.
Ablation Study
Ablation experiments were conducted to confirm whether the SPPF+ and BiFPN modules described in this paper enhance accuracy. Four ablation experiments were performed: YOLOv5m, YOLOv5m + (SPPF+), YOLOv5m + BiFPN, and YOLOv5m + (SPPF+) + BiFPN. Experiments 2-4 were performed as follows: in experiment 2, the original YOLOv5m model was trained with only SPPF+ added; in experiment 3, the model was trained with only BiFPN added to the neck part of the original YOLOv5m; and in experiment 4, SPPF+ and BiFPN were added to the original YOLOv5m and trained together. Table 11 shows the comparison of the ablation experiments. Even though YOLOv5m is one of the most well-known object detection models, its baseline results are relatively low, as demonstrated by the ablation studies (Table 11). These results verify that replacing the SPPF module and changing the PANet design in YOLOv5m to a BiFPN design can enhance the model's performance.
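For orientation, the standard YOLOv5 SPPF block that the SPPF+ module replaces is sketched below. The exact modifications that turn it into SPPF+ are not detailed here, so this should be read as the unmodified baseline rather than the proposed module, and the Conv block is a simplified version of the repository's implementation.

```python
import torch
import torch.nn as nn

class Conv(nn.Module):
    # Simplified Conv-BN-SiLU block, as used throughout YOLOv5
    def __init__(self, c1, c2, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SPPF(nn.Module):
    # Baseline YOLOv5 SPPF: one 5x5 max-pool applied three times in series,
    # equivalent to pooling with 5/9/13 kernels but computationally cheaper
    def __init__(self, c1, c2, k=5):
        super().__init__()
        c_ = c1 // 2
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.m(x)
        y2 = self.m(y1)
        y3 = self.m(y2)
        return self.cv2(torch.cat((x, y1, y2, y3), dim=1))
```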
Analysis of Wildfire Smoke Detection Based on Various Systems
Thermal sensors. As a form of thermal energy, heat is transferred from warmer to colder regions. Sensing the thermal energy transferred via convection requires either a heating element or an infrared camera. The heating element can detect temperature shifts through a change in refractive index, displacement, or resistance. Amplification circuits, signal conditioning, and heating elements are the three main components of thermal sensors. A thermal sensor is used to assess the level of fire-related heat within a building. There are three distinct types of thermal sensors: fixed-temperature, rate-of-rise, and rate-compensating sensors. A fixed-temperature detector triggers at a minimum operating temperature or a predetermined temperature threshold, whereas a rate-compensating sensor is triggered when the ambient air temperature rises above its set point.
Smoke sensors. Currently, the most prevalent and widely used fire alarm systems are based on smoke sensors. Smoke can be detected in two different ways: photoelectric (light scattering) and ionization. To detect smoke, ionization smoke detectors use a radioactive source, whereas photoelectric detectors use a photodetector and a light source. When smoke is present in the air, the particles scatter light, and either light obscuration or light scattering can be measured by the detector. Regardless of the detection method, an alarm is produced when the signal reaches a certain threshold. This sensing principle makes fire alarms effective, responsive, and reliable.
In actual flames, ionization alarms typically react more quickly than photoelectric alarms. Compared with ionization detectors, photoelectric alarms are more reliable and sensitive in the presence of flaming fires. In summary, smoke detectors are particle detectors that are sensitive to a narrow range of particle sizes. When the signal from the smoke detector rises above a specific limit, an alarm is activated. These systems cannot always distinguish between fire-related and non-fire-related particles when they are of the same size or have the same refractive index. For instance, fire alarms are easily damaged by dust and humidity. Both photoelectric and ionization fire alarm systems suffer from high false alarm rates owing to cross-sensitivity. Additional sensors can be added to the smoke detectors to improve the reliability of fire detection.
Vision-based fire detection. Traditional heat, smoke, flame, and gas sensors are problematic because they take too long to reach predefined values; this is the time it takes for the particles to travel to the point sensors and trigger them. The limited coverage area is another issue, because a large number of point sensors are required to monitor large areas. When describing a fire, it is important to consider its origin, location, intensity, shape, size, growth, and dynamic texture, and traditional sensors cannot identify all these nuances. Most currently available sensors generate unnecessary alarm signals and incur additional financial expenses. The use of cameras to capture and analyze images of smoke or fire is an effective way to reduce these problems. UAV and surveillance cameras can be used instead of expensive smoke- and fire-detection sensors to further reduce costs.
Sun-synchronous satellites. Recently, numerous studies have attempted to detect forest wildfires using satellite imagery. This is primarily attributable to the high volume of satellite launches and declining costs. The data from three different types of multispectral imaging sensors onboard sun-synchronous satellites-the advanced very-high-resolution radiometer (AVHRR), the moderate resolution imaging spectroradiometer (MODIS), and the visible and infrared imaging radiometer suite (VIIRS)-have all been used to detect wildfires. Given its substantial similarity to clouds, haze, and other similar phenomena, smoke detection using MODIS data is a complex problem that has been addressed in multiple studies. Shukla et al. [87] proposed a multiband thresholding technique as a basis for automatic smoke detection using MODIS data. The algorithm appears to be able to distinguish smoke pixels from backgrounds with other elements, such as clouds, although it is better when the smoke is fresh and dense than when it is more dispersed. Priya et al. [88] also utilized a dataset of 534 RGB satellite images gathered from various sources, such as MODIS images available on the NASA Worldview platform and Google. A robust method for distinguishing between fire and non-fire images was developed using a CNN based on Inception v3 and transfer learning. Thresholding and nearby binary patterns were then used to isolate the areas where fires existed.
Geostationary satellites. The Advanced Himawari Imager (AHI) sensor of the Himawari 8 weather satellite has already been used to conduct crucial work on fire and smoke detection using satellite imagery from geostationary satellites. The Himawari 8 satellite is part of the Japan Meteorological Agency's new-generation geostationary weather satellites. Compared to its predecessor, AHI 8 claims significantly improved radiometric, spectral, and spatial resolutions. Its primary mechanism is called the advanced baseline imager (ABI), which captures images of Earth in 16 different visible and infrared spectral bands at extremely high spatial and temporal resolutions. Recently, using information from the Himawari 8/AHI sensor, Larsen et al. [89] introduced a deep FCN for near-real-time prediction of fire smoke in satellite imagery.
CubeSats. Miniaturized satellites known as "CubeSats" are becoming increasingly popular in remote sensing. These satellites typically weigh between 1 and 10 kg and conform to the well-known "CubeSat" standard, which defines the outer dimensions of the satellite as multiples of a cubic unit of 10 cm × 10 cm × 10 cm. They can accommodate small technology payloads for various scientific research or commercial functions, as well as for exploring new space technologies. Owing to their specific design, CubeSats are technically easier to operate in the LEO zone. As of January 2020, over 1100 CubeSats had been successfully launched by various academic institutions and commercial enterprises worldwide. Shah et al. [90] introduced a system comprising a constellation of nanosatellites equipped with multispectral visible-to-infrared cameras and a ground station, which would allow all surface points on the planet to be revisited at least once per hour. To accurately estimate the thermal output of the surface, it must be captured with high resolution in both the mid- and long-wave infrared. Based on computer simulations, a multispectral infrared camera measuring incident power in two thermal infrared bands (mid-wave and long-wave) can detect a fire that spans approximately 400 square meters. Because of the system's built-in data-processing capabilities, it can issue a warning about a wildfire within 30 min while using very little network bandwidth.
Limitations and Future Research
Despite these successes, the proposed wildfire smoke detection and notification system has certain limitations. One of these is the ability to differentiate between real smoke and phenomena such as fog, haze, and clouds, which can make it appear as though smoke is present. These drawbacks are illustrated in Figure 12. These restrictions are most noticeable in scenes with dense clouds or haze, whose pixel values are close to those of a smoke plume. Our next step is to update the wildfire smoke detection system to distinguish between different-sized clouds and different-shaped smoke so that we can better detect the source of a fire. These methods improve the ability of the model to predict the presence of smoke by effectively expanding the size of our training data and extracting better representations from the data. One technique that could broaden the scope of this field is to determine the size and shape of the smoke. The lack of consideration of the model's potential for nighttime detection of wildfires is another potential issue. As daytime smoke detection was the focus of this investigation, this temporal variable was omitted. Our research suggests that smoke detectors are less reliable in the dark than fire detectors. Agirman et al. [91] proposed a method that incorporates both the spatial and temporal behavior of a nighttime wildfire by using a CNN+RNN-based network and a bidirectional long short-term memory network to detect fire. Note that this study covered only the AI server part of the wildfire smoke detection and notification system.
Future work will focus on addressing the model's limitation of having a high number of false positives in challenging scenarios, such as low-altitude cloud cover and haze. Incorporating fire location, date, and weather data from historical fire records can improve predictions because fires tend to occur in similar locations and conditions during specific months. Another drawback of the proposed approach is its incompatibility with edge devices. However, we hope to address this in future work by reducing the model size without compromising prediction accuracy.
It is feasible to create a model that is better suited for edge computing by using distillation to train a smaller deep network, such as YOLOv5n, to achieve the same level of performance as our current model.
Conclusions
Several researchers have attempted to employ a CNN-based deep learning model to enhance wildfire smoke detection systems as remote camera sensing technology has advanced. Collecting sufficient image data for training models in wildfire detection is challenging, leading to data imbalance or overfitting issues that decrease the model's performance. This study developed an early wildfire smoke detection and notification system using an improved YOLOv5m model and a wildfire image dataset.
Initially, the classification error was decreased using the K-means++ method to enhance the anchor box clustering of the backbone part. Second, the SPPF layer of the backbone part was upgraded to SPPF+ to focus better on the smoke from small wildfires. A BiFPN module was implemented as a third step to fine-tune the neck and achieve a more precise fusion of features across multiple scales. Finally, during training, network pruning and transfer learning techniques were used to enhance the network architecture, detection accuracy, and speed. The proposed wildfire smoke detection method was trained using smoke images, including various wildfire smoke scenes, and conducted on an AI server. We collected a wildfire smoke dataset that included 6000 smoke and non-smoke images for model training and testing. We compared the proposed system to other popular two-stage object detectors in an experiment to evaluate the qualitative and quantitative performance. Based on the experimental results and evaluation, it was concluded that the enhanced YOLOv5m model is robust and outperforms other methods in the training and testing steps, with 82.7% and 81.5% AP50, respectively, on the custom smoke image dataset.
"year": 2022,
"sha1": "dd6dc3d2e24704be75416d5f0fd09dadc0870743",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/23/9384/pdf?version=1669897124",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e6914fd03a6cb1d2067c390980d422ea87b25c46",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": []
} |
Comparative review of drug–drug interactions with epidermal growth factor receptor tyrosine kinase inhibitors for the treatment of non-small-cell lung cancer
Abstract The development of small-molecule tyrosine kinase inhibitors (TKIs) that target the epidermal growth factor receptor (EGFR) has revolutionized the management of non-small-cell lung cancer (NSCLC). Because these drugs are commonly used in combination with other types of medication, the risk of clinically significant drug–drug interactions (DDIs) is an important consideration, especially for patients using multiple drugs for coexisting medical conditions. Clinicians need to be aware of the potential for clinically important DDIs when considering therapeutic options for individual patients. In this article, we describe the main mechanisms underlying DDIs with the EGFR-TKIs that are currently approved for the treatment of NSCLC, and, specifically, the potential for interactions mediated via effects on gastrointestinal pH, cytochrome P450-dependent metabolism, uridine diphosphate-glucuronosyltransferase, and transporter proteins. We review evidence of such DDIs with the currently approved EGFR-TKIs (gefitinib, erlotinib, afatinib, osimertinib, and icotinib) and discuss several information sources that are available online to aid clinical decision-making. We conclude by summarizing the most clinically relevant DDIs with these EFGR-TKIs and provide recommendations for managing, minimizing, or avoiding DDIs with the different agents.
Introduction
The occurrence of drug-drug interactions (DDIs) is a serious problem for the use of anticancer drugs. DDIs can exacerbate the risk of serious or fatal adverse events, and/or lead to reductions in therapeutic efficacy. 1 In particular, inducers of drug-metabolizing enzymes are known to increase the systemic clearance of many anticancer agents, 2 for example, long-term anticonvulsant therapy increases the systemic clearance of several antileukemic agents, thereby reducing their clinical efficacy. 3 Tyrosine kinase inhibitors (TKIs) are effective for a wide variety of solid and hematologic malignancies, and are now established as standard therapeutic options; 1 more than 20 different TKIs are currently in use. 4 In particular, the development of small-molecule TKIs that target the epidermal growth factor receptor (EGFR) -EGFR-TKIshas revolutionized the management of locally advanced/metastatic non-small-cell lung cancer (NSCLC). 5 This article reviews clinically relevant DDIs with EGFR-TKIs approved for the treatment of NSCLC.
The EGFR (HER1; ErbB1) is a member of the ErbB receptor family, which also includes HER2 (Neu, ErbB2), HER3 (ErbB3), and HER4 (ErbB4). 6,7 The main physiological role of ErbB-linked tyrosine kinases (TKs) is regulation of cellular proliferation. 6 Somatic EGFR mutations are important oncogenic drivers of NSCLC, [8][9][10] and occur in 10-15% of the Caucasian patients and around 50% of the Asian patients with metastatic NSCLC and adenocarcinoma histology. 11 The most common EGFR mutations are deletions in exon 19 (del19) and L858R point mutations in exon 21. 12 These "activating" mutations lead to activation of intracellular signaling by EGFR in a ligand-independent manner. 13,14 NSCLC patients with activating EGFR mutations become dependent on EGFR activity to stimulate downstream signaling pathways and maintain the malignant phenotype ("oncogene addiction"). 15,16 Consequently, blocking ErbB family pathways with EGFR TKIs can inhibit tumor cell proliferation and initiate apoptosis.
EGFR-TKIs widely available for the treatment of advanced NSCLC include the first-generation reversible TKIs, erlotinib and gefitinib; the second-generation irreversible ErbB family blocker, afatinib; and the third-generation EGFR-wild-type sparing, irreversible EGFR/T790M inhibitor, osimertinib. 17 In China, another first-generation EGFR-TKI, icotinib, is available. 18 EGFR-TKIs are recommended as firstline treatment options for advanced EGFR mutation-positive (EGFRm+) NSCLC, 19,20 having demonstrated robust benefit in terms of progression-free survival (PFS, erlotinib: median 9.7-13.1 months; 21-23 gefitinib: 8.4-10.9 months; 24-28 icotinib 4.0-12.4 months; 29 afatinib: 11.0-13.6 months; 25,30,31 osimertinib: 17.7 months). 32 The first-generation EGFR-TKIs, gefitinib and erlotinib, bind reversibly to EGFR TK and thereby inhibit both mutant and (to a lesser extent) wild-type EGFR. 33 In randomized phase III trials, both agents demonstrated improved PFS and response rates, but not overall survival (OS) compared with platinum-doublet chemotherapy. [21][22][23][24]27,28,34 However, patients with EGFRm+ NSCLC inevitably develop resistance to first-generation EGFR-TKIs. 35 In the majority (50-60%), resistance develops due to development of a novel T790M missense mutation in exon 20 of the EGFR, [36][37][38] which impairs binding of the EGFR-TKIs to the kinase domain of the receptor. 39 Another first-generation EGFR-TKI, icotinib has a similar chemical structure and physicochemical properties to erlotinib and displays similar clinical efficacy. 29 A randomized, double-blind trial in Chinese patients with advanced NSCLC who had failed to respond to chemotherapy concluded that icotinib was non-inferior to gefitinib with regard to PFS; median PFS was 4.6 months for icotinib and 3.4 months for gefitinib. 40 In a meta-analysis of pooled data for icotinib in patients with NSCLC (11 studies), the pooled mean PFS was 7.34 months (95% confidence interval: 5.60-9.07). EGFRm+ patients had longer PFS (median 11.0 months) than EGFRm-patients (1.97 months). 29 Further "generations" of EGFR-TKIs have also been developed, the aim being to improve efficacy and tolerability, and to overcome treatment resistance; data from head-to -head trials suggest improved outcomes with second-/thirdgeneration versus first-generation TKIs.
In contrast to gefitinib and erlotinib, afatinib is an ErbB receptor family inhibitor. Afatinib binds covalently and irreversibly blocks signaling via all hetero-and homodimers formed by ErbB1, but also by HER2, HER3, and HER4. 41,42 It was postulated that this broad irreversible inhibition might delay or avoid the development of resistance. 43 In two phase III trials conducted in treatment-naïve NSCLC patients, afatinib demonstrated significant improvements in PFS versus platinum-doublets, 30,31 while pre-specified analyses of both trials identified significant improvements in OS with afatinib versus chemotherapy in patients with del19 mutations. 44 In a phase IIb study of treatment-naïve patients with advanced EGFRm+ NSCLC, afatinib significantly improved PFS, time to treatment failure (TTF) and objective response rate (ORR) versus gefitinib, with a trend towards improvement in overall survival (OS). 25,45 Osimertinib is an oral, third-generation, irreversible EGFR-TKI inhibitor 46 that selectively inhibits EGFR harboring both activating mutations and EGFR T790M resistance mutations, with less activity versus wild-type EGFR. 47 In a double-blind phase III trial in patients with previously untreated, EGFRm+ (del19 or L858R) advanced NSCLC, osimertinib demonstrated efficacy superior to that of standard EGFR-TKIs (gefitinib or erlotinib). Median PFS for osimertinib was significantly longer than for standard EGFR-TKIs (18.9 months versus 10.2 months). 32 In a phase I study in patients harboring the EGFR T790M mutation having progressed during therapy with EGFR-TKIs, second-line (or later) osimertinib was highly active, with median PFS in T790M-positive patients (9.6 months) being substantially longer than in T790M-negative patients (2.8 months). 48 A single-arm study in patients harboring both activating EGFR mutations and the T790M mutation and with progression after prior EGFR-TKI therapy also showed antitumor efficacy of osimertinib (objective response rate 64%; disease control rate 90%) suggesting that osimertinib can overcome T790M-mediated acquired resistance. 49 Another secondgeneration EGFR-TKI, dacomitinib, recently became available in the US and EU, but to date there is little published evidence on potentially clinically relevant DDIs with dacomitinib.
In the treatment of NSCLC, EGFR-TKIs are commonly used together with other types of medication. Consequently, the risk of serious DDIs should be taken into consideration when selecting appropriate treatment. 5 As the increasing molecular stratification of lung cancer has provided more options for targeted intervention and rational combination therapy, a clear understanding of the DDI profiles of different TKIs has become essential. Clinicians need to consider the potential implications of clinically important DDIs when formulating individualized therapeutic strategies for their patients. In this article, we review the key pharmacologic properties of the EGFR-TKIs currently approved for the treatment of NSCLC and the clinically relevant DDIs associated with each agent.
Literature search strategy
We searched the published literature (English language only) relating to established and potential DDIs between the EGFR-TKIs of interest (ie, those currently approved for the treatment of NSCLC) and other prescription drugs, over-the-counter drugs, and herbal medicines. Relevant publications were identified by means of searches of US National Library of Medicine (NLM) PubMed, using the search terms [interaction] OR [drug-drug] AND [Drug name (for each EGFR-TKI)]. Other relevant publications were identified from citations in the key publications identified via NLM PubMed. Further information was obtained from the US and EU prescribing information for each agent (not available for icotinib).
Interactions via effects on gastrointestinal pH
Other than the physicochemical properties of the different TKIs, the most important factor affecting the solubility of/exposure to these agents is gastric pH. 1 For TKIs with a pKa of less than 4-5, concomitant administration of agents that increase stomach pH can reduce TKI solubility, absorption, and bioavailability. 5,50 In particular, clinically relevant DDIs due to changes in gastric pH have been demonstrated between histamine H2-receptor antagonists such as ranitidine, or proton pump inhibitors (PPIs) such as omeprazole, and a number of TKIs, including crizotinib, dasatinib, erlotinib, gefitinib, lapatinib, and pazopanib. 50,51 As patients with cancer often take acid suppressants for symptoms of gastrointestinal reflux, the potential for such interactions is clinically important. 5
Interactions via effects on cytochrome P450 (CYP)-dependent metabolism
Phase I metabolism (mostly oxidative) by liver cytochrome P450 (CYP)-dependent enzymes is the most important route of drug metabolism in vivo. 1 Many TKIs are metabolized by this family of enzymes, which makes them prone to metabolic DDIs. 5,50 Indeed, most pharmacokinetic (PK) interactions that affect the EGFR-TKIs involve effects on metabolism, especially via CYP enzymes. Potent enzyme inhibitors and inducers can modify the exposure (the area under the plasma concentration-time curve [AUC] and the maximum plasma concentration [Cmax]) of specific EGFR-TKIs, while EGFR-TKIs that are CYP enzyme substrates can affect the PK of other drugs. 5 Increased or decreased exposure due to alteration of CYP enzyme activity could lead to clinically relevant toxicity or reduced effectiveness of the EGFR-TKIs. 1 Cigarette smoking is also known to induce key CYP enzymes 52 and has been shown to affect the PK profiles of specific TKIs. 5 As shown in Table 1, the extent to which different EGFR-TKIs are metabolized by CYP enzymes varies markedly, as do their effects as inducers or inhibitors of CYP enzyme activity. Nevertheless, there is wide potential for interactions with other drugs used in supportive therapy or for the treatment of comorbidities in patients with NSCLC.
CYP2D6, in particular, has been reported to be responsible for the metabolism of up to 25% of commonly prescribed drugs. 53,54 Importantly, CYP2D6 is encoded by a highly polymorphic gene, with more than 70 alleles and 130 genetic variations, 55 which could have a significant influence on up to a half of the drugs metabolized by this enzyme. 56 There is marked inter-ethnic variation in the frequency of the different alleles, leading to substantial variability in the prevalence of the four main phenotypes (poor, intermediate, extensive, and ultra-rapid metabolizers). 57 The poor metabolizer phenotype, in particular, has been widely examined in relation to adverse drug reactions. 58 Poor metabolizers account for 5-10% of the Caucasian populations, but are rarely found in Asia, and are highly variable among people of African ancestry. 59,60 Conversely, about 10-15% of the Caucasians are intermediate metabolizers, compared with up to 50% of the Asians and 30% of the Africans. 57 Interactions via effects on UDP-glucuronosyltransferase Uridine-diphosphate (UDP)-glucuronosyltransferase (UGT) catalyzes the conjugation of glucuronic acid to endogenous substances and exogenous compounds. 4 Most UGT isoforms are expressed in the liver. 4 Since UGTs are ratelimiting enzymes in the metabolism of various compounds, co-administration of UGT-inhibiting drugs can lead to an increase in the concentration of such compounds in the circulation. 61 In particular, UGT1A1 plays a key role in the metabolism and detoxification of many potentially harmful compounds and drugs, and thus inhibition of UGT1A1 can lead to severe DDIs and other undesirable effects. 62 UGT1A1 is also involved in the metabolic elimination of endogenous bilirubin, preventing accumulation to toxic levels. 62 Several TKIs are potent inhibitors of UGT1A1 (eg, erlotinib, lapatinib, nilotinib, pazopanib, regorafenib, and sorafenib), [63][64][65] and this may underlie some of the adverse events observed with these agents, such as hyperbilirubinemia and hepatotoxicity. 62,64,66 Interactions via effects on transporter proteins In order to reach the portal blood circulation, TKIs need to pass through the gut wall. This involves both passive diffusion and active transport via organic anion-transporting peptide (OATP), organic cation-transporting peptide (OCTP), multidrug-resistance-associated proteins such as ATP-Binding Cassette (ABC) transporter G2 (ABC-G2), efflux transporters such as P-glycoprotein (P-gp, also known as multidrug resistance protein 1 [MDR1], ABC sub-family B, member 1 [ABC-B1] or cluster of differentiation 243 [CD243]), and intestinal metabolic enzymes such as CYP3A4. 1 P-gp and ABC-G2 are expressed in the small intestine, liver, kidneys, and blood-brain barrier (BBB); they appear to regulate the oral absorption, biliary and renal elimination, and also the BBB penetration of several anticancer drugs, including TKIs. 1,2 The role of P-gp in the absorption of TKIs has been widely studied. Some TKIs (eg, crizotinib) are P-gp substrates, so inhibition or induction of P-gp due to concomitant administration of another drug could lead to clinically relevant DDIs. 1 Others such as pazopanib, lapatinib, and gefitinib directly inhibit P-gp activity, so could increase the bioavailability of concomitantly administered P-gp substrates. 1 Other ABC drug transporters such as Breast Cancer Resistance Protein (BCRP) and Multidrug Resistance-associated Protein 2 (MRP2 or ABC-C2) are also recognized for their potential for DDIs. 
The role of the uptake solute carrier transporters (eg, OATP, OCTP) in transporter-mediated drug interactions with EGFR-TKIs is less well defined. 5,69,70
(Notes to Table 1: +++, major metabolic route; ++, other significant metabolic route; +, minor metabolic route; -, no interaction. Abbreviations: w, weak; m, moderate; s, strong; NR, not reported; EGFR-TKI, epidermal growth factor receptor-tyrosine kinase inhibitor.)
Clinically relevant DDIs with selected EGFR-TKIs
Gefitinib
Gefitinib was the first oral quinazoline compound to be referred to as a "selective" EGFR-TKI. 71 In the US, gefitinib is indicated for first-line treatment of patients with metastatic NSCLC who harbor EGFR exon 19 deletions or exon 21 (L858R) substitution mutations.
Acid-reducing agents
Medications that cause significant sustained elevation in gastric pH, such as PPIs and H 2 -receptor antagonists, may reduce the bioavailability, plasma concentration, and efficacy of gefitinib. 72,73 High doses of short-acting antacids may have similar effects if taken regularly close to the time of administration of gefitinib. 73,74 In a recent preclinical study, treatment with omeprazole (10-100 mg/kg orally [p.o.]) and vonoprazan (1-5 mg/kg p.o.) produced significant dose-dependent increases in gastric pH, and the AUC 0-3h of gefitinib (5 mg/kg, p.o.) declined with increasing pH. 75 In healthy male volunteers, use of the rapidacting H 2 -receptor antagonist ranitidine (at a dose of 450 mg p.o., that increased gastric pH to ≥5 for ≥4 hrs), taken 1 hr before a 250 mg dose of gefitinib, markedly reduced gefitinib exposure (geometric least-squares mean AUC 0-∞ reduced by 47%; C max by 71%). 74,76 However, inhibitors of gastric secretion had no effect on the efficacy of gefitinib in patients with NSCLC or in those harboring EGFR-activating mutations. 77 Concomitant use of gefitinib with PPIs should be avoided but, if deemed essential, gefitinib should be taken 12 hrs before or after the PPI. 72 Similarly, gefitinib should be taken 6 hrs before or after an H 2 -receptor antagonist or an antacid. 72
CYP450-dependent metabolism
Gefitinib was metabolized at a similar rate when incubated in vitro with recombinant human CYP3A4 or CYP2D6, less efficiently with CYP3A5 and CYP1A1. 78 The range of metabolites produced by recombinant human CYP3A4 78 are similar to those generated by human liver microsomes, in which gefitinib was found to be rapidly and extensively metabolized. 79 O-desmethyl-gefitinib, the major metabolite of gefitinib in human plasma, 80 was formed mainly by recombinant human CYP2D6, 78 but in liver microsomes, O-desmethylgefitinib was only a minor product; 79 the metabolism of gefitinib was primarily dependent on CYP3A4 and was not notably reduced in microsomes from CYP2D6 poor metabolizers. 79 Gefitinib is excreted either unchanged or after metabolism. 73,79,80 Concomitant administration of gefitinib with strong CYP3A4 inhibitors may reduce metabolism and clearance of gefitinib, and may increase its plasma concentration. 72,73 This may be clinically important, as adverse reactions with gefitinib are related to dose and exposure. 73 In healthy volunteers, pretreatment with the potent CYP3A4 inhibitor itraconazole (200 mg QD for 12 days) prior to gefitinib (a single dose of 250 mg on day 4) led to an increase in gefitinib exposure (mean AUC) by up to 78%. 74,81 Patients receiving potent CYP3A4 inhibitors should be closely monitored for adverse reactions to gefitinib. 73 Notably, the effect of CYP3A4 inhibitors on gefitinib exposure may be greater in CYP2D6 poor metabolizers. 73 CYP3A4 inducers may increase gefitinib metabolism and reduce the plasma concentration and efficacy of gefitinib. 72,73 Thus, pretreatment with the strong CYP3A4 inducer rifampicin (600 mg QD for 16 days) prior to gefitinib (a single dose of 500 mg on day 10) led to a reduction in mean gefitinib AUC of up to 83%. 81 Administration of gefitinib with the moderate-to-strong CYP3A4 inducer phenytoin led to a 26% reduction in C max and a 47% reduction in AUC. 82 A potential interaction was also reported between gefitinib and herbal medicines including ginseng (a CYP3A4/5 inducer); the patient was a non-responder but became a partial responder after discontinuation of the herbal medicines. 83 A similar interaction might be expected with St. John's Wort (Hypericum perforatum), another CYP3A4 inducer. 1 Concomitant administration of gefitinib with CYP3A4 inducers (phenytoin, carbamazepine, rifampicin, barbiturates, St. John's Wort, ginseng) should, therefore, be avoided, as treatment efficacy may be reduced. 73 If use with a moderate-to-strong CYP3A4 inducer is essential, the dose of gefitinib should be increased to 500 mg/day (provided no severe adverse drug reactions are apparent). 72 The standard dose (250 mg/day) should be resumed after discontinuation of the CYP3A4 inducer. 72 Patients taking gefitinib with potent CYP3A4 inhibitors should be carefully monitored, due to the potential for toxicity, while those taking CYP3A4 inducers should be monitored for reduced efficacy. 5,72 As previously noted, gefitinib is mainly metabolized by CYP3A4, and to a lesser extent by CYP2D6, 79 although the impact of CYP2D6 inhibitors on gefitinib PK has not been evaluated. 5,73 The role of CYP2D6 in the clearance of gefitinib has been evaluated in healthy volunteers genotyped for CYP2D6 status. In CYP2D6 poor metabolizers, O-desmethyl gefitinib was unmeasurable in plasma (confirming that production of this metabolite is mediated by CYP2D6), and gefitinib exposure was twofold higher than in extensive metabolizers. 
72,84 The investigators suggested that the absence of metabolite is unlikely to be clinically relevant, as it contributes little to the overall activity of gefitinib, while poor metabolizers have higher exposure to unchanged gefitinib (this is also unlikely to lead to clinically significant changes in the safety and tolerability of gefitinib). 84 Consequently, prospective screening for CYP2D6 genotype is not warranted before starting gefitinib, and dose adjustments and changes in clinical management strategy are not required for CYP2D6 poor metabolizers. 72,73,84 Giving a potent CYP2D6 inhibitor concomitantly with gefitinib might also increase gefitinib exposure. Consequently, poor CYP2D6 metabolizers and patients who begin taking a CYP2D6 inhibitor together with gefitinib should be closely monitored for adverse reactions to gefitinib. 72,73 Gefitinib is also a weak inhibitor of CYP2D6 in vitro. 72,81 In patients with solid tumors, co-administration of gefitinib with the CYP2D6 substrate metoprolol led to a 35% increase in metoprolol exposure. 81 This effect may be relevant to CYP2D6 substrates with a narrow therapeutic index 73 as it may be necessary to modify the dose of such agents when used concurrently with gefitinib. 73 Smoking status is not a relevant consideration for gefitinib. 5
UDP-glucuronosyltransferases
Gefitinib demonstrated broad inhibition of UGT-mediated glucuronidation in vitro, particularly against the UGT1A1, UGT1A7, UGT1A9, and UGT2B7 isotypes. 63 The risk of potential DDIs in vivo was predicted by calculating the ratios between the area under the plasma concentration-time curve in the presence and absence of inhibitor (AUCi/AUC). For gefitinib, the AUC ratio at the highest evaluated dose (700 mg/day) was less than 1.3 for the substrates of each inhibited UGT isoform. While the authors acknowledged that in vivo DDIs extrapolated from in vitro data should be interpreted with caution, they concluded that the use of gefitinib is unlikely to lead to clinically significant DDIs via inhibition of glucuronidation. 63
Transporter proteins
In vitro evidence indicates that gefitinib is a substrate of P-gp, but according to the EMA assessment report for gefitinib, there is no evidence to suggest that this effect has clinical consequences. 74 Effects of other agents on P-gp are unlikely to influence gefitinib absorption, as P-gp is saturated at higher concentrations. 72 In one study, gefitinib was reported to directly inhibit the function of P-gp in multidrug-resistant lung and breast cancer cells. The authors suggested that gefitinib may inhibit the excretion of P-gp substrate drugs and that potential DDIs should be evaluated, 85 although there is no evidence that this effect is clinically relevant. In a preclinical study, simultaneous administration of gefitinib dramatically increased the oral bioavailability of irinotecan, 86 and in children with refractory solid tumors, use of gefitinib led to a fourfold increase in the bioavailability of oral irinotecan versus historical controls, and significantly reduced the clearance of irinotecan and its active metabolite, SN-38. The authors suggested that these effects may have occurred via inhibition of ABC-G2 by gefitinib. 87 Gefitinib has been shown to inhibit BCRP in vitro, 88 but the clinical importance of this effect is unknown. 5,73
Other clinically relevant interactions
Increases in the international normalized ratio (INR) and/or the rate of bleeding events have occurred in patients taking warfarin and gefitinib concomitantly. 72,73,89 Patients taking this combination should be monitored regularly for changes in prothrombin time or INR. 72,73 Concomitant use of sorafenib reduced gefitinib exposure (Cmax by 26%, AUC by 38%) via an unknown mechanism, whereas sorafenib exposure was unaffected. 90 In phase II clinical trials, concomitant use of gefitinib and vinorelbine exacerbated the neutropenic effect of vinorelbine, 73,91 although in a phase I/II trial of gefitinib plus vinorelbine and gemcitabine in patients with metastatic breast cancer the incidence of febrile neutropenia was not a major limiting factor. 92
Erlotinib
Erlotinib is a reversible EGFR-TKI that is approved by the FDA as first-line, maintenance, or second-line or subsequent treatment following progression after at least one prior chemotherapy regimen, in patients with metastatic NSCLC who harbor EGFR exon 19 deletions or exon 21 (L858R) substitution mutations, as detected by an FDA-approved test. 93
Acid-reducing agents
The solubility of erlotinib is pH-dependent and decreases above pH 5; 1,5,94 therefore, drugs that alter gastrointestinal pH could alter the solubility and absorption of erlotinib, leading to potentially clinically relevant changes in bioavailability. 94 In a recent preclinical study of concomitant treatment with omeprazole (10-100 mg/kg p.o.) and vonoprazan (1-5 mg/kg p.o.), both of which induced significant dose-dependent increases in gastric pH, the AUC 0-3h of erlotinib (5 mg/kg p.o.) decreased as pH increased. 75 Combining erlotinib with PPIs should be avoided. 93,94 In healthy volunteers, concomitant use of the PPI omeprazole (40 mg once daily [QD] for 7 days) led to a reduction in erlotinib exposure (46% reduction in AUC; 61% reduction in C max ). 95 Temporal separation of doses may not eliminate the interaction because PPIs affect the pH of the upper gastrointestinal tract for an extended period. 93 Co-administration of H 2 -receptor antagonists can also reduce the efficacy of erlotinib. Concomitant use of the H 2 -receptor antagonist ranitidine (300 mg QD for 5 days, given 2 hrs before erlotinib) led to a 33% reduction in erlotinib AUC and a 54% reduction in C max . 95 Increasing the dose of erlotinib is unlikely to compensate for such reductions in exposure, 94 but when dosing was staggered (ranitidine was given as a divided dose of 150 mg twice daily [BID] and erlotinib was given 10 hrs after the previous evening dose of ranitidine and 2 hrs before the next morning dose of ranitidine), the reduction in erlotinib exposure was much less marked (15% reduction in AUC; 17% reduction in C max ). 95 Consequently, if ranitidine coadministration is considered, it should be used in a staggered manner; ie, erlotinib must be taken at least 2 hrs before or 10 hrs after the dose of ranitidine. 93,94 A retrospective analysis of 190 patients with advanced NSCLC indicated that concomitant use of gastric acid suppressants had no significant effect on plasma concentrations of erlotinib, progression-free survival (PFS) or overall survival (OS). 96 However, in another retrospective review of 544 patients with advanced NSCLC treated with erlotinib, both PFS and OS were significantly reduced in patients who took gastric acid suppressants compared with those who did not (median PFS 1.4 vs 2.3 months, p<0.001; median OS 12.9 vs 16.8 months, p=0.003). 97 The authors did not speculate about the reasons for the divergent results. According to the product label, if required, antacids should be taken at least 4 hrs before or 2 hrs after erlotinib. 94 Subsequently, two retrospective studies in patients with EGFR mutations receiving either of the first-generation EGFR-TKIs (erlotinib or gefitinib) also reported that use of acid suppressants had no adverse effects on median PFS and median OS. 98,99 Recently, however, a further retrospective observational study of NSCLC patients taking erlotinib or gefitinib found that median PFS was 84 days in patients taking acid suppressants, compared with 221 days in those not taking acid suppressants (p<0.0001); the type of acid suppressant used did not seem to be important. 100 In the earlier studies, the presence of activating EGFR mutations in a proportion of patients may have conferred increased sensitivity to EGFR TKIs. Consequently, when erlotinib and gefitinib were given concomitantly with an acid suppressant, despite the reduction in bioavailability, the concentrations of erlotinib and gefitinib achieved in plasma may have been sufficient to inhibit mutant EGFR. 
98,100 The discrepancy between the outcomes of the earlier studies of erlotinib 96,97 might also be explained by a difference in the proportion of patients with EGFR mutations between the two studies, ie, in the study of Chu et al, the majority of patients did not exhibit EGFR mutations, so the outcomes of erlotinib therapy may have been more sensitive to differences in plasma levels of erlotinib. 97
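As a rough, purely illustrative calculation (not a dosing tool), the AUC reductions quoted above can be restated as the fraction of usual erlotinib exposure that remains, which makes the contrast between simultaneous and staggered acid-suppressant dosing explicit; the percentages are taken from the studies cited above and nothing else is assumed.

```python
# Illustrative only: restate the reported AUC reductions as remaining exposure.
reported_auc_reductions = {
    "omeprazole 40 mg QD, co-administered": 0.46,          # 46% reduction in AUC
    "ranitidine 300 mg QD, 2 hrs before erlotinib": 0.33,  # 33% reduction in AUC
    "ranitidine 150 mg BID, staggered dosing": 0.15,       # 15% reduction in AUC
}

for regimen, reduction in reported_auc_reductions.items():
    remaining = 1.0 - reduction
    print(f"{regimen}: about {remaining:.0%} of the usual erlotinib AUC remains")
```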
CYP450-dependent metabolism
The metabolism of erlotinib is mediated predominantly by CYP3A4/3A5 in liver and intestine, and to a lesser extent by CYP1A2 and CYP2C8, as well as extra-hepatically by pulmonary CYP1A1 and CYP1B1 in tumor tissue. 78,101 The active metabolite, O-desmethyl erlotinib, subsequently undergoes oxidation and glucuronidation. 5,78,101 Extrahepatic metabolism by CYP3A4 in the intestine, CYP1A1 in the lungs, and CYP1B1 in tumor tissue may also contribute to the clearance of erlotinib. 94 Potent inhibitors of CYP3A4 reduce erlotinib metabolism, leading to an increase in plasma erlotinib concentrations. 94 Thus, concomitant use of erlotinib with the potent CYP3A4 inhibitor ketoconazole (200 mg p.o., BID for 5 days) led to an 86% increase in erlotinib AUC. 94,102 In an open-label, crossover study in male and female healthy volunteers, co-administration of erlotinib (100 mg p.o. on days 1 and 15) with the combined CYP3A4 and CYP1A2 inhibitor ciprofloxacin (750 mg BID on days 13-18) led to a 39% increase in erlotinib AUC and a (non-significant) 17% increase in C max . 93,103 The EU label advises caution when combining erlotinib with ciprofloxacin or potent CYP1A2 inhibitors such as fluvoxamine; if severe adverse events occur, the dose of erlotinib should be reduced. 94 The US label recommends against concurrent use of erlotinib with strong CYP3A4 inhibitors or CYP3A4/1A2 inhibitors, but if this is unavoidable, and if severe adverse reactions occur, it suggests reducing the dose of erlotinib in 50-mg steps. 93 Potent CYP3A4 inducers increase erlotinib metabolism and reduce plasma erlotinib concentrations. 104 Thus, pretreatment with rifampicin (600 mg p.o., QD for 7 days) led to a 69% reduction in the median AUC of erlotinib. 94 Concomitant use of erlotinib with CYP3A4 inducers should be avoided; alternatively, the use of a higher dose of erlotinib may be considered (300 or 450 mg, compared with the standard dose of 150 mg). 93,94 Patient safety should be closely monitored (including renal and liver function and serum electrolytes). 94 Other strong and moderate CYP3A4 inducers (eg, enzalutamide, phenytoin, carbamazepine, barbiturates, and herbal preparations containing St. John's Wort) may also reduce erlotinib exposure. 1,94 Caution is advised when using these agents concomitantly with erlotinib, and alternative treatments should be considered when possible. 94 Cigarette smoking has been shown to markedly reduce erlotinib exposure via an increase in CYP1A1/1A2 activity. In healthy volunteers, the geometric mean of erlotinib AUC 0-∞ following a single 150 mg dose was 2.8-fold lower in smokers than in non-smokers, and was similar to that in non-smokers after a dose of 300 mg. C max in smokers was two-thirds of that in non-smokers, and C 24h was 8.3-fold lower than in non-smokers. 105 In patients with solid tumors, clearance of erlotinib was 24% faster in current smokers than former smokers/neversmokers. 106 The increase in clearance seems to be related to induction of CYP1A1/1A2 in smokers. 2 In current smokers with NSCLC, the response rate to erlotinib was markedly lower than that in never-smokers (3.9% vs 24.7%; p<0.001). 107 In another study, the maximum tolerated dose (MTD) of erlotinib in current smokers with NSCLC was 300 mg/day 108 (ie, double the MTD previously established in unselected patients), although the authors did not speculate on the underlying mechanisms. 
109 However, the efficacy and long-term safety of doses greater than the recommended starting doses have yet to be established in patients who continue to smoke. 94 In current smokers with locally advanced or metastatic NSCLC, a dose of erlotinib of 300 mg/day led to higher plasma concentrations than were achieved by the standard dose of 150 mg/day, but no incremental efficacy benefit was demonstrated. 110 Patients should be encouraged to stop smoking as soon as possible before initiating erlotinib. 94 The US label advises increasing the dose of erlotinib in current smokers to 300 mg (maximum), returning immediately to the recommended dose (150 or 100 mg/day) on cessation of smoking. 93 The US label also recommends against the use of erlotinib together with moderate CYP1A2 inducers. 93 Erlotinib is itself a potent inhibitor of CYP1A1 and a moderate inhibitor of CYP3A4 and CYP2C8. 94 The physiologic relevance of CYP1A1 inhibition by erlotinib is unclear, given the limited expression of CYP1A1 in humans. 94 Pretreatment with, or co-administration of, erlotinib did not alter the clearance of the CYP3A4 substrates midazolam and erythromycin, but reduced the oral bioavailability of midazolam. 94 In another study, concomitant use of erlotinib did not affect the PK of paclitaxel (a CYP3A4/2C8 substrate). 111 While the EU label suggests that clinically relevant effects of erlotinib on the PK profiles of other CYP3A4 substrates are unlikely, 94 case reports suggest a need for caution during use of erlotinib together with CYP3A4 or CYP2C8 substrates. These include a case of rhabdomyolysis due to increased simvastatin exposure in a patient receiving concomitant erlotinib. 112 In another patient, toxicities that occurred during use of phenytoin were exacerbated following addition of erlotinib. 113 Clinicians are advised to be aware of these potential interactions when combining these drugs with erlotinib and to proceed with caution. 1
UDP-glucuronosyltransferases
Erlotinib is a selective and potent competitive inhibitor of glucuronidation by UGT1A1 in vitro and exerts potent mixed inhibition of bilirubin glucuronidation in human liver microsomes. 63 Based on these findings, coadministration of erlotinib (≥100 mg/day) is predicted to increase the AUC of drugs predominantly cleared by UGT1A1 by ≥30% and to cause clinically significant DDIs when given with such agents. 63 Cheng et al (2017) found erlotinib to be a potent noncompetitive inhibitor of UGT1A1. 62 Patients with low UGT1A1 expression or genetic glucuronidation disorders (eg, Gilbert's disease) could develop high serum concentrations of bilirubin and must be treated with caution. 94
Transporter proteins
In vitro transport studies demonstrated that erlotinib is a substrate for, and inhibitor of, both P-gp and BCRP. [114][115][116] Concomitant administration of P-gp inhibitors such as cyclosporine and verapamil may lead to altered distribution and/or altered elimination of erlotinib. 94 The clinical relevance of this interaction is unclear, 5 but clinicians should be aware of the potential for an increase in adverse events when using erlotinib in the presence of P-gp inhibitors. 94
Erlotinib and its active metabolite OSI-420 are substrates for human organic anion transporter 3 (OAT3) and, to a lesser extent, organic cation transporter 2 (OCT2) 116 but the clinical implications of these properties have not been fully elucidated. 2
Other clinically relevant interactions
Erlotinib can increase the INR in patients taking warfarin. 117 Bleeding events have been reported (including peptic ulcer bleeding, hematemesis, hematochezia, melena, and hemorrhage from possible colitis), 93 some of which were fatal. 94 Patients taking erlotinib with coumarin-derived anticoagulants such as warfarin should be monitored regularly for changes in prothrombin time or INR. 93,94 Dose modifications are not recommended for erlotinib. 93 Concomitant use of capecitabine may increase plasma erlotinib concentrations. When combined with capecitabine, there was a significant increase in erlotinib AUC and a borderline increase in C max (compared with concentrations measured in another study of erlotinib monotherapy). 94 In patients with advanced solid tumors, carboplatin exposure was reported to increase when administered concomitantly with erlotinib. 111 However, in an intensive PK study of patients with advanced NSCLC who had participated in a phase III trial of first-line erlotinib plus chemotherapy, the use of erlotinib did not alter systemic exposure of paclitaxel and carboplatin compared with that in the placebo group. 118
Afatinib
Afatinib is an oral, irreversible inhibitor of the ErbB family of tyrosine kinases. Afatinib downregulates ErbB signaling by covalently binding to the kinase domains of EGFR, HER2, and HER4, leading to irreversible inhibition of tyrosine kinase autophosphorylation; afatinib also inhibits transphosphorylation of HER3. 119 Afatinib is approved by the FDA for first-line treatment of patients with metastatic NSCLC who harbor nonresistant EGFR mutations, as detected by an FDA-approved test. It is also approved for locally advanced or metastatic NSCLC of squamous histology progressing after platinum-based chemotherapy. 120,121
Acid-reducing agents
Afatinib is highly soluble throughout the physiologic pH range (1-7.5). 122 Consequently, interactions with acid-reducing drugs are not expected. 5
Cytochrome P450-dependent metabolism
Afatinib undergoes minimal biotransformation, and oxidative CYP-mediated metabolism is of negligible importance. 5,123 Metabolism is mainly governed by non-enzyme catalyzed formation of adducts to proteins and nucleophilic small molecules. 5,123 Consequently, DDIs arising from inhibition or induction of CYP450 enzymes by concomitant medications are unlikely to occur. 121 Smoking status has no significant effect on exposure to afatinib. 5,124
Transporter proteins
Afatinib is a substrate and inhibitor of P-gp in vitro, 5,125 and concomitant use of strong P-gp inhibitors can increase exposure to afatinib. 120,121 In healthy subjects, ritonavir (a strong inhibitor of P-gp and BCRP) given simultaneously or 6 hrs after a single 40 mg dose of afatinib led to minimal increases in afatinib AUC 0-∞ and C max (by 5% and 11% respectively). 125 However, in a second study, ritonavir given 1 hr before a single 20 mg dose of afatinib led to a 48% increase in afatinib AUC 0-∞ and a 39% increase in C max . 125 Conversely, strong P-gp inducers can reduce exposure to afatinib. 120 In healthy subjects, pretreatment with the potent P-gp inducer rifampicin (600 mg QD for 7 days) before a single 40 mg dose of afatinib led to a reduction in plasma exposure (34% reduction in AUC 0-∞ and 22% reduction in C max ). 121,125 For patients taking afatinib who require treatment with a P-gp inhibitor, the EMA label recommends using staggered dosing to maximize the interval between the doses of afatinib and the P-gp inhibitor (preferably 6 hrs for P-gp inhibitors dosed BID and 12 hrs for those given QD). 120 According to the US label, if a patient taking a concomitant P-gp inhibitor experiences toxicities while taking afatinib, their clinician may reduce the afatinib dose by 10 mg, and resume the original dose after discontinuation of the P-gp inhibitor, provided tolerability is acceptable. 121 For those taking a P-gp inducer, the afatinib dose may be increased by 10 mg, subject to tolerability, and the original dose may be resumed 2-3 days after the P-gp inducer is discontinued. 121 Afatinib is a moderate inhibitor of P-gp in vitro, 5,125 but clinical data suggest that changes in plasma concentrations of other P-gp substrates are unlikely to occur due to concomitant administration of afatinib. 120,126 Afatinib is both a substrate and an inhibitor of BCRP in vitro 120,121,126 and may increase the bioavailability of BCRP substrates administered orally, such as rosuvastatin and sulfasalazine. 120
Osimertinib
Osimertinib is a third-generation potent irreversible EGFR-TKI that has efficacy in patients with advanced NSCLC with EGFR mutations (both sensitizing/activating mutations (del19/L858R) and T790M resistance mutations). 47 In the USA, osimertinib is indicated for the treatment of patients with metastatic EGFR T790M mutation-positive NSCLC, as detected by an FDA-approved test, who have progressed on or after EGFR TKI therapy, and also for first-line treatment of patients with metastatic NSCLC whose tumors have EGFR exon 19 deletions or exon 21 L858R mutations, as detected by an FDA-approved test. 127 Similarly, in the EU, osimertinib is indicated for the treatment of adults with locally advanced or metastatic EGFR T790M mutation-positive NSCLC; also for first-line treatment of adults with locally advanced or metastatic NSCLC with activating EGFR mutations. 128
Acid-reducing agents
In a preclinical study, the AUC 0-3h of osimertinib (5 mg/ kg p.o.) was not significantly affected by concomitant omeprazole (10-100 mg/kg p.o.) or vonoprazan (1-5 mg/ kg p.o.), both of which caused significant dose-dependent increases in gastric pH. 75 In an open-label study in healthy male volunteers (n=68), co-administration of omeprazole did not significantly alter osimertinib exposure: the geometric least-squares mean ratio [90% CI] for AUC was 107% [100-113%] and for C max was 102% [95-109%]. 129 In patients whose gastric pH may be altered by concomitant agents or medical conditions, dose modifications are not required for osimertinib. 129 Gastric pH-modifying agents can be used with osimertinib without restriction. 128
CYP450-dependent metabolism
In vitro studies indicate that osimertinib is predominantly metabolized by CYP3A4/5 and is a weak inducer of CYP3A. 130 Hence, modulators of CYP3A could impact osimertinib metabolism, while osimertinib may alter the exposure of other CYP3A substrates. 130 Drug interaction studies with inhibitors, inducers, or substrates of CYP enzymes and transporters have not been conducted systematically for osimertinib. 127 The effect of strong CYP3A4 inhibitors and inducers on the PK of osimertinib in patients with advanced NSCLC was investigated in two open-label studies. 131 In the first study of 36 patients, concomitant use of the strong CYP3A4 inhibitor itraconazole (200 mg BID; days 6-18) together with osimertinib (80 mg/day, days 1 and 10) had no clinically significant effect on osimertinib exposure; AUC increased by 24% and C max decreased by 20% versus osimertinib given alone 128,131 (the upper bounds of the 90% CIs of the geometric mean least square mean treatment ratios [itraconazole + osimertinib/osimertinib alone] for AUC and C max were both below the pre-specified "no-effect" limit of 200%). Similarly, there were no clinically relevant changes in exposure parameters for the active metabolite of osimertinib, AZ5104. The authors suggested that preclinical hepatocyte and recombinant CYP studies 132 may have overestimated the contribution of cytochrome P450 metabolism to the clearance of osimertinib in the clinic, whereas the availability of multiple elimination pathways for osimertinib might also explain the lack of significant effects of itraconazole. 131 The minor reduction in osimertinib C max and increase in AUC were interpreted as being due to inhibition of CYP3A by itraconazole, leading to changes in elimination of osimertinib and its metabolites. Osimertinib t max was 2 hrs longer in patients taking osimertinib plus itraconazole than those taking osimertinib alone (p=0.0002), which suggests that concomitant use of itraconazole may alter the absorption of osimertinib. 131 In the second study (n=40), concomitant use of osimertinib (80 mg/day, days 1-77) with the CYP3A4 inducer rifampicin (600 mg/day, days 29-49) led to a 78% reduction in osimertinib AUC as well as an 82% reduction in AUC and 78% reduction in C max of AZ5104, the metabolite of osimertinib. 128,131 Although the proportion of white patients was slightly higher in the rifampicin study than the itraconazole study, the authors felt that this was unlikely to have affected the results given that osimertinib exposure does not appear to be affected by ethnicity. 133 Consistent with the US and EU labels, 127,128 the researchers concluded that osimertinib can be given concurrently with CYP3A4 inhibitors but that strong CYP3A inducers should be avoided if possible. 131 Concomitant use of St. John's Wort with osimertinib is specifically contraindicated in the EU. 128 The US label advises that if concurrent use of a strong CYP3A4 inducer is unavoidable, the dose of osimertinib should be increased to 160 mg/day; the standard dose (80 mg/day) may be resumed 3 weeks after discontinuation of the CYP3A4 inducer. 127 Moderate CYP3A4 inducers may also reduce osimertinib exposure, so should be used with caution or avoided if possible. 128 No dose adjustments are required when osimertinib is used with moderate and/or weak CYP3A inducers. 
127 In patients with EGFR mutation-positive NSCLC following disease progression on a prior EGFR-TKI, daily administration of osimertinib increased rosuvastatin exposure but had minimal effects on exposure of the sensitive CYP3A4 substrate, simvastatin. 134 Clinically relevant interactions between osimertinib and CYP3A4 substrates are therefore unlikely. 128 In a population PK analysis based on data from 780 patients, Brown et al (2017) found that smoking status had no significant effect on osimertinib PK (dose-normalized AUC at steady state), 133 which suggests that CYP1A1 induction does not have a major effect on osimertinib metabolism. Only 3% of the patients were current smokers, which limited the strength of the analysis. Nevertheless, no dosage adjustments are required when treating current smokers with osimertinib. 128
UDP-glucuronosyltransferases
Based on in vitro studies, osimertinib is not an inhibitor of UGT1A1 or UGT2B7 at clinically relevant concentrations. Intestinal inhibition of UGT1A1 is possible, but the clinical impact is unknown. 128
Transporter proteins
Osimertinib is a substrate of P-gp and BCRP in vitro, 135 but this is unlikely to lead to clinically significant DDIs at clinically relevant doses. 128 Osimertinib is a competitive inhibitor of BCRP transporters in vitro, 128,130 and it may, therefore, increase the exposure of BCRP substrates. 128,130 Concomitant administration of osimertinib led to a 35% increase in the AUC and a 72% increase in the C max of rosuvastatin (a sensitive BCRP substrate). 134 Patients taking osimertinib with medications that have a narrow therapeutic index and BCRP-dependent disposition should be closely monitored for changes in tolerability due to increased exposure of those medications. 127,128 At clinically relevant concentrations, osimertinib is not a substrate or inhibitor of OATP1B1 or OATP1B3 in vitro. 128
Icotinib
Icotinib is a second-generation reversible EGFR-TKI, approved by the China Food and Drug Administration (CFDA) for the treatment of advanced NSCLC following progression on at least one platinum-based chemotherapy. 18,29,136,137 To date, relatively few studies have reported on potential DDIs with icotinib. 138
CYP450-dependent metabolism
A preclinical pharmacokinetic study and a clinical mass balance study showed that more than 90% of icotinib is eliminated by hepatic metabolism, primarily via CYP450 enzymes; four to six main metabolites were identified. 139 The main enzymes responsible for icotinib metabolism are CYP3A4, CYP2C19, CYP3A5, and CYP1A2. 138,139 According to Shi et al (2013), the involvement of several enzymes in the metabolism of icotinib means that accumulation of the drug is limited, and explains its relatively short half-life (6 h 140 ), which is one of the main differences between icotinib and the other EGFR-TKIs. 40 Zhang et al (2018) recently examined the formation of icotinib metabolites by recombinant CYP isozymes in human liver microsomes, to identify the enzymes responsible for icotinib metabolism. 138 The metabolic pathways identified in vitro predominantly involved CYP3A4 (accounting for 77-87% of icotinib metabolism), CYP3A5 (5-15%), and CYP1A2 (3.7-7.5%). Metabolism of icotinib via CYP450 2C8, 2C9, 2C19, and 2D6 was insignificant. The authors recommended that clinicians should consider the risk of DDIs when prescribing icotinib with strong CYP3A inhibitors or inducers. 138 Induction of CYP1A2 in lung cancer patients with smoking history may also contribute to the PK and pharmacologic variability of icotinib. 138 Chen et al (2015) used a physiologically based PK model (validated using data from a phase I trial of icotinib in healthy Chinese subjects) to simulate DDIs with ketoconazole and rifampin (a potent CYP3A4 inhibitor/inducer, respectively). The model-predicted exposure (AUC) for icotinib was higher when given with ketoconazole (400 mg) and lower when given with rifampin (600 mg) than when given alone; the AUC ratios for icotinib during concomitant use of ketoconazole and rifampin were 3.22 and 0.55, respectively. 139
UDP-glucuronosyltransferases
Cheng et al (2017) 62 found both icotinib and erlotinib to be non-competitive inhibitors of UGT1A1, but the effect of icotinib was weaker than that of erlotinib (IC 50 for inhibition of UGT1A1-mediated NCHN-O-glucuronidation in human liver microsomes was 5.15 μmol/L for icotinib vs 0.68 μmol/L for erlotinib). The authors concluded that use of icotinib is unlikely to lead to clinically significant DDIs due to inhibition of UGT1A1. 62
DDIs: aids for clinical decision-making
The growing awareness of the importance of DDIs in the treatment of cancer patients has been reflected by increases in the number and scope of the sources of information and guidance available to aid clinical decision-making. 141 Some useful online resources are now available.
"Oncology in Practice" (http://oncologypro.esmo.org/ Oncology-in-Practice) from the European Society of Medical Oncology includes an overview of the main DDIs for the most frequently used TKIs, prophylaxis and treatment of these DDIs, and information for patients. "Drugs.com" includes a drug interaction checker at https://www.drugs.com/drug_interactions.html, which allows users to specify the agents in a prescription and to obtain information on potential interactions. "SiteGPR" (http://sitegpr.com/fr/) provides evidence-based advice (in French) on dose adjustments for patients with renal impairment, including dose adjustments due to DDIs. 141 " C a n c e r D r u g I n t e r a c t i o n s " ( h t t p s : / / c a n c e rdruginteractions.org/), which is endorsed by the British Oncology Pharmacy Association, allows users to select from a list of anticancer drugs and commonly prescribed concomitant medications and obtain information on whether a DDI is likely, together with the rationale and quality of evidence. Watson for Oncology (https://www. ibm.com/watson/health/oncology-and-genomics/oncology/ ) is an artificial intelligence system that extracts data from medical records and provides evidence-based treatment options tailored to the individual patient. 142,143 Lexicomp Online (http://www.lexi.com) and Micromedex 2.0 (http:// micromedex.com) both include interactive tools for evaluation of drug interactions. 144 Interestingly, Muhič et al (2017) found that different DDI screening systems may differ significantly in their ability to detect clinically relevant DDI-related adverse drug reactions. 145 Notably, pharmacy information experts recommend that to address such questions, multiple sources of information should be consulted. 144
Summary and recommendations for clinical practice
The development of the EGFR-TKIs has changed the therapeutic landscape of NSCLC and raised expectations among both patients and physicians. The introduction of these drugs into clinical practice presents challenges for physicians, however, not least due to the risk of DDIs with some agents. When formulating individualized therapeutic strategies for their patients, physicians should be aware of how differences between the PK properties of the different EGFR-TKIs may affect the potential for DDIs and, consequently, the efficacy, optimum dose, and tolerability of the treatment regimen. Moreover, given that cancer patients are often highly polymedicated, physicians must always bear in mind the potential impact of concomitant medications when selecting treatment and addressing the management of side effects. The most important interactions for physicians to be aware of, in terms of their significance to the treatment of patients with NSCLC, are described in Table 2; recommended approaches to managing such interactions are summarized below.
Clinically significant interactions with acid-suppressive drugs (PPIs, H 2 -receptor antagonists, and antacids) have been demonstrated for EGFR-TKIs that exhibit pH-dependent solubility (ie, gefitinib and erlotinib). When used concomitantly, bioavailability may be reduced, to such an extent that clinical efficacy may be significantly impaired. As patients taking EGFR-TKIs often experience gastrointestinal side effects and routinely use acid-reducing agents for palliation of gastro-esophageal reflux, dyspepsia, gastritis, and mucositis, these DDIs are clinically relevant. If concomitant use is unavoidable, then staggering the dose of the EGFR-TKI and the acid suppressant by several hours may help to reduce the extent of the interaction. Another option would be to prescribe afatinib, which is not subject to this type of interaction (no information is available for icotinib). To manage a clinically significant DDI, a twice-daily PPI could be replaced by a once-daily regimen.
Giving the EGFR-TKI 2 hrs before the PPI (and using an enteric-coated formulation of the PPI) should optimize absorption of the EGFR-TKI. Physicians should exercise caution when prescribing a known CYP enzyme inhibitor or inducer, as concomitant drugs may need to be substituted or the doses adjusted to account for potential reductions or increases in CYP enzyme-mediated metabolism. In addition to the agents mentioned earlier, commonly used CYP450 inhibitors include: amiodarone, cimetidine (1A2); clarithromycin, diltiazem, grapefruit juice, telithromycin (3A4/3A5); amiodarone, fluconazole, fluoxetine, metronidazole, trimethoprim, sulfamethoxazole (2C9); isoniazid (2C19); and amiodarone, cimetidine, diphenhydramine, fluoxetine, paroxetine, quinidine, terbinafine (2D6). Furthermore, phenobarbital is an inducer of CYP1A2, 3A4/5, and 2C9. A table of important substrates, inhibitors, and inducers (with direct links to PubMed citations) is continually updated by Indiana University School of Medicine and can be accessed at https://drug-interactions.medicine.iu.edu/Home.aspx. 146 Gefitinib, erlotinib, osimertinib, and icotinib are predominantly metabolized by CYP3A4; consequently, concomitant administration with a potent CYP3A4 inhibitor may substantially increase plasma concentrations of the EGFR-TKIs. Conversely, co-administration with a strong CYP3A4 inducer may increase EGFR-TKI metabolism, reduce plasma concentrations, and consequently reduce efficacy. Clinicians should take care when treating patients with CYP3A4 inducers, and if an interaction is anticipated, concomitant administration should be avoided if possible.
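To make the screening idea concrete, the toy sketch below places a few of the inhibitors and inducers named in this section into plain Python sets and flags them in a medication list. It is only an illustration of the look-up logic; the lists are deliberately incomplete, and any real check should rely on a curated resource such as the Indiana University table or the tools described earlier.

```python
# Toy illustration of co-medication screening; NOT a clinical decision tool.
# Only example drugs named in the surrounding text are included.
CYP3A4_INHIBITORS = {"ketoconazole", "itraconazole", "clarithromycin", "diltiazem", "telithromycin"}
CYP3A4_INDUCERS = {"rifampicin", "phenytoin", "carbamazepine", "enzalutamide", "phenobarbital"}

def flag_comedications(medications):
    """Return crude DDI flags for a list of lower-case co-medication names."""
    flags = []
    for drug in medications:
        if drug in CYP3A4_INHIBITORS:
            flags.append((drug, "CYP3A4 inhibitor: EGFR-TKI exposure may increase"))
        if drug in CYP3A4_INDUCERS:
            flags.append((drug, "CYP3A4 inducer: EGFR-TKI exposure may decrease"))
    return flags

print(flag_comedications(["rifampicin", "ketoconazole", "metformin"]))
```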
Plasma concentrations of erlotinib are markedly reduced in smokers; therefore, while receiving erlotinib, current smokers should be advised to stop smoking. For those who continue to smoke, it may be necessary to increase the dose of erlotinib (to a maximum of 300 mg/day). Induction of CYP1A2 in smokers may also influence the metabolism of icotinib.
Gefitinib, erlotinib, osimertinib, and afatinib are substrates for the drug transporter P-gp in vitro (no information is available for icotinib), but clinical findings indicate that clinically relevant DDIs may occur with afatinib only (not with gefitinib, erlotinib or osimertinib). Clinicians should consider staggering or adjusting the dose of afatinib when used in combination with a P-gp inhibitor or inducer. | 2019-08-02T13:24:58.002Z | 2019-07-09T00:00:00.000 | {
"year": 2019,
"sha1": "386c8fbe2c41f1a27dcde15a6d0ddb60f34e930e",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=51080",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0128f45c3824e071e42a6137f4061333a345a7d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119611936 | pes2o/s2orc | v3-fos-license | Homogenisation of parabolic/hyperbolic media
We consider an evolutionary problem with rapidly oscillating coefficients. This causes the problem to change frequently between a parabolic and an hyperbolic state. We prove convergence of the homogenisation process in the unit square and present a numerical method to deal with approximations of the resulting equations. A numerical study finalises the contribution.
Introduction
In the present article, we discuss an academic example of a partial differential equation with highly oscillatory change of type. In real-world applications this change of type can be observed, when discussing a solid-fluid interaction model. In these kind of models, the solid is modelled by a (hyperbolic) elasticity equation and the fluid is of parabolic type.
An example of the equations to be studied is the following system of equations in the unit square Ω = (0, 1)². Ω should be thought of as a chessboard-like structure with Ω_w being the white areas and Ω_b being the black areas; u satisfies natural transmission conditions on the interfaces. Our aim is to study the limit of the white and black squares' diameters tending to zero.
We shall present a convergence estimate for this homogenisation problem as well as a numerical study. Equations with change of type (ranging from elliptic to parabolic to hyperbolic) can be treated with the notion of so-called evolutionary equations, which are due to Picard [7], see also [8]. The notion of evolutionary equations is an abstract class of equations formulated in a Hilbert space setting and comprises partial differential-algebraic problems and may further be described as implicit evolution equation, hence the name 'evolutionary equations'.
More precisely, given a Hilbert space H and bounded linear operators M_0, M_1 ∈ L(H) as well as a skew-self-adjoint operator A in H, we consider the problem of finding U : R → H for some given right-hand side F : R → H such that (∂_t M_0 + M_1 + A)U = F, where ∂_t denotes the time derivative. The solution theory for this equation is set up in an exponentially weighted Hilbert space describing space-time. We shall specify the ingredients in the next section. For evolutionary equations, a numerical framework has been developed in [4]. In particular, this numerical treatment allows for equations with change of type.
Qualitatively, problems with highly oscillatory change of type (varying in between elliptic/parabolic/hyperbolic) have been considered in [10] in a one-dimensional setting. For a higher dimensional setting of highly-oscillatory type in the context of Maxwell's equations, we refer to [11]. For a solid-fluid interaction homogenisation problem with oscillations between hyperbolic and parabolic parts we refer to [3].
A quantitative result for equations with change of type has been obtained in [1,5]. In the latter reference, we have employed results and techniques stemming from [2] to transfer operator-norm estimates on (static) problems posed on R n to corresponding estimates for periodic time-dependent problems on the one-dimensional unit cell.
The present contribution is very much in line with the approach presented in [5]. The major difference, however, is the transference to a higher-dimensional setting.
For the sake of the argument, we restrict ourselves to two spatial dimensions. The higher-dimensional case is then adopted without further difficulties.
We shortly comment on the organisation of this paper. We start by presenting the analytical background in the next section. In this section, we shall also derive the necessary convergence estimates for the homogenisation problem.
Our numerical approach will be provided in Section 3. We conclude the article with a small case study.
Analytical background
In this section, we rephrase and summarise some results from [2]. The key ingredients are [2, Theorem 3.9] as well as [2,Proposition 3.16].
First of all, we properly define the operators involved. Let Ω = (0, 1)². Then we define the gradient operator grad on its natural domain and set div_# := −grad*. It is easy to see that grad_# := −div_#* is a well-defined operator extending grad. Next, let s_0, s_1 : R² → C be measurable, bounded, (0, 1)²-periodic functions satisfying s_0(x) = s_0(x)* ≥ 0 for all x ∈ R² and ρ_0 s_0(x) + ℜ s_1(x) ≥ c for some ρ_0 ≥ 0 and c > 0 and all x ∈ R². We define M_0 in terms of s_0 and define M_1 similarly, replacing s_0 by s_1. Note that we then have the required positive definiteness; furthermore, M_0 is self-adjoint. A straightforward application of [7, Solution Theory] leads to the following result. We recall that ∂_t is the distributional derivative with respect to the first variable in the space L²_ρ(R; H). It will be obvious from the context which ρ and which Hilbert space H is chosen.
In the next theorem, we have H = L 2 (Ω ) 3 .
Remark 2. Note that it can be shown ([10, Remark 2.3]) that if
Substituting the second equation into the first one, we obtain Next, we aim to study the limit behaviour of S N , which is given as S but with s 0 (N·) and s 1 (N·) respectively replacing s 0 and s 1 . In particular, our aim is to establish the following theorem. For this, we define endowed with the graph norm of ∂ k t acting as an operator from L 2 ρ (H) into itself. It can be shown that given ρ > ρ 0 that ∂ t is continuously invertible in L 2 ρ (H); so that u → ∂ k t u is equivalent to the graph norm on H k ρ (H). Theorem 3. Let ρ > ρ 0 . There exists κ ≥ 0 such that for all N ∈ N and f ∈ H 2 ρ (L 2 (Ω )) we have In order to prove this theorem, we need to introduce the Fourier-Laplace transformation: Let H be a Hilbert space. For φ ∈ C c (R; H) we define A variant of Plancherel's theorem yields that L ρ extends to a unitary operator from where m is the multiplication by argument operator in L 2 (R; H) with maximal domain; see [6, Corollary 2.5]. Thus, applying the Fourier-Laplace transformation to the norms on either side of the inequality in Theorem 3, we deduce that it suffices to show that there exists κ ≥ 0 such that for all N ∈ N, z ∈ C ℜ≥ρ and f ∈ L 2 (Ω ) we have This inequality will be shown using the results of [2]. For this we need some auxiliary statements. Proof.
(b) The mapping G N := V N T N is unitary.
The mapping G_N in the previous theorem is also called the Floquet-Bloch or Gelfand transformation. With this transformation at hand, we are in the position to transform the inequality in (2) into an equivalent form such that [2, Section 2] is applicable. The reason is the following representation.
Proposition 8 ([2] and [5]). Let N ∈ N, k ∈ {0, . . . , N − 1}² and θ := 2πk/N, f ∈ L²(Ω). Then we have a representation in which div_θ and grad_θ as well as ι_θ are given as in [2, Section 3].
Proof. Let (u, q) be given in terms of z s_0(N·) + s_1(N·) as in the statement. Applying G_N to zq = −grad_# u, we obtain the claimed representation, which yields the assertion. Now, along the lines of [2, Section 3] it is possible to show the following result, which eventually implies Theorem 3.
Numerical method
In this whole section, we address solving the equation The analytical results (Theorem 1 and Remark 2) state that given 3 . We will use a discontinuous Galerkin method in time and a conforming Galerkin method in space. For that let 0 = t 0 < t 1 < · · · < t M = T be a mesh for the time interval . . , N} and y j = 1 N , j ∈ {0, . . . , N}. Again a non-equidistant tensor product mesh with different mesh-sizes in the different dimensions is also possible.
We will approximate U = (u, v) using piecewise polynomials, globally discontinuous in time and piecewise polynomials, globally continuous (H 1 -conforming) in space for u and globally H(div)-conforming for v. Thus our discrete space is given by where the spatial spaces are Here, P q (I m , H) is the space of polynomials of degree up to q on the interval I m with values in H and Q p (K i j ) is the space of polynomials with total degree up to p on the cell K i j ⊆ Ω . Furthermore, RT p−1 (K i j ) is the Raviart-Thomas space on K i j , defined by RT p−1 (K i j ) = (Q p−1 (K i j )) n + xQ p−1 (K i j ).
With these notions at hand, we can now properly specify the numerical method. For any given right-hand side F ∈ U h,τ and initial condition x 0 ∈ H, find U ∈ U h,τ , such that for all Φ ∈ U h,τ and m ∈ {1, 2, . . . , M} it holds Here, we denote by see [4] for further details. We can cite the convergence results from [4] which were for Dirichlet boundary conditions. The proof needs only marginal modifications to hold for the periodic case too. We introduce two measures for the error. The first one measures the error in an L ∞ -L 2 sense with Note that E Q (a) = a for a ∈ U h,τ .
Theorem 10. We assume for the solution U of Example (3) the regularity as well as . Then we have for the error of the numerical solution U h,τ of (4) with a generic constant C Note that the spatial regularity is only needed in each cell K i j of the spatial mesh as local interpolation error estimates are used.
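In the numerical study that follows, observed convergence orders are estimated by comparing errors on successively refined meshes. A minimal, generic helper for this is sketched below; it assumes that the mesh width is halved from one entry to the next, so that the experimental order of convergence is the base-2 logarithm of the error ratio. This is a stand-alone utility and not part of the SOFE code used for the computations.

```python
import math

def eoc(errors):
    """Experimental orders of convergence for errors on meshes refined by a factor of 2."""
    return [math.log2(coarse / fine) for coarse, fine in zip(errors, errors[1:])]

# Artificial example: errors that halve with each refinement indicate first order.
print(eoc([0.08, 0.04, 0.02, 0.01]))  # -> [1.0, 1.0, 1.0]
```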
Numerical study
All computations were done in SOFE (https://github.com/SOFE-Developers/SOFE), a finite element suite for Matlab/Octave. For our numerical study let us assume an equidistant rectangular background mesh covering Ω with nodes (x_i = i/N, y_j = j/N), i, j ∈ {0, . . . , N} for an even number N ∈ N. This background mesh will be used in defining the oscillating coefficients.
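A minimal sketch of how chessboard coefficients can be evaluated on such a background mesh is given below. The concrete choice, s_1 = 1 on the white cells (parabolic behaviour) and s_0 = 1 on the black cells (hyperbolic behaviour), is an assumption made purely for illustration; the analysis above only requires the general conditions on s_0 and s_1 stated in Section 2.

```python
import numpy as np

def chessboard_coefficients(x, y, N):
    """Evaluate oscillating coefficients s0(N.), s1(N.) for an N x N chessboard on (0, 1)^2.

    Illustrative assumption: s1 = 1 (and s0 = 0) on 'white' cells, i.e. parabolic behaviour,
    while s0 = 1 (and s1 = 0) on 'black' cells, i.e. hyperbolic behaviour.
    """
    ix = np.floor(np.asarray(x) * N).astype(int)
    iy = np.floor(np.asarray(y) * N).astype(int)
    white = (ix + iy) % 2 == 0
    s0 = np.where(white, 0.0, 1.0)
    s1 = np.where(white, 1.0, 0.0)
    return s0, s1

# Coefficients at a few sample points for N = 4:
xs = np.array([0.1, 0.3, 0.6, 0.9])
ys = np.array([0.1, 0.3, 0.6, 0.9])
print(chessboard_coefficients(xs, ys, 4))
```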
The corresponding homogenised problem is then the limit problem from Section 2. The theoretical results of Sections 2 and 3 provide the expected convergence behaviour for smooth solutions U_hom and U_N. In general we cannot expect the solutions to be very smooth. Thus, for our experiments we only chose a polynomial order p = 2 in space and q = 1 in time. Setting furthermore h = τ = 1/(2N), we combine the above expected estimates, where the second inequality comes from Sobolev's embedding theorem (see e.g. [6, Lemma 5.2]). Let us finalise the definition of our problem by setting the right-hand side f(t, x, y) = 1 if t ∈ (0, 1) and max{|2x − 1|, |2y − 1|} ≤ 1/4, and f(t, x, y) = 0 otherwise.
Thus f is one in the time-space cube (0, 1)×[1/4, 3/4]² and otherwise zero. Figure 1 shows (numerical approximations of) the solutions U_4, U_8, U_16 and U_hom at different times. In the first row the rough coefficients can be seen quite nicely, while the solution becomes smooth very quickly (lower rows). Furthermore, already for a very coarse background mesh of N = 16 the solutions U_N and U_hom are very similar. This visualises the homogenisation process.
In Table 1 we see the results for a simulation using polynomial degrees p = q + 1 = 2. As no exact solutions to (5) and (6) are known, we use reference solutions Ũ_N and Ũ_hom computed with polynomial degree p = 3 on a mesh with 256 cells in each space dimension and 384 cells in the time dimension. The reference solution mesh is therefore twice as fine as the finest one used in the simulation. Note that we also provide the experimental orders of convergence (eoc), calculated for errors E_n and E_2n by eoc = log_2(E_n / E_2n). We observe a first-order convergence of the numerical solution U^{h,τ}_N towards U_N and towards U_hom. While the second result confirms the reasoning at the beginning of this section, the first points to a non-smoothness of the solution, as otherwise we would obtain second-order convergence, see Theorem 10. Considering the oscillating coefficients and the discontinuous f, this reduction is to be expected. | 2018-10-02T13:38:15.000Z | 2018-10-02T00:00:00.000 | {
"year": 2020,
"sha1": "3ee0fc8b22d2f0c6d923db688dab95b708105e2a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1810.01234",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3ee0fc8b22d2f0c6d923db688dab95b708105e2a",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
139732810 | pes2o/s2orc | v3-fos-license | Multiscale simulation of surface characteristics of field emitter tip
The paper presents multi-scale modelling of the field emitter tip surface characteristics. At the micro-scale an approximation of the emitter shape is obtained; at the meso-scale crystallographic faces are constructed for application of a semi-empirical regression model of the work function distribution; at the nano-scale the surface atom coordinates are calculated, which serve as base data for all the above-mentioned levels of detail. The electric field distribution is calculated at all scale levels.
Introduction
An important problem in using computational models to simulate the properties and characteristics of emission systems is accounting for the multi-scale nature of the physical phenomena that occur during the process of field electron emission. One of the principal problems of multi-scale modeling is the need to conjoin a number of different models describing the behavior and properties of complex systems with different levels of detail.
A combination of classical and quantum approaches in emission system modeling at different scales is an extraordinarily complex and currently topical problem. Solving the problem of constructing multi-scale models that unite conceptually different algorithms for describing nanostructured system behavior on different levels of the hierarchy (i.e., on the nano-, meso-, micro- and macro-scale) allows one to conjoin the formulations of 1D, 2D and 3D problems. The study gives examples of such models and systems.
Simulation of structural, crystallographic, work function and electric field characteristics of field emitter tip surface
Multi-scale modeling is not only modeling on different scales (from the subatomic to the macro-level), but also on different levels, as the results for one level of scaling can be used as the input data for the next one. The object of study is a metal single-crystal emitter tip made out of metal wires with diameters of about 0.1−0.15 mm by means of anodic electric etching. The field emission phenomena occur at electric field strengths of the order of 10⁹−10¹¹ V/m. Usually such fields are observed on surfaces of emitters with tip curvature radii of about 10−1000 nm. For example, Figure 1 shows possible shapes approximating the emitter as equipotential surfaces of the electric field [1] generated by a charged cone with a sphere at the top (dotted line). As can be seen, the emitter apex shape is close to a hemisphere.
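As a rough orientation only, the apex field of such a tip is often estimated with the empirical relation E ≈ V/(k r), where r is the apex radius and k is a dimensionless geometry factor of order 5. Neither this relation nor the chosen voltage below comes from the model discussed here; they are assumptions used solely to illustrate why radii of 10−1000 nm and fields of 10⁹−10¹¹ V/m belong together at laboratory voltages.

```python
# Rough, illustrative estimate of the apex field of a field emitter tip.
# Assumptions: E ~ V / (k * r) with geometry factor k ~ 5; V = 2 kV extraction voltage.
k = 5.0
voltage = 2.0e3  # V

for radius_nm in (10, 100, 1000):
    r = radius_nm * 1e-9  # m
    field = voltage / (k * r)  # V/m
    print(f"r = {radius_nm:4d} nm  ->  E ~ {field:.1e} V/m")
```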
One of the problems that require a multi-scale approach is the problem of computing the electric field above the emission surface. This is caused by the direct influence on the result of both nano- and micro-parameters (atom packing density distribution, work function values, local surface curvature radius, etc. [2−7]) and macro-parameters (geometry of the electrode system, which defines the distribution of the macroscopic electric field). The number of detail levels of different scales during modeling depends on the complexity of the shape and structure of the emitter. Those levels can also be categorized by the three principal functions that are simultaneously performed by the electric field in field emission systems: generation, acceleration and transportation. Field electron current is generated in the process of emission on the nanoscale: under the influence of the field the surface threshold turns into a potential barrier which can be tunneled through with a non-zero probability, as described by quantum theory. The electric field strength, which causes the width of the potential barrier to be on the order of nanometers, is formed due to the presence of nanostructure details of the surface and due to amplification of the larger-scale field. Hence, this level of modeling is termed nanoscale. The most important problem of modeling on the nanoscale is defining the exact maximum distance to the surface within which the nanoscale defects have any significant impact, and computing the field enhancement coefficient.
On the microscale the field distribution is defined by the microscale geometric parameters of field emission systems. The non-uniform distribution of surface work function values causes a patch field effect due to the contact potential difference established between areas having different work function values.
On the macroscale the field emission electrode system usually corresponds to a calculation area of complex shape which includes the boundaries of the emitter with large surface curvature and small size, a fact that leads to a rather large range of characteristic sizes in the same geometric configuration. Moreover, the exponential dependence of the field emission current density on the field [8] requires increased precision in taking into account the emitter's boundary conditions; the Nordheim elliptic functions entering the current density expression can be approximated as in [9]. The microscale electric field distribution near the emitter surface can be calculated analytically by the sphere-on-cone model [1]. Within the framework of this model one equipotential surface (Figure 1) of the electric field created by the charged orthogonal cone with the sphere at the top is taken as the emitter and the other one as the anode; the electric field strength is then given by the corresponding analytical expression. In this approach one has to compute local values of the work function for various crystallographic planes of the emitter tip, using either ab initio calculation methods or empirical models [10,11]. Figure 1 shows the hierarchy chart of models and their relations, useful for understanding and development of the theory and relating it to experimental data.
Conclusion
This study presents a mathematical atomistic model of the characteristics of the emitter surface, which is the object of study of atom probe tomography, field emission electron/ion microscopy and field desorption microscopy, where the crystallographic, work function and electric field characteristics of the field emitter tip surface are of great importance [12].
In order to achieve a more in-depth understanding of the existing monocrystalline emitters, and to develop them further, further study is necessary, considering both macroscopic processes (electric field distribution in the interelectrode gap; formation of the space charge) and microscopic parameters (formation and deformation of crystallographic faces, distribution of atomic packing density and of work function). Such a combined approach to processes with temporal and spatial scales that are so different is only possible using multiscale computer modeling.
"year": 2018,
"sha1": "72e914eb5025a00af0631eb4c990932a3cbb5da1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1124/2/022023",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "532fe8469bcdba47c2d3ef4a39291e6606abdf90",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
238745218 | pes2o/s2orc | v3-fos-license | LitCovid-AGAC: cellular and molecular level annotation data set based on COVID-19
Currently, the coronavirus disease 2019 (COVID-19) literature has been increasing dramatically, and the increased amount of text makes it possible to perform large-scale text mining and knowledge discovery. Therefore, curation of these texts becomes a crucial issue for the Bio-medical Natural Language Processing (BioNLP) community, so as to retrieve the important information about the mechanism of COVID-19. PubAnnotation is an aligned annotation system which provides an efficient platform for biological curators to upload their annotations or merge other external annotations. Inspired by the integration of multiple useful COVID-19 annotations, we merged three annotation resources into the LitCovid data set and constructed a cross-annotated corpus, LitCovid-AGAC. This corpus consists of 12 labels, including Mutation, Species, Gene and Disease from PubTator, GO and CHEBI from OGER, and Var, MPA, CPA, NegReg, PosReg and Reg from AGAC, upon 50,018 COVID-19 abstracts in LitCovid. It contains sufficiently abundant information to make it possible to unveil the hidden knowledge in the pathological mechanism of COVID-19.
Introduction
Coronavirus disease 2019 (COVID-19) is an abbreviation for coronavirus disease, which caused a pandemic in 2019. People infected with COVID-19 suffer from severe high fever, dyspnea and lung disease, with a 0.3%-1.5% chance of death. Due to the severe condition COVID-19 causes, research on the disease has been increasing dramatically. As of January 2021, over 90,000 related articles had been published, making the literature a huge repository for knowledge discovery. Such a large growth rate makes it difficult for researchers to digest the massive amount of information in time.
Understanding the mechanism of COVID-19 is important for containing the virus. Like the severe acute respiratory syndrome virus, it enters cells by binding the angiotensin-converting enzyme 2 (ACE2) protein on the surface of human cells with its S protein. The S protein is located in the outermost layer of the virus and exists in the form of a trimer. Each monomer contains a receptor-binding domain composed of the amino acids through which the S protein binds to ACE2 and infects human cells.
Compared with the whole vision of the COVID-19 mechanism, the above commonsense knowledge is far from sufficient. For unveiling the mechanism hidden in the huge amount of text data, the application of text mining has drawn a good amount of attention recently. So far, nearly 200 studies on COVID-19 literature mining have been published in PubMed. For propelling COVID-19-oriented text mining research, NCBI developed a huge publicly available COVID-19 corpus, LitCovid [1,2], making it a gold database for knowledge mining.
Fortunately, the Bio-medical Natural Language Processing (BioNLP) community has long focused on fundamental tool development, including bio-medical entity recognition, entity concept normalization, relation extraction, and so forth. For PubMed abstracts and PMC full texts, PubTator [3] efficiently tags and normalizes six types of biological entities, i.e., gene, disease, chemical, mutation, species and cell line.
For example, PubTator is a search database that highlights some keywords in the search results and is based on the results of PubMed. PubTator supports six tag types, which are gene, disease, chemical, mutation, species and cell line. The above six kinds of tags are already very useful for unveiling the hidden mechanism of COVID-19. LitCovid is a reliable corpus which is a collection of texts related to COVID-19. Therefore, when PubTator annotates the LitCovid corpus, the six biological entities in the text will be assigned a corresponding tag. Moreover, OntoGene's Bio-medical Entity Recognizer (OGER) [4,5] is an important tagger, which annotates the following seven bio-medical entities, Disease, Chemical, Sequence, Gene/Protein, Biological_process, Organism and Cell; these are annotated by using Bio Term Hub (BTH) terminologies. BTH supports the rapid construction of term resources from well-known life science databases in a simple standardized format for text mining, and it can label specific concept types such as protein, gene, disease and cell line. However, we use OGER only to add gene ontology (GO) and chemical annotations to our data set.
Considering the need for logical mining, AGAC is good at discovering Regulation relations and therefore makes it easy to reveal Pathway-like logic. In this research, we release the LitCovid-AGAC database, which provides multiple annotations by PubTator, OGER and AGAC.
AGAC as a corpus for key annotations labeling
The purpose of designing the AGAC corpus [6] is to better capture the logical lines in a sentence, and six tags were designed for this, namely Var, MPA, CPA, PosReg, NegReg and Reg. It took 20 months for 4 annotators to manually annotate and check the corpus. AGAC is illuminating when applied to drug-related knowledge discovery. For example, AGAC was successfully applied in LOF/GOF classification by using a tensor decomposition algorithm [7]. As well, it has been adopted as the training data for a competition in the BioNLP open shared task 2019 [8], and applied to extract relevant literature on Alzheimer's disease in support of gene-disease association prediction [9].
AGAC tagger
An AGAC tagger based on a deep neural network was introduced as a baseline method in the AGAC track of BioNLP OST 2019. The baseline fully used the sophisticated BERT structure and reached sufficiently high quality for sequence labeling [7], with an F1 value of about 0.5. Such annotation results indicate that applying the AGAC corpus to annotate the text helps to find convincing logical relationships between biological entities.
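A minimal sketch of the kind of token-level sequence labeling such a tagger performs is given below, using the Hugging Face transformers API. The model name, the truncated label set and the absence of fine-tuning are all assumptions made for illustration; this is not the actual AGAC tagger described in [7], and an untrained classification head produces meaningless labels.

```python
# Illustrative token-classification skeleton (not the actual AGAC tagger).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-Var", "I-Var", "B-MPA", "I-MPA", "B-PosReg", "I-PosReg"]  # truncated label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=len(labels))

sentence = "TF up-regulates the expression of ACE2."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()   # random without fine-tuning

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, [labels[i] for i in pred_ids])))
```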
PubAnnotation platform for multiple annotations alignment
PubAnnotation [10] is a platform for biologist curators to assemble annotations or annotate their own labels upon texts of interest. To date, there are 45 released projects in PubAnnotation, with AGAC included. Co-tagging can be carried out automatically via PubAnnotation, as various bio-concept taggers, e.g., OGER and PubTator, are already integrated in the system. Co-tagging helps to integrate different annotations and to serve sophisticated knowledge representation. As can be seen from the following example, the three resources mentioned above provide different kinds of annotation on the same sentence, which form a complete logical line shown in the figure.
As shown in Fig. 1, TF is the abbreviation of total flavonoid, which has been labeled in the previous article. By combining AGAC labels with other important annotations, we can clearly see the logical lines shown at the bottom right of the figure. The data we uploaded can be downloaded in PubAnnotation in JSON format. The annotation set we released combines the annotation of PubTator, OGER and AGAC, which can be used to mine the logical lines of biological process changes in COVID-19.
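To give an impression of what such a downloaded record looks like, the snippet below builds a small document in a PubAnnotation-style JSON layout (text plus a list of denotations with character spans) and prints the annotated mentions. The field names follow the usual PubAnnotation conventions, but the sentence and the spans are invented for illustration and are not an entry of the released data set.

```python
import json

# A small, invented record in a PubAnnotation-style layout (text + denotations).
record = json.loads("""
{
  "text": "TF attenuates lymphocyte apoptosis in COVID-19 patients.",
  "denotations": [
    {"id": "T1", "span": {"begin": 0,  "end": 2},  "obj": "Gene"},
    {"id": "T2", "span": {"begin": 3,  "end": 13}, "obj": "NegReg"},
    {"id": "T3", "span": {"begin": 14, "end": 34}, "obj": "CPA"},
    {"id": "T4", "span": {"begin": 38, "end": 46}, "obj": "Disease"}
  ]
}
""")

for d in record["denotations"]:
    begin, end = d["span"]["begin"], d["span"]["end"]
    print(d["obj"], "->", record["text"][begin:end])
```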
It can be seen that different corpora have different annotation focuses. Other corpora mainly label biological concepts and match them to standard data sets. However, AGAC not only focuses on biological concepts, but also focuses on logical lines in sentences. The same biological concept may be given different labels in different contexts, or even will not be labeled. In this way, we can find that some chemicals up regulate or down regulate gene expression in COVID-19.
Automatic annotation pipeline
By integrating the methods mentioned above, we performed an automatic annotation pipeline to obtain the LitCovid-AGAC dataset; a minimal code sketch of the merging step is given after the step list below.
Step 1. Literature collection: Obtain the COVID-19 abstracts from LitCovid as the literature set.
Step 2. AGAC annotation: Obtain the AGAC annotations by applying AGAC tagger on literature set.
Step 3. Regulation annotation: Create a regulation dictionary on PubDictionary [10] and automatically annotate the regulation words.
Step 4. PubTator and OGER annotation: Import the annotations from PubTator and OGER by using PubAnnotation.
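The skeleton below shows one way the merging of Steps 1-4 could be organized in code. All helper names (fetch_abstract, run_agac_tagger and so on) are hypothetical placeholders standing in for the LitCovid download, the AGAC tagger, the PubDictionary-based regulation matching and the PubTator/OGER imports; they are not real APIs of those services.

```python
# Hypothetical skeleton of the annotation-merging pipeline (Steps 1-4 above).
def merge_annotations(pmid, fetch_abstract, run_agac_tagger, match_regulation, fetch_external):
    text = fetch_abstract(pmid)                # Step 1: LitCovid abstract text
    denotations = []
    denotations += run_agac_tagger(text)       # Step 2: AGAC labels (Var, MPA, CPA, ...)
    denotations += match_regulation(text)      # Step 3: dictionary-matched regulation words
    denotations += fetch_external(pmid)        # Step 4: PubTator and OGER annotations
    # One merged, PubAnnotation-style document per abstract.
    return {"sourceid": str(pmid), "text": text, "denotations": denotations}

# Example wiring with trivial stand-ins:
doc = merge_annotations(
    12345,
    fetch_abstract=lambda pmid: "dummy abstract text",
    run_agac_tagger=lambda text: [],
    match_regulation=lambda text: [],
    fetch_external=lambda pmid: [],
)
print(doc["sourceid"], len(doc["denotations"]))
```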
Statistics of LitCovid-AGAC dataset
LitCovid-AGAC contains 50,018 abstracts from PubMed, and the annotations are from three sources, AGAC, PubTator and OGER. LitCovid-AGAC focuses on the regulation of biological processes described in the COVID-19 literature. Therefore, we applied all the AGAC labels, which comprise 5 biological concept labels and 3 regulation labels. To enrich the related annotations, Mutation, Species, Gene and Disease from PubTator and GO and Chemical Entities of Biological Interest (CHEBI) [11] from OGER are included in the LitCovid-AGAC dataset. CHEBI includes natural products and synthetic products used to intervene in biological processes, but generally does not include macromolecules encoded by genes. According to the statistics, the most frequent label is "Disease", which appears 285,135 times, and the least frequent label is "Mutation", which appears only 435 times.
It can be clearly seen that the annotation results of OGER and PubTator are more abundant; by contrast, the number of AGAC annotations is not of the same order of magnitude. This is because, under the AGAC annotation rules, sentences without a description of regulation are not annotated, so there are fewer AGAC annotations than annotations from the other sources. More detailed statistics are shown in Table 1.
Knowledge discovery pattern and research paradigm in LitCovid-AGAC dataset
Logical line examples from single sentences at the cellular and molecular level
Enriched by PubTator and OGER, the data set contains more complete annotations. For instance, in Fig. 2A, the "Disease" annotation provided by PubTator acts as the cause among the other annotations from AGAC in this sentence, where the "Cell Physiological Activity" lymphocyte was first regulated by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, and the other "Cell Physiological Activity" entities, T and B cells and monocytes, were down-regulated subsequently. From the annotations in this sentence, the effect of the SARS-CoV-2 infection at the cell level is clearly shown with its sequential order, which can be transformed into a path in a knowledge graph.
Besides, the annotations also unveil molecule-level biological processes. In Fig. 3, R518W/Q mutations in the gene NPC1 inhibited cholesterol transport and thus resulted in the accumulation of cholesterol and lipids, which are all "Molecular Physiological Activity." In this sentence, AGAC annotations provided the variation, regulation and molecular-level processes, while PubTator and OGER provided the gene, variation, chemical and GO [12] annotations with their unique IDs, which supplemented the information recognition and also provided normalization for some of the AGAC annotations. GO has three categories (biological process, molecular function and cellular component), whose terms are used to represent these entities and their relationships.
With the annotations in the LitCovid-AGAC dataset, the genes, diseases, variations and the cellular- and molecular-level biological processes are connected by the regulation labels within the same sentence. Combined with the semantic information, the sequential order of the regulation events helps to convert them into a directional path that regards the regulation labels as edges and the other labels as nodes. For example, the path in Fig. 2B is a neutral regulation edge from SARS-CoV-2 infection to lymphocytes, followed by three negative regulations to T and B cells and monocytes. The same knowledge pattern is shown in Fig. 3B. The numerous knowledge paths in this dataset can be used to construct a network containing a wealth of biological information from the COVID-19 literature, which should contribute to the analysis of the pathological mechanism of COVID-19 and the evolution of this virus. Combining the contents of the four panels, we drew Fig. 4E, which shows the logical lines contained in the four examples above. Fig. 4A only shows that the D614G mutation leads to higher infectivity of the SARS-CoV-2 virus, but the addition of the D614G mutation in Fig. 4C and 4D leads to enhanced ACE2 binding and fusion, which makes the SARS-CoV-2 virus produce greater transmission and viral loads. Therefore, the logical relationship from the S gene to ACE2 to SARS-CoV-2 is formed. As the virus infectivity increases, a series of immune reactions appear in patients infected with SARS-CoV-2. This information is supplemented in Fig. 4B.
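The conversion of a sentence's annotations into such a directional path, with regulation labels as edges and the other labels as nodes, can be sketched as follows. The triple format and the use of networkx are illustrative assumptions; the dataset itself only provides the annotations, and the triples below merely restate the Fig. 2 example.

```python
import networkx as nx

# Illustrative triples for the Fig. 2 example: (source node, AGAC regulation label, target node).
triples = [
    ("SARS-CoV-2 infection", "Reg",    "lymphocytes"),
    ("lymphocytes",          "NegReg", "T cells"),
    ("lymphocytes",          "NegReg", "B cells"),
    ("lymphocytes",          "NegReg", "monocytes"),
]

graph = nx.DiGraph()
for source, regulation, target in triples:
    graph.add_edge(source, target, regulation=regulation)  # each edge carries its regulation label

# Paths such as SARS-CoV-2 infection -> lymphocytes -> monocytes can then be enumerated:
for path in nx.all_simple_paths(graph, "SARS-CoV-2 infection", "monocytes"):
    print(" -> ".join(path))
```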
Combined logical lines from multiple sentences
This example reflects not only information at the molecular level but also information at the cellular level, which demonstrates the feasibility of finding and forming a logical line across different texts. An idea can therefore be put forward: when the number of texts is large enough, the key knowledge can be extracted from the massive amount of information and formed into a large logical network. As a result, more hidden information can be discovered and new knowledge can be inferred.
Discussion
As indicated in this research, although a single annotation source is too limited for comprehensive biomedical knowledge discovery over the huge literature repository for COVID-19, combining relevant annotations from different resources makes it possible to build a rich annotation dataset that leads to knowledge with complete semantics.
Furthermore, the knowledge pattern suggested by LitCovid-AGAC is capable of offering a large amount of structured logical knowledge and of unveiling the pathological mechanism of COVID-19 at the cellular or molecular level.
In addition, it also makes sense to further curate the results obtained in LitCovid-AGAC, e.g., concept normalization, co-reference, and relation extraction. Meanwhile, it is instructive to visualize the knowledge entries in a syntactic way. The VSM box [13] in Fig. 5 presents a typical knowledge template which carries a type of semantic structure of the information in LitCovid-AGAC. The LOF/GOF/REG/COM labels can be inferred from the regulation annotations [7], and the pattern in this figure shows the effect of a protein on a disease. | 2021-10-14T06:24:01.018Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "5da49c0660605d8450c8a614b223c790d137e2ff",
"oa_license": "CCBY",
"oa_url": "https://genominfo.org/upload/pdf/gi-21013.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "85a5f28faa9b28f4f397489072bbcb29add10bcb",
"s2fieldsofstudy": [
"Biology",
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247981638 | pes2o/s2orc | v3-fos-license | Antifungal application of biosynthesized selenium nanoparticles with pomegranate peels and nanochitosan as edible coatings for citrus green mold protection
Background Citrus production and trade are seriously affected by fungal decay worldwide; green mold infection by Penicillium digitatum may be the most disastrous. The substitution of chemical and synthetic fungicides with effectual natural alternatives is a global demand; plant extract from pomegranate peels (PPE), selenium nanoparticles biosynthesized with PPE (PPE/SeNPs) and chitosan nanoparticles (NCT) were suggested as efficacious fungicidal agents/nanocomposites to control P. digitatum strains. Method PPE from Punica granatum was extracted and employed directly for synthesizing SeNPs, whereas NCT was produced using the ionic gelation method from chitosan extracted from white prawn (Fenneropenaeus indicus) shells. The physiochemical, biochemical and structural characterization of the generated molecules was conducted using infrared spectroscopy, particle size (Ps) and charge assessment, and electron microscopy imaging. Antifungal potentialities were investigated in vitro and in fruits infected with P. digitatum by applying NCT nanocomposite-based edible coatings. Results The synthesis of PPE-synthesized SeNPs and NCT was successfully achieved; infrared spectroscopy showed that the molecular bonding in the synthesized agents/composites involved both biochemical and physical interactions. The nanoparticles had mean diameters of 82.72, 9.41 and 85.17 nm for NCT, PPE/SeNPs and the NCT/PPE/SeNPs nanocomposite, respectively. The nanoparticles had homogeneous spherical shapes and good distribution attributes. All agents/nanocomposites exhibited potent fungicidal potentialities toward the P. digitatum isolates; the NCT/PPE/SeNPs nanocomposite was the most forceful and significantly exceeded the fungicidal action of the standard fungicide. Direct treatment of fungal mycelia with the NCT/PPE/SeNPs nanocomposite led to remarkable lysis and deformation of P. digitatum hyphae within 12 h of treatment. Coating infected oranges with NCT-based edible coatings reduced the green mold infection signs by 91.7, 95.4 and 100% for the NCT, NCT/PPE and NCT/PPE/SeNPs based coating solutions, respectively. Conclusions NCT, PPE-synthesized SeNPs, and their innovative nanocomposite NCT/PPE/SeNPs are convincingly recommended for formulating effectual antifungal edible coatings to eliminate postharvest fungal pathogens, both by protecting fruits from invasion and by destroying existing infections. Graphical Abstract
Introduction
Citrus includes numerous varieties of important fruits that are susceptible to microbial damage by phytopathogens throughout planting, harvesting and commercialization, owing to their prominent nutritional composition and elevated water content [1]. Citrus is typically acidic in nature (pH ~2.2-4.0), which makes fungi the main agents responsible for most infections and deteriorations of these fruits [2]. Fungal contamination/invasion can occur at any stage of the fruits' life. Penicillium digitatum is the necrotrophic fungal pathogen responsible for green mold, which causes the foremost postharvest rot in citrus fruits, with massive economic losses reaching ≥ 90% of the total postharvest damage in infected fruits worldwide [2,3]. For example, although Egypt is one of the leading citrus producers globally, its warm climate can facilitate the growth of fungal pathogens, which seriously decreases the exported amounts of these crops [4].
Due to restrictive regulations, potential carcinogenicity and acute toxicity, prolonged degradation periods, environmental consequences, emerging fungal resistance, and intensified public concern about chemical residues in crops, the use of synthetic/chemical fungicides has become increasingly worrisome [1,5]. Consequently, the use of natural products and biological control approaches (including antagonistic microorganisms, bioactive natural derivatives and nano-biomaterials) has attained great attention as a safe, effectual and environment-friendly alternative for managing postharvest fungal infections with diminished risks for humans and the environment [5][6][7].
Nanotechnology and nanoparticles (NPs) are currently employed in most human-related fields, including biomedical, chemical, nutritional, biological, optical, mechanical, environmental and agricultural applications [8,9]. NPs are widely involved in the production, processing and preservation of human food, from agricultural production and fertilization to the presentation of foodstuffs to consumers [10,11]. Owing to their unique traits, e.g. enlarged surface area, very reduced size, high penetrability, and favorable distributions and shapes, NPs exhibit exclusively novel and enhanced characteristics over bulk particles. The customarily used protocols for NPs synthesis, i.e. chemical and physical procedures, can entail numerous drawbacks. Physical methods require excessive energy supply and elevated costs and produce limited NPs yield, whereas chemical procedures frequently have serious ecological and toxicological consequences [12]. The use of biomaterials such as microbes, algae, biopolymers, plant materials, or their derivatives in the bio- (green) synthesis of NPs can efficaciously solve most of the foregoing drawbacks by providing facile, environment-friendly, cost-effective, controllable and high-yield approaches [13,14]. Plant (phyto) extracts and biochemicals have been effectively utilized for the green synthesis of metal NPs; plentiful phytochemicals, e.g. phenolics, carotenoids, alkaloids, flavonoids, terpenoids, lignans, and further physiologically active molecules, have been proved to generate and stabilize NPs with astonishing characteristics [11,12].
The pomegranate (Punica granatum L.) is a fruit that is recurrently mentioned in the Holy Qur'an, the Bible and the Torah, and is cultivated worldwide for its taste and health benefits [15]. Because of their elevated content of antioxidants, hydrolysable tannins, dyes, polyphenols and alkaloids, various parts of the pomegranate plant have historically been utilized for treating numerous ailments [16]. The extract of pomegranate peels (PPE) contains most of the precious phytochemical components of pomegranate. PPE has been documented as GRAS (Generally Recognized As Safe) and non-toxic even at high doses [2,17]. Due to its outstanding potentialities (e.g. antioxidant, antibacterial, and antifungal properties), PPE has recurrently been employed in food preservation and in edible coating (EC) constitution to protect numerous crops and foodstuffs from chemical and microbiological spoilage [18][19][20]. PPE applications for the biosynthesis of various metal NPs (e.g. silver, gold, zinc oxide and selenium) have been successfully achieved, which validated the high reducing, antioxidant and stabilizing capabilities of the extract [21][22][23][24]. These PPE-synthesized NPs have been tested as potent antimicrobial, preservative and anticancerous agents. The trace element selenium (Se) is a crucial component for both human and animal life and functionality; ≥ 25 human selenoproteins/enzymes contain selenocysteine, which is essential for maintaining human health [11]. SeNPs have reduced cytotoxicity toward higher organisms (e.g. humans, animals, fish, and crop plants) within permitted limits, but these NPs are highly bioactive in suppressing bacteria, fungi, and even cancerous cells, which provides more applicability for SeNPs in the biomedical, pharmaceutical and nutritional disciplines [24,25]. Green-synthesized SeNPs, especially those made with plant extracts, have actually been incorporated in preservative ECs for meat products and agricultural crops, and in antioxidant and anticancer pharmaceutical formulations [26][27][28].
Chitosan is a positively charged, linear and semicrystalline biopolymer derived from chitin [29,30]. This astounding biopolymer has remarkable bioactive characteristics (including biodegradability, eco-friendliness, non-toxicity, and bactericidal, wound-healing, biosorption and antioxidant potentialities); chitosan can easily be transformed into emulsions, membranes, hydrogels, bandages, edible films and ECs [31][32][33]. The bioactivities, functionalities and formulability of this biopolymer are dramatically augmented by transformation into chitosan nanoparticles (NCT), which are applicable for use individually or as effectual carriers for further bioactive molecules in the pharmaceutical, food processing, biomedical, environmental, agricultural and nutritional sectors [30,34,35]. NCT has recurrently been the nanopolymer of choice for fabricating bioactive ECs that protect foods and crops from microbial spoilage, quality and moisture loss, oxidation stress and pathogen invasion [35][36][37].
Accordingly, the current investigation aimed to produce NCT, to apply PPE for the biosynthesis of SeNPs, to prepare nanocomposites from these agents, and to evaluate their effectuality as P. digitatum fungicidal agents in vitro and as ECs for controlling green mold in citrus fruits.
Chitosan nanoparticles preparation and loading
The preparation and loading of NCT with PPE/SeNPs were adopted from previously described studies [18,37]. The CT was produced from Fenneropenaeus indicus (white prawn) shells as previously demonstrated [27]. The extraction processes included shrimp shells' drying, pulverization, deproteinization (in 2.0 N NaOH at RT for 250 min), demineralization (in 2.0 N HCl at RT for 250 min) and deacetylation (in 55% NaOH at 123 °C for 95 min). Sodium tripolyphosphate "TPP; Sigma-Aldrich, MO" was utilized for NCT cross-linking; the dissolved chitosan (0.1%, w/v) in diluted acetic acid (1.5%, v/v) was the working solution, and the TPP solution (0.5%, w/v in DW) was dropped very slowly (rate 0.3 mL/min) into the working solution while it was stirred vigorously, until reaching a 3.5:1 ratio of chitosan:TPP solutions, respectively. The stirring (670 ×g) was sustained for an additional 75 min after TPP dropping, and the formed NCT was gathered through centrifugation (11.250 ×g for 28 min). For nanocomposite formation from PPE/SeNPs and NCT (referred to below as NCT/PPE/SeNPs), the PPE/SeNPs was added to the CT solution (at 0.1% w/v) and vortexed vigorously for 120 min before TPP dropping. Accordingly, the NCT was synthesized and held PPE/SeNPs in conjugation with the polymer nanoparticles (i.e. the NCT/PPE/SeNPs nanocomposite), which was then centrifuged, washed with DW and lyophilized.
Physiochemical and biochemical characterization
FTIR spectroscopic analysis
The produced compounds (PPE, NCT, PPE/SeNPs and their composites) were investigated spectrophotometrically for their infrared spectra after mixing with KBr [i.e. intense powder mixing of samples with 1% (w/w) potassium bromide before analysis], using FTIR "Fourier transform infrared spectroscopy, JASCO FT-IR-360, Tokyo, Japan" in transmission mode over the wavenumber range 450-4000 cm−1.
Green mold isolates
The Penicillium digitatum isolates (Pd O, isolated from orange; Pd T, isolated from tangerine; and Pd S, the standard strain ATCC-10030) were attained from green mold-infested fruits and identified at the EPCRS-KSU "Egyptian Phytomicrobial Collection and Preservation for Scientific Researches and Sustainability Development" Excellence Center, Kafrelsheikh University, Egypt. The identification of the fungal isolates was based on their morphological patterns and was confirmed by MALDI-ToF MS analysis "Matrix-assisted laser desorption/ionization with time-of-flight mass spectrometry". The fungal isolates were regularly propagated and screened using PDA and PDB media "Potato dextrose agar and broth, respectively; Oxoid, UK", aerobically at 27 °C. The spore suspension (SS) was attained via gentle scraping of fungal cultures grown on PDA (7 days old) with a sterile loop, washing the free spores with DW, vortexing this suspension well, and adjusting the spore count to ~2 × 10⁶ spores/mL using DW, after counting the spores with an automatic cell counter "Countess-II FL, Thermo Fisher Scientific, MA".
Well diffusion (WD) method
The agar WD method is broadly employed for assessing antifungal potentiality, especially of natural derivatives [38]. The PDA plates were first inoculated and spread with 100 µL of fungal SS; then wells of 6 mm diameter were made using a cork-borer, and 50 µL (of a 1% concentration of each compound in DW, or imazilil in DMSO) were pipetted into the wells. The plates were incubated in darkness for 72 h at 27 °C, and the inhibition zones (ZOI) that appeared around the wells were measured in millimetres [39].
Minimum fungicidal concentration (MFC)
The MFC of each screened compound (NCT, PPE, PPE/SeNPs and NCT/PPE/SeNPs) or imazilil toward the P. digitatum isolates was appraised using the broth dilution method [6,40]. Graded concentrations (10-100 mg/mL) of the challenging compounds were mixed with PDB and inoculated with the isolates' SS. The media were incubated aerobically for 8 days; then 100 µL from each trial were spread onto fresh PDA plates and incubated. The absence of any grown cells on the PDA plates after 7 days of incubation designated the MFC of each compound toward the fungal isolates. Considering imazilil as the standard fungicide, the MFC ranges that distinguish susceptible and resistant P. digitatum isolates were set as follows: ≤ 25 mg/mL (highly susceptible), 25-50 mg/mL (moderately susceptible), 50-75 mg/mL (partially resistant) and > 75 mg/mL (resistant).
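These cut-offs amount to a simple threshold rule. The sketch below is only an illustration of that classification, with the boundary values taken from the text; assigning values that fall exactly on a boundary to the lower category is our own assumption, since the stated ranges overlap at 25, 50 and 75 mg/mL.

```python
def susceptibility_category(mfc_mg_per_ml: float) -> str:
    """Map a minimum fungicidal concentration (mg/mL) to the susceptibility class
    defined relative to the standard fungicide."""
    if mfc_mg_per_ml <= 25:
        return "highly susceptible"
    if mfc_mg_per_ml <= 50:
        return "moderately susceptible"
    if mfc_mg_per_ml <= 75:
        return "partially resistant"
    return "resistant"
```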
Antifungal edible coating
Edible coating (EC) preparation
The EC preparation was adopted, with some modification, from Tayel et al. [6]. Briefly, the bioactive NCT-based materials (i.e. NCT alone, NCT/PPE and NCT/PPE/SeNPs) were dissolved in acidified DW (pH 5) at their MFC values; glycerol was added to the solutions as a plasticizer at 5% v/v.
Fruits treatment
Organically farmed Navel oranges (Citrus sinensis) were gathered from the USC Research farm "University of Sadat City, Egypt"; the fruits' average diameter was 8.3 ± 0.4 cm and they were all free of any surface damage, injuries or apparent infections. Before coating, the fruits were showered with DW, disinfected via immersion in NaOCl solution (5%) for 2.5 min, rinsed again with DW and drained until dryness. The orange fruits were wounded at one point on the equator using a sterile cutter (3 mm wide × 3 mm deep). Injured fruits were submerged in 1200 mL of Pd O fungal SS for 8 min (to generate artificial infections), drained for 15 min and aseptically air-dried for 90 min. The fruits were subsequently dipped into the NCT-based ECs (~1 L) for 5 min with stirring and air-dried at RT. Oranges treated with NCT-free EC represented the control group. EC-coated oranges were kept in a sterile humid room (90% RH) for 14 days at controlled RT. The diameters of fungal lesions (LD) were gauged routinely throughout the inoculation period [7].
Microscopic observation of treated fungal mycelia
The alteration in P. digitatum mycelial morphology after treatment with the MFC of NCT/PPE/SeNPs was detected microscopically with a digital optical microscope "Labomed Lx400; Labo America Inc., Fremont, CA", after incubation of fungal mycelia with the nanocomposite for 12 and 24 h under stirring and staining of the treated mycelia with lactophenol blue (Sigma-Aldrich, MO).
Statistical analysis
Experiments were performed in triplicate; the means and SDs (standard deviations) were computed and compared using SPSS V-20 software. The significance of differences was computed by one-way ANOVA at p ≤ 0.05.
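As an illustration only (the study itself used SPSS V-20), the same one-way ANOVA comparison of triplicate measurements could be reproduced with SciPy; the values below are placeholders, not data from the study.

```python
from scipy.stats import f_oneway

# Hypothetical triplicate inhibition-zone diameters (mm) for three treatments.
nct = [18.2, 17.9, 18.4]
ppe_senps = [24.1, 23.8, 24.5]
nct_ppe_senps = [29.3, 29.0, 29.6]

f_stat, p_value = f_oneway(nct, ppe_senps, nct_ppe_senps)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value <= 0.05}")
```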
Chitosan characteristics
Assessment of the physiochemical characteristics of the produced chitosan revealed that the biopolymer had a degree of deacetylation (DD) of 88.13%, 97.4% solubility in acidic solution (1.5% acetic acid aqueous solution) without external heating or sonication, and a molecular weight of 46.72 kDa.
Biochemical analysis of employed materials
The FTIR biochemical analyses of the fabricated compounds are presented in Fig. 1. The PPE spectrum designated the key biochemical bonds of the extract (Fig. 1-PPE); the designative peaks in the PPE spectrum were detected at 3347, 2977, 2888, 1723, 1601, 1361, 1439, 1182, 1042 and 879 cm−1.
The leading biochemical groups/bonds in PPE that contributed to SeNPs synthesis/reduction were screened from the combined FTIR spectrum (Fig. 1-PPE/SeNPs). Many distinctive bands in the PPE spectrum were shifted, disappeared or had altered intensities after interaction with SeNPs. Other bands emerged, as indicators of newly formed bonds between PPE biomolecules and SeNPs.
The FTIR spectrum of NCT conjugated with PPE/SeNPs illustrated their strong interactions (Fig. 1-NCT/PPE/SeNPs); the spectrum contained numerous characteristic peaks from each compound.
Structural and physiochemical analysis of nanoparticles
The PPE phyto-reduction of sodium selenite (Na2SeO3) to SeNPs was attained and visually displayed by the change in the color of the synthesis solution from pale yellow to intense reddish-orange. The estimated Ps and ζ potential of NCT, PPE/SeNPs and NCT/PPE/SeNPs are illustrated in Table 1. PPE could effectually reduce SeNPs to miniature particles with a mean Ps diameter of 9.41 nm. The PPE-phytosynthesized SeNPs exhibited a strong negative ζ potential (−31.4 mV), which could preserve NP dispersion and prevent aggregation. The elevated ζ potential of the synthesized nanoparticles/composites could effectually conserve their stability, as evidenced by the electron microscope images (Fig. 2).
Table 2. In vitro antifungal assessment of the experimented agents against Penicillium digitatum isolates, via measurement of inhibition zone diameters (ZOI, mm) and minimal fungicidal concentrations (MFC, mg/mL). (1) The experimented agents included nanochitosan (NCT), extract of pomegranate peels (PPE), biosynthesized selenium nanoparticles with PPE (PPE/SeNPs), and their composites, compared to the standard fungicide imazilil. (2) Dissimilar superscript letters (a-d) in a column appoint a significant difference at P ≤ 0.05. (3) The MFC ranges for isolate susceptibility were set as: ≤ 25 mg/mL (highly susceptible), 25-50 mg/mL (moderately susceptible), 50-75 mg/mL (partially resistant) and > 75 mg/mL (resistant).
The SEM imaging of the NCT ultrastructure indicated their semispherical shapes and good distribution, with an average Ps of ~83.45 nm (Fig. 2A), which harmonized with the data attained from DLS analysis (Table 1).
The TEM topography of the SeNPs phytosynthesized by PPE (Fig. 2B) indicated the diminished Ps and consistent distribution of the SeNPs (with a 4.07-20.94 nm size range and 9.54 nm mean diameter), matching the DLS results (Table 1). The nanometal particles showed no apparent aggregation and appeared with spherical shapes and homogeneous Ps (Fig. 2B).
Antifungal activity of produced compounds
The in vitro antifungal assessments of the experimented agents (NCT, PPE, PPE/SeNPs and NCT/PPE/SeNPs), compared to imazilil (the standard fungicide), against the P. digitatum isolates are illustrated using qualitative and quantitative assaying methods (Table 2). All experimented agents/composites exhibited remarkable antifungal activities toward all P. digitatum isolates. The nanocomposite (NCT/PPE/SeNPs) was the strongest, and its fungicidal activity significantly exceeded the actions of the other agents and of the commercial fungicide imazilil. Antifungal synergism between agents was evidenced, as the combination of multiple agents (e.g. PPE/SeNPs and NCT/PPE/SeNPs) displayed more forceful effects than the individual compounds (e.g. NCT and PPE). Regarding the P. digitatum isolates' sensitivity to the challenging agents, the isolate Pd O was the most resistant and the standard strain Pd S (ATCC-10030) was the most sensitive, as evidenced by their ZOI and MFC values (Table 2). All fungal strains were "highly susceptible" to both PPE/SeNPs and NCT/PPE/SeNPs, while they were considered "moderately susceptible" toward NCT and PPE.
Microscopic observation of treated fungal mycelia
The alteration in P. digitatum mycelial morphology after treatment with the MFC of NCT/PPE/SeNPs was microscopically detected (Fig. 3). At the beginning of the trial, the fungal mycelium appeared healthy and strong; the mycelial wall/surface was smooth and dense with no apparent distortions (Fig. 3A). After 6 h of exposure, the mycelium showed irregular swellings and fragmentation, with softening of the walls and the appearance of distortion signs (Fig. 3B). By the end of the exposure, after 12 h, the fungal mycelia were mostly lysed and had lost their distinctive structures; the interior cellular components leaked out of the hyphae at this stage (Fig. 3C).
Treatment of orange fruits with NCT-based edible coating
The consequences of treating orange fruits with the NCT-based edible coatings (i.e. plain NCT, NCT/PPE and NCT/PPE/SeNPs), after 10 days of infection with P. digitatum Pd O, are shown photographically (Fig. 4). While the control (uncoated) fruits became fully decayed and the fungal growth covered the entire fruit, the coated fruits exhibited reduced infection signs (Fig. 4). Coating with the NCT/PPE/SeNPs-based solution completely protected the orange fruits from any infestation signs and preserved the fresh-like appearance and texture of the treated fruits. The infection signs in NCT-coated fruits covered ~8.3 ± 1.2% of the fruits' surface area, whereas in NCT/PPE-coated fruits the fungal infestation covered only ~4.6 ± 0.7% of the fruits' surface area (Fig. 4). Interestingly, coating the fruits with the NCT/PPE/SeNPs nanocomposite maintained the quality of the coated fruits for a further 20 days beyond the experiment duration, without any infection signs.
Discussion
Chitosan was successfully generated in the current study; the attained chitosan characteristics indicated its successful extraction, as chitosan should have a DD of ≥ 70%, which indicates effectual deacetylation of the chitin substrate [31,37,41].
The FTIR analysis indicated the most effectual bonds/groups in the screened molecules. The NCT spectrum (Fig. 1-NCT) had the main characteristic bands typical of natural chitosan [41,42]. The band around 3426 cm−1 indicated the main locations for TPP interactions with chitosan [37]. The bands that appeared at 2919 and 2874 cm−1 are indicative of C-H symmetric/asymmetric stretching, which are typical bands for polysaccharides. The following detected bands are distinctive of NCT biochemical bonding: ~1655 cm−1 (stretched C=O of amide I); 1321 cm−1 (vibrated C-N stretching); 1411 and 1358 cm−1 (CH2 bending and CH3 symmetrical deformations); 1153 cm−1 (bridge of C-O-C asymmetric stretching); 1066 and 1025 cm−1 (C-O stretching) [43][44][45]. The peaks that appeared at 1153 and 1066 cm−1 indicated the C-O overlapping and the formation of NCT after interaction of the PO4 and NH4 groups in the NCT molecules; the peak at 1196 cm−1, corresponding to the stretched P=O bond, also validated NCT synthesis following TPP interaction [37,43].
The designated biochemical bonds in the PPE spectrum included the bands at 3347 cm−1 [bonded -NH and -OH groups of carboxylic acid (CA), gallic acid, tannic acid and ellagic acid] [17,46] and at 2977 cm−1. The combined PPE/SeNPs spectral analysis indicated the groups in PPE most responsible for the biosynthesis of SeNPs. The PPE band at 3426 cm−1 shifted to 3482 cm−1 in the PPE/SeNPs spectrum, indicating Se interaction with N-H and O-H groups, whereas the C-H band (at 2888 cm−1 in the PPE spectrum) mostly disappeared in the PPE/SeNPs spectrum, indicating its role in SeNPs conjugation/reduction [22]. Also, the bands in the PPE spectrum at 1723 cm−1 (N-H of amides and CA groups) and at 1601 cm−1 (aromatic ring C=C) were remarkably shifted, as an indicator of their roles in SeNPs synthesis/reduction [23,46]. The peak at 1439 cm−1 (aromatic rings in the PPE spectrum) shifted to 1378 cm−1 in the PPE/SeNPs spectrum. Furthermore, the emergence of multiple notable bands at 1603 cm−1 and in the range of 756-812 cm−1 in the PPE/SeNPs spectrum clearly indicated the formation of novel bonds and bending vibrations (mainly of Se-O) after the interaction of Se ions with PPE biomolecules [24,48]. These detectable bands in the PPE/SeNPs spectrum strongly validated the PPE potentiality for conjugating, reducing and stabilizing SeNPs; the reduction/stabilization of SeNPs forms depends predominantly on the nature of the biomolecules and their stabilization capability, which enables Se ions to interact with them [24,46,49]. Thus, PPE can be advocated as a valued stabilizer/reducer for SeNPs biosynthesis.
FTIR analyses are useful to assess whether the conjugation of PPE/SeNPs with NCT is physical or chemical entrapment; if minimal or no deviations from the parental compounds' FTIR spectra are observed, physical entrapment is expected, whereas shifted spectral bands or varied intensities indicate probable chemical interactions between molecules [20]. As many peaks in the NCT/PPE/SeNPs spectrum were shifted and varied from those of their parental compounds (NCT and PPE/SeNPs), alongside the identical peaks detected from both agents, the spectral comparison strongly indicates both biochemical and physical interactions during PPE/SeNPs entrapment within NCT [20,44].
The high capability of NCT for capping SeNPs and forming highly stable nanocomposites with minute Ps was demonstrated previously [27,42]. Generally, NPs with elevated ζ potentials (≥ +30 mV or ≤ −30 mV) display high degrees of stability and dispersity due to electrostatic repulsion between particles [50,51].
NCT synthesis via TPP cross-linking, employing ionic-gelation interaction, was proven to be an effective operative protocol; the NCT synthesized with such a protocol had astonishing properties for practical employment, either as a plain bioactive molecule, as a nanocarrier for other bioactive constituents, or as a base for active ECs [27,37].
The general antimicrobial and specific antifungal potentialities of screened agents have been documented toward various microbial pathogens [3,19,36,52]. The chitosan microbicidal actions depend mainly on its surface positive charges, which enable its attachment and interaction with microbial membranes and internal organelles, beside increase of intracellular ROS "reactive oxygen species" production, suppress cellular bioactivities and upsurge cellular membranes' permeability [32,33]. These actions become more forceful and effectual by transforming the biomolecule to nanoforms (e.g. NCT), because of the increased reacted surface area and tiny Ps that enable more effectual interactions and biocidal actions [30,32]. The PPE antimicrobial activities are principally attributed to its bioactive phytoconstituents (e.g. phenolics, tannins, alkaloids, flavonoids and acids), which were previously investigated, validated and applied for controlling numerous bacterial and fungal pathogens [15,17,19].
The PPE-mediated nanometals were also verified as potent microbicidal agents that possess the synergistic actions of both PPE and the synthesized nanometals, including Se, Ag, Au and Zn NPs [22,23,48].
For NCT/PPE/SeNPs, which was innovatively composited in the current investigation, the antifungal synergism between the compositing agents (NCT, PPE and SeNPs) was clear and forceful, as evidenced by the widest ZOIs and lowest MFC values; this indicates that the composite ingredients could preserve their distinctive antifungal actions. Matching findings were recently reported [20], employing NCT and PPE composites as antioxidant conjugates. Additionally, the application of NCT for carrying, capping and delivering further bioactive molecules such as plant extracts, essential oils and nanometals has been reported to augment their combined actions as antimicrobial, antioxidant or even anticancerous nanocomposites [29,[43][44][45]]. These former findings could verify the role obtained here of NCT in strengthening the antifungal actions of both PPE and SeNPs.
The antifungal potentialities of NCT and its parent chitosan have been proved toward numerous postharvest pathogens; the exact modes of action are still vague, but it can be suggested that the positively charged NCT attaches to hyphal walls, interacts with fungal membranes and penetrates within these membranes to inhibit/destruct the fungal biosystems and lead to their lysis [30,32,34]. The PPE/SeNPs are suggested to damage microbial cells because of their combined biocidal activities. The destruction and deformation of P. digitatum hyphae was formerly observed after treating them with PPE-related phytochemicals [13], which supports the results obtained here, in addition to the SeNPs antifungal action.
The innovative nanocomposite presented here (NCT/PPE/SeNPs) is suggested to perform multiple actions: first, the NCT carries/holds the PPE/SeNPs to the fungal hypha and attaches to/interacts with it to cause softening and partial lysis of the membranes; then it can penetrate inside the hypha, where the liberated PPE/SeNPs, beside the NCT, are capable of intermingling with intracellular organelles/biosystems to suppress their vital functions, which consequently leads to fungal deformation and lysis [30,32,53].
The chitosan-and NCT-based ECs were recurrently validated as effectual treatments for preventing postharvest decays/losses in many agricultural crops. The main distinguished functions of these ECs, beside the antimicrobial actions, are to form barriers against fungal new infection, protect fruit from moisture loss and manage the respiration and over-ripening of coated crops [29,30,35,36]. PPE was also the principal component of ECs for many fruits; the extract could eliminate microbial growth on fruit and maintain their freshness because of powerful PPE antimicrobial and antioxidant potentialities [3,7]. Furthermore, the conjugation of chitosan and PPE in ECs of fruits and vegetables could have higher functionalities than each individual component for preserving coated crops, enlarging their shelf lives and prevent their microbial decays [18,46]. These functions were suggested to be elevated with conjugation of NCT with PPE and their usage in ECs of fruits. NCT has higher capabilities to encase the whole fruits surface, fill their pores, deliver the accompanied molecules to fruit, and prevent them from fungal invasions and quality loss [20,37,45]. The combination of NCT with PPE/SeNPs is innovatively presented here to employ this nanocomposite as effectual EC for orange fruit; the biosafe nature of NCT and its elevated capping ability could provide more biosafety attributes toward the potential toxicity from SeNPs, as the embedding of nanometals into biopolymer matrix was previously proven to diminish their biotoxicity and increase their biocompatibility and safety [27,31,35,42,53].
Author contributions MFS contributed in the conception, design of the work, investigation and analysis; WAA contributed in design of the work, investigation and analysis, work drafting; AAT contributed in the conception, investigation, interpretation of data, supervision, work drafting and submission; FMA contributed in interpretation of data, resources, administration and work drafting; OMA contributed in the conception, interpretation of data, work drafting, revising and supervision. All authors read and approved the final manuscript.
Funding
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Data availability
All data generated or analysed during this study are included in this published article.
Declarations
Ethics approval and consent to participate Not applicable.
Consent for publication
Not applicable. | 2022-04-07T13:43:17.615Z | 2022-04-07T00:00:00.000 | {
"year": 2022,
"sha1": "df57438349cb93d80e52b2fc1b926a81384ca237",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "df57438349cb93d80e52b2fc1b926a81384ca237",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
152075291 | pes2o/s2orc | v3-fos-license | Valued and performed or not? Teachers’ ratings of transition activities for young children with learning disability
Abstract Stakeholder collaboration has been identified as a facilitator for positive transition outcomes for all children, and especially for children in need of special support. However, the type and extent of stakeholder collaboration have shown to be related to teachers’ view of their transition practises. Thus, this study set out to examine the transition activities reported by 253 teachers in Compulsory School for Students with Learning Disabilities in Sweden. The purpose was to study the type of transition activities performed and how important teachers regarded these activities to be. The results show that overall teachers are engaged in transition activities that can be described as mainly traditional, as they do not differ from transition activities carried out in other educational settings. The results also show that untraditional transition activities, such as home visits and joint parent meetings with preschools, are viewed as important, but rarely executed. The results are discussed from an ecological systems perspective, emphasising the interconnectedness of individuals and their environment. Focus is given to individualised transition processes and developmentally appropriate transition activities for young children with learning disability.
During the past decades, transition research has emphasised a holistic view of children's educational transitions, with a strong focus on stakeholder collaboration. Stakeholder collaboration has been identified as a facilitator for positive transition outcomes for all children, and especially for children in need of special support due to a physical or learning disability (McIntyre, Blacher, and Baker 2006;Walker et al. 2012). The level of collaboration, the transition activities performed and stakeholder views of transition have been conceptualised by Niesel and Griebel (2005) as the transition competence of the whole system encompassing the educational transition of a child. This paper sets out to discuss the transition competence in terms of transition activities performed and valued by teachers in Compulsory School for Students with Learning Disabilities (CSSLD) in Sweden.
The Swedish context -from preschool to CSSLD
In Sweden, young children with learning disabilities (LD) are enrolled in inclusive preschool settings together with other children in need of additional support as well as typically developing children. From the age of six, the majority of children make the transition to preschool class which is a voluntary school form with content both from preschool and primary school, before entering compulsory primary school at the age of seven.
Children with LD can follow the same educational path to compulsory primary school as their peers, but often with additional resource allocation. There exists, however, a parallel school form in Sweden called CSSLD. This school form aims to offer educational practices based on the needs of individual children diagnosed with LD who are not anticipated to be able to reach the educational goals of compulsory school (SFS 2010). Approximately 1% (n = 9 656) of all children aged 7-15 were enrolled in CSSLD in the academic year 2012-2013, divided among 667 CSSLDs in 280 Swedish municipalities (National Agency of Education 2014). When compared with the 1275 children with LD who are enrolled in inclusive settings, it becomes evident that children in CSSLD make up a fairly large number of children with LD.
Before placement in CSSLD is offered, a social, pedagogical, medical and psychological investigation is performed in order to establish that the child has LD and to rule out other explanations to why the child is anticipated not to reach the educational goals in compulsory school. The final decision on enrolment in CSSLD is always made by the parents, who generally have discussed the implication of enrolment in CSSLD with the principal of CSSLD and CSSLD coordinator, a resource provided in larger cities. Prior to making the decision, parents often also make visits to a CSSLD.
Adopting an ecological perspective to the concept of transition competence
As the research of educational transitions has expanded, the terminology has mirrored the changed views of educational transitions. The views have shifted from describing the outcomes of educational transitions as primarily dependent upon an innate state of readiness within the child successfully adapting to the new setting to a more holistic view of the transition as a process including the whole family (Griebel and Niesel 2013). The holistic view recognises the importance of a collective responsibility of everyone involved in the educational transition. This view incorporates well with the ecological perspective (Rimm-Kaufman and Pianta 2000) recognising the importance of all stakeholders and aiming to narrow the distance between the different microenvironments of the child, as well as providing a theoretical framework to understand transition process occurring on multiple layers. Niesel and Griebel (2005) conceptualised the transition process through the concept of transition competence, understood as the competence of the whole system encompassing the educational transition of a child. They describe the concept from three different levels: the individual, the interactive and the contextual levels, and argue that in order for a transition to be positive, these levels need to interact and strive towards a common goal.
Another aspect of positive transitions that has been identified in previous research is that they are characterised by balance regarding continuity and aspects of change (Broström 2002;Arnold et al. 2007;Peters 2010). Change is an inevitable part of transition: Dockett (2014) defined transition as 'a time of individual and social change, influenced by communities and contexts and, within these, the relationships, identities, agencies and power of all involved ' (192). Thus, the process of educational transition creates changes on multiple levels. Adopting the levels presented by Niesel and Griebel (2005) from an ecological perspective, the transition process can generate change on an individual level, for example by the experiences of children of increased academic demands and changed behavioural expectations and roles connected to the new status of being a schoolboy/girl, as well as the change parents experience of being a parent to a schoolchild. On an interactive level, a change might be how, during the preschool years, parents have traditionally contact with preschool teachers when leaving and picking up the children from preschools. In schools, however, this often changes as children might travel to school with taxi or school bus and stay at leisure time centres prior and after the school day. Thus, the day-to-day contact that parents are used to have from preschool changes to transition of information through other channels, such as contact-book or email (Wilder and Lillvist 2016). Changes on a contextual level can be understood as the meso-environment, the interconnection of all the microenvironments that encompass the life of each person, such as home, work and school. Consequently, these connections between different layers of the bio-ecological model are dependent upon interaction and collaboration. How transition is conceptualised in the curriculum and policy documents of preschool and CSSLD affects children via an indirect circuit by how, for example, teachers interpret and implement transition activities in accordance with policy documents and different guidelines. In autumn 2015, The Swedish National Agency for Education proposed an amendment to all school curriculums, including the curriculum for preschool and CSSLD, emphasising coherence, continuity and progression in children's learning and development by strengthening the collaboration of stakeholders across educational settings. In the long run, this aims for a greater equivalence in the quality of transitions. This proposed amendment can be understood as a sign of increased importance now given to educational transitions, from a macro-system perspective and can thus affect both the type and number of transition activities offered to children and families by teachers in CSSLD.
Positive transitions are a result of co-construction (Niesel and Griebel 2005;Ahtola et al. 2012). Parent satisfaction over the transition process has been identified as a factor related to child adjustment (Rosenkoetter et al. 2009;McIntyre et al. 2010), and in reviewing the current research on family-professional partnerships, Keen (2007) concludes that honesty and trust, as well as planning and deciding upon shared goals are important factors for an effective partnership between parents and professionals. However, Hornby and Lafaele (2011) state that there is a 'rhetoric-reality gap' (s.37) of parental involvement, i.e. that the collective notion of parental involvement in education is seen as an important factor for the educational outcome of children, but research shows that there are actually many barriers to parental involvement. Although the field of educational transitions is becoming well-researched, there is still scarce knowledge of how families and stakeholders from educational establishments interact in times of transition (O'Toole, Hayes, and Mhathúna 2014).
Transition competence and transition activities
An educational transition is more than just the transition moment of stepping out from a well-known context into something new. Educational transitions generally encompass preparations and arrangements prior to the transition as well as adaptations and adjustments during and after the transition. Thus, the length of the actual transition can differ, depending on the experiences of the child and all stakeholders involved. For example, a child might easily adjust to the new context and gain a feeling of belonging, while the parents might have a longer period of transition due to the changes it brings along in terms of loss of relationships, new routines and feelings of insecurity. Dockett, Perry, and Kearney (2011) argue that an educational transition does not end 'until the child and family feels comfortable at school' (46). Further, Bulkeley and Fabian (2006) discuss the importance of the educational transition landing in a sense of belonging, accomplished only by bridging the past and the future of the child.
The importance of transition practices and activities is a growing research topic. For example, LoCasale-Crouch et al. (2008) examined pre-kindergarten teachers' use of transition practices and how these were connected to children's adjustment to kindergarten. The results showed that both the number and type of transition activities performed were positively related to the children's adjustment as perceived by the teachers. A multitude of activities and especially activities in which the children themselves took active part in seemed to foster social-emotional adjustment. In the study of Ahtola et al. (2012), transition practices of entrance to formal school were researched from a bio-ecological perspective in a Finnish context. Their results showed that local guidelines on municipality level and teacher reports on the importance of transition activities were the only variables connected to the actual implementation of the transition practises between elementary schools and partner preschools. Commitment from individual teachers can, according to Ahtola et al. (2012), serve as a motor for innovative and versatile transition practises, and seem to be of great importance to the actual implementation of transition practises. This indicates the importance of not only investigating the type and number of transition activities performed, but also teachers' perceived value of them. The results also showed that school-level factors, such as the size of the school and number of preschool partners, did not significantly predict the number of performed transition activities (Ahtola et al. 2012). Teachers reported challenges for conducting transition activities to be lack of time and administrational obstacles. The authors call for greater alignment and coordination among all system levels that transitions, or the handling of transitions, are embedded in. They conclude that an alignment between policy, national, local, school and individual levels enables effective teaching processes and positive child outcomes. Rous and Hallam (2006) have developed a transition theory based on an ecological perspective and assumptions from organisation theory providing three stages of outcomes important for transitions. One of the stages includes interagency agreement, and they (2006) outline three components that all stakeholders need to agree on. These are: description of the responsibilities, roles and action of each stakeholder, clarification of economic resources, and lastly the timeframe for the transition and agreement between stakeholders. It is possible that these aspects are even more important when the transitions concern children with special needs.
Positive transitions of children with special needs
At a general level, seeing positive transitions as the balance between change and continuity and as stakeholder collaboration is applicable also in understanding the educational transitions of young children with special needs. However, an even more complex picture emerges when the focus is on children with special needs. McIntyre, Blacher, and Baker (2006) compared the adaptation to school of young children with and without intellectual disabilities (ID) and found that children with ID had less positive experiences than their typically developing peers, in terms of more difficulties related to social and regulatory behaviours, impacting overall adaptation to school. The authors stress the importance of developmentally appropriate school activities and how building equal partnerships with parents could be one way of establishing a more positive transition. Thus, when children have learning disabilities and might not be able to communicate by themselves, the collaborative task of teachers and parents becomes even more important (Lovett and Haring 2003). However, findings from research on parent involvement have shown that parents are generally given little agency in the transition process (Villeneuve et al. 2013). For example, in a qualitative study undertaken by Dockett, Perry, and Kearney (2011), the families expressed that the individual transition was subsumed under a ready-made system for educational transitions of children with special needs. Thus, the families had to adapt to the system, rather than the opposite. Another challenge, reported by Janus et al. (2008), is the discrepancy between the policy level, stating the access to support in the transition, and the executive level, the actual support given to the families. Families felt uninformed, and planning for transition was left with periods of waiting for decisions to be made by the school boards. Villeneuve et al. (2013) conclude from a Canadian context that all parents should be supported by a key facilitator in the school context throughout their child's first year of school. Based on previous research pointing to the importance of transition practices and activities in the educational transitions of young children in need of special support, it is interesting to explore the transition practices adopted by teachers in CSSLDs in Sweden, in terms of what teachers say they do and how they value the transition activities.
Aim and research questions
The aim of this study is to examine the transition activities reported on by 253 teachers in CSSLD, in Sweden. The purpose was to study the type of transition activities performed and how important teachers regarded these activities to be.
Participants
Participants in the study were teachers (n = 253) who worked in CSSLD and had experience of receiving 6-7-year-old children with learning disabilities into their classes. Approximately 70% were in the age range of 40-60, and close to 50% reported having multiple diplomas, such as a preschool teacher education, a special needs teacher diploma and an elementary school teacher education. Regarding the participants' work experience, 24% reported working less than five years as teachers in CSSLD, and close to 60% had been working between 5 and 15 years in the field. Only a few (7%) had a long career stretching over 20 years as teachers in CSSLD. The majority (94%) were women teaching grades 1-3, in which children aged 6-9 years are enrolled, or grades 1-6, teaching children up to age 12. The classes in CSSLD are organised as mixed-age groups in the same classroom. For example, a teacher might have one student aged 7 enrolled in the first grade of CSSLD and the rest of the students in grade two or three. Teachers teaching grades 1-6 can thus have a mixed group aged 6-12 years in the class. Forty-three per cent of the participants reported that the children in CSSLD had a combination of several medical conditions in addition to learning disability, such as communicative difficulties and/or motor disability.
Procedure
According to statistics from the Swedish National Agency of Education (2014), there were 4204 teachers employed in the 667 CSSLDs in Sweden in the academic year 2012-2013, teaching students aged 7-15. The ambition was to cover approximately 400, or 10%, of the total number of teachers in CSSLD in our sample. In order to meet this criterion, a procedure of convenience sampling was undertaken from autumn 2013 until spring 2015. Information about the study and a call for interest were posted on Internet networks for teachers working in CSSLD. Contact was established with CSSLD coordinators, a resource provided in larger cities where there are multiple CSSLDs. The coordinators provided contact information for a total of 64 CSSLDs within the municipalities they were working in. The questionnaires were sent by post to the CSSLD principals, who then distributed them along with stamped envelopes to the teachers. Reminders were emailed to the principals after approximately three weeks. Furthermore, nine Swedish universities that hosted CSSLD teacher programmes, where teachers working in CSSLD can study a postgraduate programme to receive formal competence as CSSLD teachers, were visited and engaged in the recruitment of participants. Questionnaires were taken to the classes and students were given time to complete them on site.
Through the described convenience sampling, a total of 685 questionnaires were distributed and 253 were returned complete, thus resulting in a smaller sample than anticipated. An additional nine questionnaires were returned but excluded due to incomplete responses. The attrition rate (37.0%) can be explained by the difficulty of locating those teachers who had experience of receiving children aged 6-7 to CSSLD. According to statistics from the National Agency of Education (2014), most students are enrolled in CSSLD after a few years in regular compulsory school, and relatively few children transition to CSSLD from preschool or preschool class. Consequently, many teachers did not have experience of receiving children aged 6-7 to their classes and did not complete the questionnaire. Information about how many teachers had experience of receiving children from preschool or preschool class could not be obtained before data collection.
Instruments
A questionnaire was developed by the authors, with parts of the questionnaire inspired by a Norwegian questionnaire about collaboration between preschool and regular school (Hogsnes and Moser 2014). Earlier research about the special needs of children with learning disabilities and the collaboration between school and home was used as a framework for including important themes. A preliminary version was tested in a pilot study with 22 CSSLD teacher students, and the final version of the survey included two parts. The first part consisted of background questions about the teachers' gender, age, current working place, teacher education and characteristics of the children with learning disabilities that they received to their school, including questions about additional diagnoses of the children, for example how many children had motor disabilities, communicative difficulties or other medical conditions in addition to learning disability. A total of 43% of the participants reported children to have a combination of several of the above-mentioned conditions, and 22% reported that children in their class had communicative difficulties in addition to learning disability. The second, core part of the survey was structured and focused on pre- and post-transition organisation, transition activities and teacher attitudes. Transition activities were rated both in terms of how often they were performed and how important teachers perceived these activities to be. This core questionnaire included 51 items concerning routines before and during transitions, collaborative activities in receiving children, the type of information passed on between stakeholders, teacher attitudes and knowledge about policies and practice, other micro-environments, and time management, most of which were rated on a five-point Likert scale. In this study, questions related to the type of transition activities and teacher attitudes about policies and practice are analysed, in correspondence with the aim of this paper. All questions focused on general practice regarding transitions and were not related to the transition experiences of a specific child.
Data analysis
Data were analysed in the statistical package SPSS 18.0, calculating descriptive statistics, Pearson correlations and t-tests. Analyses were performed on item level.
Results
The type and frequency of transition activities performed were analysed with 10 variables asking teachers to specify the occurrence of specific transition activities, such as parent meetings and visits to preschool before children start school, on a five-point Likert scale ranging from 1 (never) to 5 (always). The results showed that the transition activities that teachers most often engage in were: to visit the child in preschool before school start, to meet with preschool teachers to talk about what the child has learnt and experienced in preschool, and to have individual meetings with the parents, with or without the children, before school start (see Table 1).
In Table 1, the mean values and standard deviation of each transition activity are displayed. The results are divided in how often they were performed and how important they were rated by the teachers. As can be seen, the mean values are generally high for several of the activities, indicating that they are performed often. The standard deviation, however, shows that there is some variability in the ratings.
As a next step, the correlations between how often the transition activities were performed and how important they were rated were investigated. The results, presented in Table 2, show significant positive correlations for all activities, indicating that if activities were performed, they were also rated as important to a greater degree. The correlations ranged from moderate to high, with the strongest correlation for the activity 'CSSLD organize parent meetings before children start school' at .685, significant at the 0.001 level.
Along with correlational statistics, Table 2 also displays the distribution of ratings to four categories for each transition activity. These categories were labelled as performed and important, performed but not important, important but not performed and neither performed nor important. Ratings were assigned to categories based on their scores on the Likert scale for each transition activity. Each transition activity was rated on a Likert scale ranging from 1 to 5, with 1 indicating that the activity was never or rarely performed, and 5 that the activity was always performed. The importance of each transition activity was rated correspondingly. Thus, high ratings on both performance and importance were categorised as performed and important, as opposed to low values on performance and importance which yielded the category neither performed nor important. The remaining two categories were made up by ratings where performance was rated high and importance low, and the opposite.
The results show that most transition activities were performed and valued as important. The activity that was most often performed and seen as important was the individual meetings that teachers have with parents and their children before school start. This activity was rated as often occurring and seen as very important by 187 teachers. The same patterns were shown for the following activities: CSSLD teacher and preschool teacher meet to talk about what the child has learnt and experienced in preschool, and CSSLD teachers have individual meetings with parents before school start. These activities were reported as often occurring and seen as important by 178 and 175 teachers, respectively. Only a few (3-9) teachers reported activities as often performed but not seen as important. A larger proportion of teachers valued activities as very important but reported them to be performed never or rarely. The activity that most teachers valued as important but rarely performed was for CSSLD and preschool to have joint parent meetings, followed by meetings with preschool to talk about what children will learn in CSSLD, and home visits. Although home visits were identified as important by 99 teachers, 121 did not see them as an important transition activity. A similar pattern was found for joint parent meetings.
Table 2. Number of matching ratings between performance and importance of transition activities. Notes: the ratings always and often were categorised as performed, and the ratings never and seldom were categorised as not performed. **Correlation significant at the 0.01 level (2-tailed).
Table 2 presents the distribution of ratings among the activities, showing which transition activities were performed and valued as important, and which were valued and performed to a lesser degree. But in order to investigate the profile of each teacher, a total score was created from the 10 ratings. Activities rated as both performed and important generated four points, activities seen as important but not performed generated three points, and two points were given if activities were performed but not valued as important. One point was given for activities neither valued nor performed. Thus, the 10 activities rated generated a maximum score of 40 points for each teacher, with a higher score representing transition activities both performed and valued as important, and low values indicating the opposite. Missing value imputation was used for those who had four missing values or less (7.5% of the data), using the nearest neighbour method (Chen and Shao 2000). In Table 3, the total scores are presented. The teacher scores ranged from 16 to 39, with 65% of 254 teachers scoring between 32 and 40. Comparing the teacher scores with the frequencies reported in Table 2, it becomes evident that the low scores were distributed evenly across teachers, meaning the majority of teachers had both high and low ratings in their total score, but with a clear overweight on high ratings. As seen in Table 3, no one was assigned to group 1 and only one person was assigned to group 2 and was thus excluded from further analysis. As these analyses all present data on group level, the next step was to zoom in on those teachers who regarded many of the transition activities as important but reported not performing them, and to compare this group with the rest of the participants. Based on previous research on factors that affect the implementation of transition practices, the following questions were identified for comparative analysis between the two groups: (a) does the CSSLD have guidelines for transition practices, (b) to what extent do teachers report receiving support from the school principal regarding the implementation of transition practices, and (c) to what extent do teachers feel supported by their colleagues regarding transition practices.
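As a rough sketch of the total-score construction and imputation described above (illustrative only, not the authors' procedure; the handling of the Likert midpoint and the distance used for the nearest-neighbour matching are assumptions), the computation could look as follows in Python:

import numpy as np

def activity_points(performed, important, cutoff=4):
    # "often/always" (>= cutoff) counts as performed/important; treating the midpoint (3) as "not" is an assumption
    hi_p, hi_i = performed >= cutoff, important >= cutoff
    if hi_p and hi_i:
        return 4   # performed and important
    if hi_i:
        return 3   # important but not performed
    if hi_p:
        return 2   # performed but not important
    return 1       # neither performed nor important

def total_score(performance_ratings, importance_ratings):
    # 10 activities -> maximum of 40 points per teacher
    return sum(activity_points(p, i) for p, i in zip(performance_ratings, importance_ratings))

def impute_nearest(row, complete_rows):
    # replace up to four missing ratings with those of the closest fully answered respondent
    observed = ~np.isnan(row)
    donor = min(complete_rows, key=lambda r: float(np.sum((r[observed] - row[observed]) ** 2)))
    filled = row.copy()
    filled[~observed] = donor[~observed]
    return filled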
Whether the school had any guidelines for the transition activities was calculated as frequencies. Of the total number of 253 participants, 131 reported that the CSSLD had guidelines for transition activities. An additional 65 reported not having guidelines, and 54 did not know whether guidelines existed at their school. The corresponding numbers for the valued-but-not-performed group were 64, 26 and 20, thus following the same pattern as for the rest of the participants.
Independent t-test analysis was performed comparing the two groups on perceived support from the school principal. There was no significant difference in the scores for perceived support from the principal between the group that valued but did not perform transition activities (M = 3.23, SD = 1.15) and the rest of the participants (M = 4.03, SD = 4.09); t(229) = 1.72, p = 0.08. The results show overall high mean values, indicating that most participants are satisfied with the support they received from their school principal. However, the mean values are slightly higher for the rest of the participants, although the difference is not statistically significant. How teachers in the two groups perceived the support from their colleagues was also investigated with a t-test. The results show overall high mean values but no significant differences between teachers who value but do not perform transition activities (M = 3.89, SD = 1.09) and the rest of the participants (M = 4.00, SD = 1.03), t(227) = 0.47, p = 0.96. These results show that the difference between the groups in the extent to which they value and perform transition activities could not be related to whether the CSSLD had guidelines for transition activities, nor to the extent to which they perceived that the CSSLD principal or colleagues supported them in the implementation of transition practices.
Discussion
In this study, the transition activities reported on by teachers in CSSLD were investigated. The purpose was to study the type of transition activities performed and how important teachers regarded these activities to be. The findings show that teachers are involved in different transition activities including parents, children and other professionals. Most of these activities focus on individual children and their families, for example visiting the child at preschool and meeting preschool teachers to get information about what the child has learnt in preschool. Another common transition activity that both teachers and school principals take part in is meetings with the parents, both with and without children. These activities are all performed prior to child entry to CSSLD, which indicates that the focus is on preparing for the transition. The data do not support a conclusion of whether the transition is looked upon as a process as suggested by research (Rimm-Kaufman and Pianta 2000; Dockett, Perry, and Kearney 2011). However, the fact that CSSLD teachers approach preschool teachers to talk about what children have learnt in preschool indicates an effort towards facilitating continuity in the process of transition. The balance between continuity and change has been identified as a key factor for successful transitions (Peters 2010; Broström 2002; Arnold et al. 2007), and, in conjunction with this, children's wellbeing and sense of belonging (Bulkeley and Fabian 2006). Another interesting finding is that although the continuity in educational transitions seemed important for teachers, this did not necessarily mirror the type and degree of transition activities performed. The most commonly performed transition activities were to visit the child in preschool before school start, to meet with preschool teachers to talk about what the child has learnt in preschool and to have individual meetings with the parents, with or without the children, before school start. These activities constitute traditional transition activities, much similar to those performed in any school. The results show, however, a wish for more untraditional activities, such as home visits and joint parent meetings with the preschool. These were identified as activities of great importance, although they were rarely performed. Plausible explanations for why activities are not performed although they are viewed as important can be discussed in the light of the rhetoric-reality gap of parental involvement identified by Hornby and Lafaele (2011), as well as Villeneuve et al. (2013), who describe the discrepancy between what is stated in policy documents and what is actually executed to support the process of transition. However, Ahtola et al. (2012) found that guidelines and activities on municipality level, as well as how teachers value the importance of the transition activities, were decisive for the implementation of transition activities. In our study, one group of 73 CSSLD teachers stood out in the sense that they scored high on valuing transition activities, but lower on the performance part. When comparing this group with the rest of the participants, no significant differences were found in the extent to which they felt support from the school principal or colleagues in implementing transition activities, nor in the extent to which they reported having guidelines for transition activities. It is possible that although guidelines exist, they might not support teachers in developing and adapting transition activities of a more untraditional nature.
The adaptation of the transition process to the needs and wishes of families of children with special needs has been discussed by McIntyre, Blacher and Baker (2006), who stress the importance of developmentally appropriate transition activities. This can be understood as a necessity to think outside the box and agree upon more untraditional transition activities as well, if the families see a need for this. Similar thoughts are presented by LoCasale-Crouch et al. (2008) who criticise the 'one-size-fit-all-approach' (135) of transition activities and conclude that a multitude of activities are preferable. In the study of Ahtola et al. (2012), teachers who valued the transition practices as important were also innovative and versatile in the implementation of the transition activities. Perhaps these teachers also have the ability to think outside the box and beyond the one-size-fits-all. That there is a need for versatility in the transition practices is pointed out by Dockett, Perry, and Kearney (2011), who report that families felt they had to adjust to already existing programmes, leaving few opportunities for them to influence the process of transition.
The theoretical framework of this study is grounded in the ecological perspective proposed by Rimm-Kaufman and Pianta (2000) and the concept of transition competence initiated by Niesel and Griebel (2005). The ecological perspective argues that transitions need to be understood in terms of the interrelations of the multiple settings and environments that encompass children's everyday life, and how these settings set conditions for children's entrance to school. Viewing our results from the ecological perspective, the slight discrepancy between performed and valued transition activities can be discussed from all system levels in the bio-ecological model. On the micro-level, the competence and beliefs of the teachers are important for the implementation of transition activities carried out on the meso-level. The meso-level can be described as the activities bridging the past and the future of the child, for example by linking content in preschool to what children will learn in CSSLD. The other, more distal systems can be described in terms of the local and national guidelines and curriculum that frame the practices. The concept of transition competence as a competence of the social system (Niesel and Griebel 2005) is aligned with an ecological perspective and the person-in-context interrelation. The results of this study indicate that the importance of continuity and of meeting children and other persons important for the children in settings familiar to them is captured in the transition activities which special teachers report to perform and value. It can be interpreted as a holistic view of the transition process (Rimm-Kaufman and Pianta 2000; Griebel and Niesel 2013) being adopted by the CSSLD teachers.
The data collected in this study are based on information from 253 teachers spread across CSSLDs in Sweden. The data showed low variability, although the sample of participants was located in both urban and rural schools and varied in level of education and experience of working as CSSLD teachers. A limitation of the questionnaire was that there was no room left for participants to add other transition activities to those listed. This means that other transition activities might possibly also be performed. Convenience sampling was conducted and participants were recruited with the help of CSSLD coordinators and principals, and by distributing the questionnaire to CSSLD teacher programmes at nine Swedish universities. Although the sample had a wide geographical distribution, the representativeness of the sample and the generalisability of the results should be interpreted with caution. The contribution of this study is that it sheds light on the transition practices of CSSLD in Sweden. Although the research field of educational transitions is growing, little interest has so far been given to the practices and processes that frame the educational transitions of young children with LD who transition from preschool to CSSLD. Our study describes the type and nature of transition activities on a group level, capturing only general transition practices of CSSLD. Thus, no conclusions can be drawn of how the transition practices are carried out for individual children and their families. This is a task for future research, which should, when possible, involve the perspective of children in the process of transition. Also, future longitudinal research is needed to study the transition process of individual children and families.
Conclusion
The transition of children with ID from preschool to CSSLD is paved with different types of transition activities. The results show that overall teachers are engaged in transition activities that can be described as mainly traditional, as they do not differ from transition activities carried out in other educational settings. In regard to the vast body of research promoting individualised transition processes based on family needs (Dockett, Perry, and Kearney 2011; Villeneuve et al., 2013) and developmentally appropriate transition activities (McIntyre, Blacher, and Baker 2006), we conclude that untraditional transition activities, such as for example home visits and joint parent meetings with preschool, might be a way to further individualise the transition process. The results also show that untraditional transition activities are viewed as important, but are rarely executed. These results have illuminated how the teachers in CSSLD view the transition activities as a part of the transitions process, but the results give no indication of the actual process of transition. To capture the process of transition, longitudinal research focusing on the co-construction of educational transitions through interrelated dynamic systems is needed. One way of reaching this is to examine the educational transition of young children with learning disabilities as co-constructed from the perspective of different stakeholders, such as teachers, parents and school principals.
"year": 2017,
"sha1": "9c11b3ce699ec4f78e12b63cb6796a8a849bcb83",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/08856257.2017.1295637?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "db1522a7461c6dcebfe1a93ab3308d3a4b6f48ed",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Economic Hardships in Managing COVID-19 Patients in the Intensive Care Unit: A Retrospective Observational Study at a Tertiary Care Hospital in North India
Background: The information on healthcare expenditure is crucial to know the impact of the pandemic on public health budgets, thereby correctly managing the ongoing crisis and preparing for subsequent waves. Objective: To estimate the length of stay and cost incurred on COVID-19 patients who died in the ICU. Methods: It is a record-based descriptive study conducted on 76 deceased COVID-19 patients admitted to the ICU of a dedicated COVID-19 hospital (DCH) between April and October 2020. The Central Government Health Services (CGHS) package rate list, Delhi-NCR, was used as a reference for the cost of the ICU bed, ventilator, investigations, and procedures. Results: The median duration of stay in the hospital was 12 days, and in the ICU, it was eight days. The median total cost of managing a patient was 91,235.6 INR; of this, the median total cost for ICU stay per patient was 6,904 INR. The major proportion of total expenses was contributed by personal protective equipment (PPE) kits, an average of 11,091.33 INR per month. The median cost of stay in the ICU, on the ventilator, in the ward, and the mean cost of investigations were higher among those with associated co-morbidities. Conclusion: Mostly, elderly males with co-morbidities lost their battle after ventilator support in the ICU. Patients with co-morbidities and severe disease not only have a long duration of hospitalization and a poor survival rate but also impose an economic burden of close to one lakh INR on the institute.
Introduction
The world suffered a major setback when the World Health Organization declared the COVID-19 pandemic on March 11th, 2020. COVID-19 is an acute respiratory illness caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1]. Since its outbreak in Wuhan, China, in December 2019, the disease has totaled more than 665,003,256 confirmed cases and 6,697,442 deaths worldwide; India's tally shows 44,679,564 cases and 530,705 (1.18%) deaths as of December 31st, 2022 [2].
In addition to the morbidity and mortality, it has caused a tremendous impact on the economy of the nations. The social and economic impact of the disease has already been calculated to be worse than that of the Second World War [3]. While major COVID-19-related research efforts are dedicated to understanding its pathophysiology, treatment, prevention, and vaccines, there is a paucity of studies examining the impacts of the pandemic on public health budgets [4]. The information on healthcare expenditure is crucial to knowing the disease burden, correctly managing the ongoing crisis, preparing for the subsequent waves, and guiding the implementation and management of new services such as telemedicine or creating dedicated COVID-19 clinic units [5]. Decision-makers need to understand how their health systems are getting squeezed due to current and future pandemics, particularly to understand the healthcare resource use (e.g., length of hospital stay) and subsequent costs in managing the pandemic [6].
The Employment State Insurance Scheme, under the legislation of the ESI Act, 1948, is one of the largest social security schemes in India, devised to protect employees and their dependents covered under it against contingencies such as sickness, maternity, and death or disability due to employment injuries. Medical care under medical benefits is provided through a network of ESI hospitals and dispensaries. The scheme is run solely by the contributions from the employers and the employees as decided from time to time by the Employment State Insurance Corporation. Only insured persons and their families covered under the act are eligible for these benefits. Thus, during the pandemic, ESIC, besides providing services to its insured population, was giving medical services to all other non-insured citizens and bearing the expenses out of its own corpus. The current study explored the expenses incurred in managing COVID-19 patients in intensive care units, which can help policy/decision-makers plan resources and budget allocation for future health crises.
Objectives
To estimate the length of stay and cost incurred by COVID-19 patients who died between April and October 2020 at a tertiary care center in the Faridabad district. To determine the effect of co-morbidity and severity of illness on the cost incurred by COVID-19 patients who died between April and October 2020 at a tertiary care center in the Faridabad district.
Study population
The sampling frame was constituted by the diagnosed COVID admissions in the designated COVID hospital of the district between April and October 2020. The study included hospital record sheets of COVID-19-confirmed deaths in the ICU between April and October 2020, except those who were brought dead. The complete records of the deceased, which were considered by the health authorities, were included in the study. The records of indoor ward patients were either not accessible or incomplete. Hence, the deaths due to COVID-19 were considered to be our study population. Existing guidelines issued by the government were used to classify the patients into three categories according to the severity of their symptoms. The case definitions of COVID-19 death, mild, moderate, and severe categories were adapted from government guidelines [9,10].
Sample size and sampling technique
Around 2775 cases were admitted to ESIC Hospital, Faridabad, from April through October 2020. Out of these, around 203 patients died in the ICU. This information was used to assess mortality due to COVID-19, which comes out to be 7.31% at our institute. The minimum sample size was calculated using the standard sample size formula for estimating a proportion (p = 0.073, q = 0.927, l = 0.06), which was 76 at a 6% margin of error and 95% confidence interval. Systematic random sampling was used to select the records.
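As a rough illustration (not the authors' calculation sheet), the reported sample size is consistent with the usual Cochran-type formula n = Z^2·p·q/l^2 when Z is rounded to 2, as commonly done in field calculations:

import math

# Minimal sketch; the exact formula is not reproduced in the text, so this is an assumption.
p = 0.073          # ICU mortality proportion reported for the institute
q = 1 - p          # 0.927
l = 0.06           # absolute margin of error (6%)

for z in (1.96, 2.0):                     # 95% confidence level; z is often rounded to 2
    n = z ** 2 * p * q / l ** 2
    print(f"z = {z}: n = {n:.1f} (round up to {math.ceil(n)})")

# With z rounded to 2, the estimate rounds up to about 76, matching the reported minimum sample size.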
Data collection and study variables
Hospital record sheets were used to study the length of hospital stay (ward/ICU) and expenditure incurred in ICU/ward stay (including manpower, equipment, and diet cost), diagnostics (including lab & radiological investigations), and drugs (including supplements) to calculate the total cost in treating the COVID-19 patients. For data collection, a network with a medical record section, nursing staff, and resident doctors was developed to ensure the completeness of records.
The direct cost borne by the health system in managing COVID-19 patients in the ICU, for the purpose of analysis, included bed cost, ventilator, investigations, and procedures (for example, intercostal tube insertion, central venous line insertion, etc.). Since the individual cost was not available for each category, the reference for the costing of the ICU bed, ventilator, investigations, and procedures was taken from the Central Government Health Services (CGHS) package rate list, Delhi-NCR, updated on 22.07.2021 [11]. ESIC follows CGHS rates for ICU referrals of patients from ESIC Hospitals to empaneled private hospital admissions wherever required. It also follows its own rate contract (DG ESIC RC) for drugs, published for a defined period for the ESI hospitals by the parent body, i.e., ESIC. The indirect costs involved in COVID-19 management were the cost of PPE kits consumed by healthcare providers visiting the patients admitted to the hospital and the cost of biomedical waste management. The number of PPE kits consumed was retrieved from the stock register maintained by the ICU nursing staff for the period mentioned above. PPE kit consumption per patient/day was difficult to calculate as multiple patients were attended by the same health worker using the same PPE kit, and full bed occupancy was not documented at all times. A rough estimate was computed using the total consumption of PPE kits in the ICU during the study period.
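As a purely illustrative sketch of how such a per-patient direct cost could be tallied from package rates, the function below uses placeholder rates that are NOT the actual CGHS or ESIC figures applied in the study:

def direct_cost(icu_days, ventilator_days, ward_days, investigations_inr, drugs_inr, procedures_inr=0.0):
    """Sum bed/ventilator charges (rate x days) with itemised investigation, drug and procedure costs."""
    rates_inr_per_day = {      # hypothetical placeholder rates, not the CGHS/ESIC values
        "icu_bed": 5000.0,
        "ventilator": 1000.0,
        "ward_bed": 2000.0,
    }
    return (icu_days * rates_inr_per_day["icu_bed"]
            + ventilator_days * rates_inr_per_day["ventilator"]
            + ward_days * rates_inr_per_day["ward_bed"]
            + investigations_inr + drugs_inr + procedures_inr)

# Example record: 8 ICU days (5 of them on a ventilator), 4 ward days, plus itemised costs from the record sheet.
print(direct_cost(icu_days=8, ventilator_days=5, ward_days=4, investigations_inr=40000, drugs_inr=30000))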
Data and statistical analysis
Data were entered into Microsoft Excel and analyzed using commercially available statistical software (IBM SPSS V21.0). Continuous data were expressed as mean or median, and categorical data as proportions. The Student's t-test was applied for continuous variables to test the difference between the two groups. The chi-square test was applied for categorical data to test the difference between groups. Non-parametric tests (Mann-Whitney U test) were applied to test the statistical difference between non-normally distributed variables. Differences were considered to be statistically significant at a p-value < 0.05.
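A minimal sketch of equivalent group comparisons in Python (the study itself used SPSS; the data and the contingency table below are synthetic placeholders, not the study data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cost_with_cm = rng.gamma(shape=2.0, scale=45000.0, size=63)     # placeholder costs, patients with co-morbidities
cost_without_cm = rng.gamma(shape=2.0, scale=50000.0, size=13)  # placeholder costs, patients without co-morbidities

t_stat, p_t = stats.ttest_ind(cost_with_cm, cost_without_cm)      # Student's t-test for continuous variables
u_stat, p_u = stats.mannwhitneyu(cost_with_cm, cost_without_cm)   # Mann-Whitney U test for non-normal variables
chi2, p_chi, dof, expected = stats.chi2_contingency([[40, 17], [10, 9]])  # chi-square test on a placeholder 2x2 table

print(p_t, p_u, p_chi)   # differences considered significant at p < 0.05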
Results
The median total duration of stay of the deceased without co-morbidities was higher (median (IQR); 12 (9) days) than that of those with co-morbidities, though the association was not statistically significant (p: 0.28). The median cost of stay in the ICU, on the ventilator, in the ward, and the mean cost of investigations were higher among those with associated co-morbidities. The median cost of drugs used for the study population was higher for those without co-morbidities (median (IQR); 50,631.2 (129,956.25)) as compared to those with co-morbidities (median (IQR); 89,313.0 (59,413.3)) (p: 0.13). The median total cost was lower among those with co-morbidities, although statistically non-significant (p: 0.34) (Table 3).
Discussion
The present study was conducted in a tertiary care hospital associated with a medical college. The death audit was conducted using the standard COVID-19 death audit proforma to find the cause of death [12], wherein it was found that most of the cases were of the severe category and died in the ICU. The median duration of stay in the current study was 12 days, and in the ICU it was 10 days. Rees et al., in their systematic review of the length of stay of COVID-19 patients, reported that the median stay in the ICU ranged from 5 to 19 days [13].
The total cost incurred by the COVID-19 patients during the study period was 88,29,235.95 INR. On average, the total per-patient cost of care for COVID-19 cases in the study setting comes to 1,16,174.14 INR. The average duration of stay in the hospital was 14.78 days. The cost of investigations (average 43,563.62 INR per patient) and drugs (32,686.18 INR per patient) are the main contributors to the total cost. This could be because of the increased number of special investigations like HRCT chest, IL-6, D-Dimer, LDH, CRP, Procalcitonin, Ferritin, and other blood gas analyses to monitor oxygen saturation. However, this amount is lower than the actual one, as the PPE cost could not be calculated for individual patients.
In the present study, though limited to ICU admissions wherein the outcome was death, the median cost of stay of one day in the ICU was 6,041 INR per patient, and for non-ICU ward stay, it was 6,000 INR. This was lower than the rates fixed by the State government of Haryana for private hospitals to reduce disparity in charges across various private hospitals so that inpatient care became equally accessible to all [14]. It could also be because the rates we have taken are for CGHS packages, the drug cost as per the existing rate contract of ESIC, and the actual cost incurred on the procurement of non-RC drugs. A study done by Reddy et al. [15] found that the maximum cost was incurred by hospital drugs and disposables, bed charges, equipment charges, bio-safety protective gear (PPE), and pathological and radiological tests in COVID-19 management [15]. Similar findings were found in our study, too. In a study by Agrawal A et al. on duration and cost of stay in a medicine ICU from a tertiary care center in Mumbai, the average cost of a stay in the ICU per patient was reported to be 3,454 INR and the average cost of stay per patient per day was 842.4 INR. The average total expenditure per patient in the above study was 27,213 INR, and the average cost per day per patient amounted to 6,637.3 INR. This was much lower than the cost of treatment of deceased COVID-19 patients in the current study [16]. Another study done by Kumar P et al. to estimate the per bed per day cost of delivery of health care in polytrauma and single-specialty ICUs in an apex trauma care facility in India reported the multi-specialty ICU cost being Rs. 14,977/bed/day and the neurosurgery ICU with Rs. 14,307/bed/day [17].
The reason for such high cost could be that the apex institutes of the country have a maximum number of nursing hours per patient in their ICUs as compared to any other public sector hospitals, and also, the cost of consumables in the neurosurgery ICU is higher compared to a general ICU. Shelat PR et al., in their costing analysis from a private hospital in India, reported the mean ward bed cost per patient as 6,815 INR, the mean ICU bed cost as 15,325 INR, and the per patient mean ventilator cost as 6,450 INR. Per patient, the current study's ventilator stay cost was 2,180.89 INR, which was lower [18]. The current setting, being a public sector unit with central government rates applied, justifies the lower rates of ventilator charges.
A comparison was also attempted to see the variation in cost among the patients admitted with co-morbidities and without co-morbidities. The cost of care in the ICU for the patients with co-morbidities is slightly lower than for the patients without any associated co-morbidities. This is because certain high-cost drugs like Remdesivir were given to patients without any co-morbidities as per the protocol [19]. Since literature is scarce, a comparison with other centers was limited. There was no statistical difference between the length of stay of patients with and without co-morbidities or with different severity of COVID-19 in the ICU. It has been observed in previous studies that predictions for survival in the ICU are greater than in the ward in selected patients. The severe COVID-19 patients in the current study stayed for fewer days in the ICU than mild-moderate patients. Although the outcome was the same in both categories, COVID-19 patients in the moderate category had a longer stay in the ICU due to their disease progression to the severe category and, ultimately, death.
Limitations
Expenditure on patients admitted to the ICU requiring high-cost drugs and/or investigations represents only the tip of the hospital expenditures. Invisible costs in the form of expenditures on transport, food, time, etc., borne by the patient's family were not accounted for. The other obstacle the authors faced was calculating the cost of a PPE kit per patient, as one healthcare provider was using a single PPE kit for collectively managing patients, irrespective of the number of patients admitted at that time. Further, only deceased COVID-19 patients were included in the analysis; hence, the cost of managing ICU patients who were cured can be taken up for future research.
Conclusions
It is to be reiterated that the above economic burden of deceased patients was borne by the ESIC, whose funds are contributed by employers and insured employees of factories. ESIC is mandated to treat insured patients only, but there was no discrimination in disastrous situations like this. The result of this study can help in the resource allocation for ventilators, oxygen, ICU beds, and budget for managing such pandemic situations where the health system feels stressed and stretched.
Mostly, elderly males with co-morbidities lost their battle after ventilator support in the ICU. Patients with co-morbidities and severe disease not only have a long duration of hospitalization and a poor survival rate but also impose an economic burden of close to one lakh INR on the institute.
Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Under the powers vested in section 6(2)(i) of the Disaster Management Act 2005, the National Authority had the responsibility for laying down the policies, plans, and guidelines for disaster management (COVID-19 pandemic) for ensuring a timely and effective response to disaster [7]. The Employment State Insurance Corporation (ESIC) proactively contributed its part in providing the best care to COVID patients by converting twenty-one of its hospitals across India into dedicated COVID-19 hospitals (DCH) and opening its doors to non-insured patients as well as serving its insured persons under the ESI Scheme. More than 2400 isolation beds and 550 intensive care unit (ICU) beds with 200 ventilators have been made available in these hospitals. ESIC Medical College & Hospital, Faridabad, is one such healthcare institution where, besides the COVID-19 testing facility, an ICU facility and treatment facility, including plasma therapy, is also available [8].
TABLE 1 : COVID-19 severity-wise distribution of study variables
Note for Table 1: * p-values from the Mann-Whitney U test; # p-values from the Chi-square test.
The analysis of 76 deceased COVID-19 patients at the tertiary care center shows that males, 43 (56.6%), outnumbered females, 33 (43.3%). The majority of deaths were in the 61-70 years age group, 23 (30.3%). Three-fourths, 57 (75%), of the patients were in the severe category at the time of admission; of these, a majority, 16 (28.1%), were 71-80 years old. Co-morbidities were present in 63 (82.9%) of the study records. Ventilator support was given to 56 (73.7%) of those who died. The median duration of stay in the hospital for the study population was 12 days (Q1 = 8 days, Q3 = 18 days, interquartile range (IQR) = 10 days), and the median duration of stay in the ICU was 8 days (Q1 = 3 days, Q3 = 13 days, IQR = 10 days). The cost of stay in the ward was 3,79,000 INR, the cost of stay in the ICU was 6,42,072 INR, and the cost of patients on ventilators was 1,22,130 INR. The expenses on drugs and investigations for these patients were 43,75,198 INR and 33,10,835 INR, respectively. Thus, the total cost incurred on the COVID-19 patients during the study period was 88,29,235.95 INR. The total cost incurred on PPE kits used on these patients was taken from the stock register and came out to be 48,32,852 INR; if it is added to the total cost incurred on COVID-19 patients, it amounts to 1,36,62,086.95 INR. The median total cost of managing a patient was 91,235.6 INR (IQR 70,651.4); of this, the median total cost for ICU stay per patient was 6,904 INR (IQR 8,630 INR) (Table 1). The major proportion of total expenses was contributed by PPE kits (34.5%), drugs (32.0%), and investigations (24.2%) (Figure 1).
FIGURE 1: Proportion of total cost under various heads
Around three-fourths, i.e., 43 (75.4%), of the severe patients were put on ventilator support, and 20 (35.1%) of the severe COVID-19 patients had two co-morbidities. The median stay in the hospital was higher among severe category patients (median (IQR); 12 (7) days) as compared to the mild and moderate category (median (IQR); 11 (15) days) (p: 0.74). The median stay in the ICU was lower in the severe category (median (IQR); 12 (7) days) as compared to mild-moderate (median (IQR); 9 (9) days) (p: 0.39) (Table 1). The median total cost was higher among the mild-moderate group (median (IQR); INR 1,00,551.20 (1,11,525.65)) in comparison to the severe category (mean (SD); INR 89,313.80 (44,403.00)) (p: 0.46) (Table 2).
TABLE 2 : Cost of stay, drugs and investigation
* p-values from the Mann-Whitney U test; $ p-values from the Independent t-test.
TABLE 3 : Presence of associated comorbidity-wise duration of stay and cost distribution
* p-values from the Mann-Whitney U test; $ p-values from the Independent t-test.
"year": 2024,
"sha1": "045af26b6ce2b655bf40655decdd1c4e2b9f70a4",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/196102/20240221-12284-1sqxnl1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f57f8a3cfaf15163d1cff254befdd252eeb1376",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": []
} |
Progress in Understanding the Mechanism of Cr VI Removal in Fe 0 -Based Filtration Systems
: Hexavalent chromium (Cr VI ) compounds are used in a variety of industrial applications and, as a result, large quantities of Cr VI have been released into the environment due to inadequate precautionary measures or accidental releases. Cr VI is highly toxic to most living organisms and a known human carcinogen by inhalation route of exposure. Another major issue of concern about Cr VI compounds is their high mobility, which easily leads to contamination of surface waters, soil, and ground waters. In recent years, attention has been focused on the use of metallic iron (Fe 0 ) for the abatement of Cr VI polluted waters. Despite a great deal of research, the mechanisms behind the efficient aqueous Cr VI removal in the presence of Fe 0 (Fe 0 /H 2 O systems) remain deeply controversial. The introduction of the Fe 0 -based filtration technology, at the beginning of 1990s, was coupled with the broad consensus that direct reduction of Cr VI by Fe 0 was followed by co-precipitation of resulted cations (Cr III , Fe III ). This view is still the dominant removal mechanism (reductive-precipitation mechanism) within the Fe 0 remediation industry. An overview on the literature on the Cr geochemistry suggests that the reductive-precipitation theory should never have been adopted. Moreover, recent investigations recalling that a Fe 0 /H 2 O system is an ion-selective one in which electrostatic interactions are of primordial importance is generally overlooked. The present work critically reviews existing knowledge on the Fe 0 /Cr VI /H 2 O and Cr VI /H 2 O systems, and clearly demonstrates that direct reduction with Fe 0 followed by precipitation is not acceptable, under environmental relevant conditions, as the sole/main mechanism of Cr VI removal in the presence of Fe 0 .
Introduction
Heavy metal pollution has become a major area of concern because of the high concentrations released into the environment. Due to their bioaccumulative and non-biodegradable properties, heavy metals can produce cumulative deleterious effects even in low concentrations in a wide variety of aquatic organisms [1]. Chromium may enter the aquatic ecosystems through the discharge of contaminated wastewaters from steelworks, metal finishing, chromium electroplating, preservation of wood, leather tanning, corrosion control, dyeing of textiles, manufacture of ceramics, catalysts, and pigments, etc. [2][3][4][5]. During the last two decades, much research work has been published regarding the use of metallic iron (Fe 0 ) for the treatment of hexavalent chromium (Cr VI ) contaminated waters. Forms of tested Fe 0 materials include cast iron, granulated iron, iron chips, iron coils, iron composites, nano-scale iron, powdered iron, sponge iron, and steel wool [6]. None of these material classes is uniform in its reactivity. For example, there are no typical iron filings with a characteristic range of reactivity. This evidence suggests that the primary reason for controversial reports in the Fe 0 literature relies on the ill-defined nature of tested and used materials. The present work aims to critically review the existing knowledge on Cr VI removal in Fe 0 /H 2 O systems and to re-examine the widely accepted reductive-precipitation mechanism. It is now well known that the first process of water treatment based on the use of Fe 0 was described by Henry Medlock in his patent released in 1857. Furthermore, the full-scale water potabilization plant that began service in Antwerp around 1890 was based first on Bischof's "spongy iron filters", and then on Anderson's "revolving purifier" filled with Fe 0 grains [12]; however, the earliest literature reference regarding Cr VI removal with Fe 0 -based filters that could be found by the author was published only in 1941 by Hoover and Masseli [13]. The two authors investigated Cr VI removal from plating wastewater by passing waste chromic acid solutions of varying concentration and acidity through a glass percolator filled with scrap sheet steel punchings. This study also compared the efficiency of Fe 0 with that of several other reducing agents (sodium sulfide, calcium sulfide, barium sulfide, sulfur dioxide, sodium sulfite, sodium bisulfite, calcium bisulfite, zinc hydrosulfite, ferrous sulfate, zinc dust) used for the removal of aqueous Cr VI . Among all investigated reagents, Fe 0 (iron filings) was considered the most economically feasible [13]. The experimental data revealed that both pH and Cr VI concentration played a key role in the efficiency of the wastewater treatment process. The extent of Cr VI removal significantly decreased with increasing pH and Cr VI initial concentration. Another important outcome of this study was the observation that higher Cr VI removal efficiency was obtained when Fe 0 was coated with a layer of copper. Therefore, it can be considered that Hoover and Masseli [13] were also (probably) the first who investigated the use of bimetallic combinations for water treatment, reporting the catalytic effect of a second, more noble metal coated on Fe 0 for increased efficiency of water decontamination. Hoover and Masseli also noticed an increase in pH of the column effluent, compared to the influent, and that hydrogen was generated during the process [13]. In spite of all these interesting observations, re-established later by numerous researchers [14], the mechanism of Cr VI removal was not addressed in the work of Hoover and Masseli [13].
However, it can be assumed that they have considered the reduction of Cr VI to Cr III by Fe 0 (direct reduction) as a main mechanism involved in Cr VI removal, as widely used in the cementation process [15,16].
Case et al. (1969, 1974)
The next important chapter in the history of Cr VI removal with Fe 0 is the research carried out between 1969 and 1974 by the group headed by O.P. Case (Australia). The starting point of these investigations was the well-documented cementation process, used for the extraction of metals from ores and for the recovery of metals from wastes [16]. In a first study, Case and Jones [17] applied this process to accomplish the simultaneous reduction of Cr VI and precipitation of Cu II present in brass mill effluents. Case and Jones also compared the treatment costs of a medium-sized brass mill effluent contaminated with Cr VI and Cu II via two technologies: (1) by a conventional system utilizing sulfur dioxide, which achieves only Cr VI reduction; and (2) by using scrap iron, for simultaneous removal of Cr VI and Cu II . It was demonstrated that Fe 0 -based treatment was the most advantageous technology, in perfect accordance with previous findings by Hoover and Masseli [13]. Even though this first study of Case and Jones did not investigate the process in a continuous system, it qualitatively demonstrated the feasibility of this treatment process [17]. A subsequent research report [18] continued the research started in 1969 with a more rigorous investigation of the treatment of Cr VI and Cu II polluted wastewaters in the Fe 0 /H 2 O system. Both batch and dynamic continuous experiments were performed by using soft iron shot (approximately 4.37 mm in diameter) as a reducing agent. The continuous experiments were carried out using a reactor charged with a mixture of scrap iron and glass beads, which had a design very close to that of Anderson's revolving purifier. With regard to Cr VI removal, the author proposed the following mechanism [18]:
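A plausible rendering of the four reactions involved (an assumption as to their exact notation; the expressions given in [18] may differ) is:

\begin{align}
\mathrm{HCrO_4^- + Fe^0 + 7H^+} &\rightarrow \mathrm{Cr^{3+} + Fe^{3+} + 4H_2O} \tag{1}\\
\mathrm{Fe^0 + 2H^+} &\rightarrow \mathrm{Fe^{2+} + H_2} \tag{2}\\
\mathrm{HCrO_4^- + 3Fe^{2+} + 7H^+} &\rightarrow \mathrm{Cr^{3+} + 3Fe^{3+} + 4H_2O} \tag{3}\\
\mathrm{Cu^{2+} + Fe^0} &\rightarrow \mathrm{Cu^0 + Fe^{2+}} \tag{4}
\end{align}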
At this point, it is important to underline several details: (1) the direct reduction mechanism (Equation (1)) was considered to have the main contribution to Cr VI removal; (2) the iron species resulting from the direct reduction of Cr VI was Fe III ; and (3) the source of Fe II acting as reducing agent in the indirect reduction of Cr VI (Equation (3)) was considered to be both Fe 0 corrosion (Equation (2)) and cementation of Cu II (Equation (4)). A pH rise was observed during the reduction of Cr VI , in concordance with observations made by Hoover and Masseli [13]; this phenomenon was attributed to the consumption of protons during the process. Under optimal conditions of pH (1.5-3.0), diffusion, and Fe 0 :Cr VI ratio, the reaction was both quantitative and extremely rapid. Cr VI reduction was observed to be more efficient under anoxic conditions. Furthermore, it was noticed that Cu II cementation catalyzes Cr VI reduction [18]. Again, the findings of Hoover and Masseli [13] are corroborated. The work by the group of O.P. Case resulted in the development of a patented rotating reactor for the simultaneous reduction of Cr VI and Cu II cementation from effluents [19].
McKaveney et al. (1972)
Silicon alloys (mainly of calcium, magnesium, and iron) were used by McKaveney et al. [20], both in batch and column-filtration experiments, for the removal of several heavy metals from water and brine, including Cd II , Cr VI , Cu II , Fe II , Fe III , Hg II , Pb II and Zn II . It was shown by the column experiments that chromium removal only occurred when the alloy had sufficient time to reduce Cr VI to Cr III . However, low Cr VI removal efficiency was reported for the MgFeSi alloy (mass composition: 8.8% Mg, 45.2% Si, 45% Fe) at pH 5.6, compared to removal of all other heavy metals, attributed to a slow kinetics. Therefore, either prolonged contact with the alloy, or acid addition to pH 3.0 was suggested in order to achieve a higher Cr VI removal efficiency. However, such acidic conditions are in contradiction with the working pH recommended to prolong the life of silicon alloys, which should be greater than 4.0. The authors suggested that Si alloys are acting as metallic exchangers and the mechanism responsible for the removal of heavy metals appears to be primarily electrochemical [20], analogous to cementation for divalent ions such as Cu II and Hg II [16]. For elements at higher oxidation states (>2), additional electrochemical mechanisms coupled with hydroxide formation through hydrolysis reactions were also suggested as possible. For instance, it was proposed that a Cr VI removal mechanism should comprise two steps: (1) Cr VI reduction to Cr III ; and (2) precipitation of Fe III and Cr III hydroxides [20].
Gould (1982)
Even though this study was not conducted via column dynamic experiments, it will be, however, discussed here, since, to the best of my knowledge, it can be regarded as the first kinetic study on Cr VI reduction by Fe 0 [15]. In this work, Gould reported on the effectiveness of relatively pure Fe 0 in reducing Cr VI to Cr III over a wide range of operational conditions [15]. The overall data presented clearly indicated that the reduction rate was dependent on the hydrogen ion concentration (pH), Cr VI concentration, ionic strength, Fe 0 surface area, and mixing rate. The rate constant increased with increasing Fe 0 surface area, while decreasing with increasing Cr VI concentration. Increasing ionic strength was found to result in a rapid decrease of the rate constant at ionic strengths below 0.1 M; conversely, at ionic strengths in excess of 0.1 M the rate of reduction appears to be nearly independent of the ionic strength [15]. The rate of reaction increased rapidly as the mixing rate increased from 70 to 300 min −1 (rpm), after which it stabilized sharply. The rate of Cr VI reduction was found to be first order with respect to Fe 0 surface area and half-order with respect to both Cr VI and H + , as reflected in the following kinetic expression [15]: −d[Cr VI ]/dt = k·A·[Cr VI ] 1/2 ·[H + ] 1/2 , where k (L cm −2 min −1 ) is the rate constant, A is the surface area of iron (cm 2 L −1 ), and [Cr VI ] and [H + ] are the concentrations of Cr VI and H + (mol L −1 ). Reaction stoichiometry was found to be independent of experimental conditions, with one exception: the initial Cr VI concentration. With regard to the mechanism, this study undoubtedly indicated that Fe 0 should be regarded not only as a reducing reagent, but also as a generator of secondary reducing reagents (Fe II and H/H 2 ) [15]. It was suggested that the reaction between Cr VI and Fe 0 involves not only heterogeneous (direct) reduction with Fe 0 (Equation (6)), but also homogeneous reduction with the secondary reducing reagent Fe II (Equation (7)) produced by the process of Fe 0 oxidative dissolution (Equations (6) and (8)) [15]: Fe 0 + 2H + → Fe 2+ + H 2 (8). This is consistent with the mechanism previously proposed by Case [18]. Moreover, based on the observed high efficiency of Cr VI reduction, and because reaction stoichiometry was found to be independent of pH, it was suggested that some other mechanism may also be involved in Cr VI reduction. Both molecular hydrogen (H 2 ) and some active hydrogen species generated during iron corrosion (Equations (8)-(10)) were considered to act as reductant for Cr VI [15].
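Plausible balanced forms of the reactions invoked in this paragraph are given below; these are assumptions, as the exact expressions numbered (6), (7), (9), and (10) in the original work may differ, and the hydrogen pathway is written only in a generic form:

\begin{align}
\mathrm{HCrO_4^- + Fe^0 + 7H^+} &\rightarrow \mathrm{Cr^{3+} + Fe^{3+} + 4H_2O} \tag{6}\\
\mathrm{HCrO_4^- + 3Fe^{2+} + 7H^+} &\rightarrow \mathrm{Cr^{3+} + 3Fe^{3+} + 4H_2O} \tag{7}\\
\mathrm{2HCrO_4^- + 3H_2 + 8H^+} &\rightarrow \mathrm{2Cr^{3+} + 8H_2O} \notag
\end{align}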
Bowers et al. (1986)
Bowers and co-workers tested the suitability of scrap iron fillings for Cr VI removal from plating wastewaters, using both batch and continuous-flow completely mixed reactors [21]. Results of the kinetic studies carried out over the pH range of 2.0-3.0 indicated that the reaction appears to be zero order with respect to Cr VI , which could suggest that surface oxidation of Fe 0 to Fe II is the limiting reaction step [21]. In addition, it was noticed that reduction rates of Cr VI strongly increased as pH decreased, in agreement with previous reports [13,15,18]. The mechanism proposed for Cr VI removal comprised two steps: (1) heterogeneous reduction with Fe 0 , and (2) homogeneous reduction with Fe II produced as a result of Fe 0 oxidative dissolution (Fe 0 corrosion). Another important outcome of this study was the evidence that Cr VI removal efficiency exceeded the theoretical solubility of Cr(OH) 3 [21], which can be attributed to Cr III adsorption on Fe III hydroxides. In addition, both the settleability and specific resistance of the resultant Cr(OH) 3 sludge were improved dramatically by co-precipitation with Fe(OH) 3 . Therefore, the results of Bowers et al. [21] can be regarded as the first hints for the potential importance of adsorption and co-precipitation in the process of Cr VI removal in Fe 0 /H 2 O systems.
Summary
There are several important conclusions that can be drawn from these early studies. First, the efficiency of Fe 0 in removing Cr VI from aqueous solutions was reported for the first time not 25 years ago, nor 50 years ago; this finding is nearly 80 years old. Second, the mechanism of Cr VI removal, which will be referred to as "reductive precipitation" in papers published starting with the mid-1990s [22], was also suggested much earlier. Third, even though the involved processes have not been studied in detail, it was clearly indicated that Fe 0 can act not only as a reducing reagent, but also as a generator of secondary reducing reagents, including Fe II and hydrogen species. Fourth, Fe 0 inevitably generates iron hydroxides/oxides that are adsorbent and, possibly, enmeshing agents for Cr VI
Background
During the 1980s a new concept emerged in the field of environmental remediation: the idea of using underground permeable reactive barriers (treatment walls) for in situ treatment of polluted groundwater [23,24]. A treatment wall (or a PRB) is a porous reactive or adsorptive medium that is placed in the path of a contaminated groundwater plume with the aim of either to capture the contaminants, or to transform them into less harmful substances, as the groundwater flows through the barrier under the natural hydraulic gradient, or both [25,26]. The main advantages of this concept include: (1) in situ treatment; (2) low operation and maintenance cost; (3) easy of monitoring; (4) no disturbing of the above-ground space due to treatment facilities; (5) treatment of large volumes of water containing low concentration of contaminants; and (6) simultaneous treatment of multiple contaminants [26][27][28][29]. Starting with the early 1990s, this concept stimulated considerable research concerning the use of various materials for the treatment of groundwater polluted with a wide range of contaminants. Due to its low cost and high availability, the reactive medium predominately selected for PRBs applications was metallic iron (Fe 0 ), largely termed as zerovalent iron (ZVI). ZVI is in essence an ill-defined material encompassing all Fe 0 -based alloys, commercially available as "granular iron", "iron filings", "iron chips", and "iron shavings", etc. [6,26,28,30,31]. Even though Fe 0 reactivity toward both inorganic and organic substances was reported much earlier by an important number of works [13,15,17,21,[32][33][34], the use of Fe 0 as a reactive material for water remediation received a great deal of attention only at the beginning of the 1990s, after the publication of the first experimental studies focused on the degradation of chlorinated aliphatics [35][36][37][38][39].
Early Laboratory-Scale Investigations for PRBs
There are few laboratory-scale works [22,25,[40][41][42] that have investigated the remediation of Cr VI contaminated waters with Fe 0 in the first years after Gillham's pioneering studies, and only two of them have actually been carried out by simulation of Fe 0 -based filtration systems (i.e., via column experiments) [25,40]. To the best of my knowledge, the first "post Gillham" work investigating remediation of Cr VI contaminated waters in a Fe 0 -based filtration system was reported by Blowes and Ptacek in 1992 [40]. In this study, three iron-based solids (pyrite, fine-grained (0.5-1 mm) Fe 0 fillings, and coarse-grained (1-5 mm) Fe 0 chips) were assessed for their ability to remove aqueous Cr VI , under both batch and dynamic conditions. Column experiments were conducted at flow rates typical of those normally encountered at sites of remediation, using two different reactive mixtures: one containing 50% mass Fe 0 filings, and the second containing 10% mass Fe 0 chips; the difference up to 100% was quartz sand (25 < mesh < 30) [40]. Cr VI breakthrough in the column with Fe 0 chips mixture was observed after treating 4.5 pore volumes, while for the column with Fe 0 filings mixture Cr VI was absent from effluent for more than 15 pore volumes. In addition, brown coatings, inferred to be ferric oxyhydroxides were observed on the Fe 0 chips, whereas little formation was noticed on Fe 0 filings [40]. The reported results suggested that all investigated reactive materials may be used to remove Cr VI at low groundwater velocities; however, aqueous Cr VI removal was most rapid for the fine-grained, and least rapid for coarse-grained Fe 0 ; therefore, only fine-grained Fe 0 was found to be suitable for locations with rapid groundwater flow. Unfortunately, no explanation was given by the authors neither for the observed differences in efficiencies of the two columns, nor for the precipitation of ferric oxyhydroxides with greater intensity on the surface of the Fe 0 chips [40]. In an extension of the article published in 1992 [40], Blowes and coworkers carried out new column experiments in order to evaluate the ability of four Fe-bearing solids (siderite, pyrite, fine-grained (0.5-1 mm) Fe 0 fillings, and coarse-grained (1-5 mm) Fe 0 chips) to remove dissolved Cr VI from synthetic groundwater [25]. While in the 1992 study columns were packed with a reactive mixture comprising one of the three reactive solids, calcite, and quartz [40], in the 1997 study columns were packed with layers of reactive mixtures [25]. The results confirmed that Cr VI removal was most rapid for the Fe 0 filings, and least rapid for the Fe 0 chips. Secondary phases such as goethite, lepidocrocite, maghemite, and possibly hematite were identified at the surface of reacted Fe 0 . Even though no discrete chromium mineral was detected, zones within the iron hydroxides contained Cr III ; however, while goethite contained up to 27.3% mass Cr(OH) 3 , all other phases were low in chromium. Additionally, it was noticed that Cr III was neither associated with all Fe 0 grains, nor uniformly distributed within specific areas of the iron hydroxides. Since the mass ratio of Fe to Cr was similar to that reported by previous studies (Fe:Cr = 3:1, [43]), it was suggested that Cr III was most probably incorporated into the iron hydroxides; nevertheless, the possibility that Cr III occurred as an adsorbed phase on goethite was not totally discounted [25]. 
The removal of Cr VI with Fe 0 was suggested to take place through the same "reductive precipitation" mechanism previously proposed by Cantrell et al. [22]: reduction of Cr VI to Cr III coupled with the oxidation of Fe 0 to Fe II and Fe III , followed by precipitation of a sparingly soluble Fe III -Cr III (oxy)hydroxide phase [25]. The long-term stability of the Cr-bearing precipitates was also assessed by flushing the column with a Cr VI -free, calcium carbonate saturated solution; this process was accompanied by a gradual disappearance of the visible ferric oxyhydroxides, attributed to the reduction of Fe III by Fe 0 [25]. During the leaching test it was observed that chromium concentrations remained below the level of detection (0.05 mg/L) until the experiment was completed, for an additional 350 pore volumes. This was an extremely important result, indicating that the Cr III present in the Fe III -Cr III (oxy)hydroxide phase will remain stable after the input of Cr VI ceases [25].
The mineralogical and geochemical nature of the secondary reaction products formed on Fe 0 filings and quartz grains throughout the column tests conducted by Blowes et al. [25] was further investigated by Pratt et al. [44]. Coatings on Fe 0 and quartz grains were identified as goethite; however, while the mineral layer on quartz grains was thin (<25 µm) and compact, Fe 0 filings were encrusted with coatings of thickness varying in the 25-50 µm range. The most widespread morphology of goethite was a botryoidal texture, occurring probably at the points of grain contact; nevertheless, euhedral tabular crystals were also observed, occurring most likely in the open interstitial areas between grains [44]. It was also evidenced that all detectable chromium at the Fe 0 surface existed as Cr III species; the distribution of Cr III was heterogeneous, with the highest concentrations found at the outermost edges of thin and compact goethite coatings. In addition, iron and chromium ions in the near-surface coatings acquired chemical and structural characteristics similar to Fe 2 O 3 and Cr 2 O 3 , which are distinct from the structure of the bulk phase [44].
Testing Fe 0 PRBs for Cr VI Removal at Pilot Scale
The first attempt to transfer the Fe 0 technology from laboratory bench-scale studies to field implementation was the pilot-scale field PRB initiated in September 1994 at an old hard-chrome plating facility near Elizabeth City, USA. The main objectives of this test were: (1) to evaluate the ability of a Fe 0 -based PRB to remediate, in situ, Cr VI contaminated groundwater; (2) to determine whether the results of field tests are consistent with prior laboratory study results; (3) to evaluate the geochemical parameters that may best predict the PRB performance; and (4) to identify mineral phases formed at the surface of the PRB that might affect its long-term performance [27]. In addition to chromate (in concentrations up to 12 mg/L), the contaminated groundwater also contained several chlorinated organic compounds, including trichloroethylene, cis-dichloroethylene, and vinyl chloride [45]. The PRB comprised four materials, mixed in equal volumes: two types of Fe 0 (low grade steel waste stock (Ada Iron and Metal, 1-15 mm) and heated cast iron (Master Builder's Supply, 0.2-4 mm)), gravel sand (1-4 mm), and native aquifer solid materials (<0.1 mm). The barrier had a staggered fence design with 21 cylinders (20 cm in diameter) installed from 3 to 8 m below ground surface [27,45]. Monitoring wells located within or down gradient of the iron cylinders revealed chromate concentrations of less than 0.01 mg/L, coupled with trichloroethylene removal efficiencies greater than 70%. These "treated zones" were characterized by increased concentrations of dissolved Fe II (2-20 mg/L) and hydrogen (>1000 nM), elevated pH (7.5-9.9), reduced Eh (−100 to +200 mV), low dissolved oxygen (<0.1 mg/L), and the presence of sulfides in both the aqueous and solid phases. In contrast, in monitoring wells placed in "gaps" where the groundwater did not intercept the iron cylinders, the geochemical parameters of the groundwater remained essentially unchanged: little change in Cr VI concentration over time, no Fe II , low concentrations of dissolved hydrogen (<10 nM), low pH (5.6-6.1), oxidized Eh (+200 to +400 mV), high dissolved oxygen (0.6-2 mg/L), and absence of sulfides [45]. These geochemical changes were found to be identical to prior laboratory observations [42] and were attributed to Fe 0 corrosion and the associated reduction reactions [27]. After 20 months of testing, surface analysis of the Fe 0 filings revealed the buildup of a significant layer of Fe oxide/hydroxide; chromium was also detected, but only at the surface of the 1-15 mm Fe 0 . In spite of the observed Fe 0 passivation, two years after the emplacement of the PRB there was no indication of decreased permeability of the reactive mixture [27,45]. This observation contradicts the concerns raised with regard to system longevity, namely that the maintenance of sufficient permeability within the reactive zone is questionable due to the deposition of secondary mineral layers at the surface of Fe 0 [46,47]. With the knowledge available today, it can be argued that the unaffected porosity may be the result of the PRB design: the PRB was not made from pure Fe 0 , but from a reactive mixture comprising 50% Fe 0 and 50% inert materials; this is in accord with recent studies demonstrating that mixing Fe 0 with nonexpansive materials prevents the rapid clogging of Fe 0 -based filters and is thus a prerequisite for system sustainability [48][49][50].
Full Scale Fe 0 PRBs for Cr VI Removal
The success of the pilot-scale test at the Elizabeth City site eventually led to full-scale implementation of the PRB technology in June 1996. The Fe 0 PRB had a continuous wall configuration (46 m long, 0.6 m thick, 7.3 m deep) and was designed to remediate overlapping plumes of Cr VI and trichloroethylene [51,52]. Laboratory experiments and cost analyses were carried out prior to installation of the PRB in order to determine the reactive mixture best suited for simultaneously treating the Cr VI and TCE contaminated groundwater. Based on the results of these studies, it was decided that the reactive medium of the PRB would be composed entirely of Peerless granular iron (100% Fe 0 ) with an average grain size of 0.4 mm. The total project cost was approximately 985,000 U.S. $; however, it was anticipated that using this PRB over a 20 year period would result in a saving of 4 million U.S. $ in operation and maintenance costs compared to a pump-and-treat system [53]. Monitoring results for this PRB after 15 years of operation indicate consistent removal of Cr VI in all of the down gradient compliance wells, from influent concentrations of up to 10 mg/L to less than 3 µg/L; however, it took almost 2 years for the down gradient concentrations to decrease below the remedial goals, due to slow desorption of the contaminants from the aquifer matrix [46,[52][53][54][55][56]. The PRB at Elizabeth City was also found to be a long-term sink for the C, S, Ca, Si, Mg, N, and Mn present in the groundwater [56]. The "reductive precipitation" mechanism was considered to be responsible for the removal of Cr VI at the Elizabeth City PRB [46]. Moreover, it was also assumed that Fe 0 was oxidized directly to Fe III as a result of Cr VI reduction to Cr III [46,[52][53][54]. Ferrous iron detected in the treated groundwater was attributed to [52,53,56]: (1) Fe 0 corrosion (Equation (14)), a process that was also responsible for the increased concentration (>1000 nM) of H 2 ; (2) reductive dissolution of aquifer minerals, due to the decreased redox potential in regions down gradient from the reactive media; and (3) dissolution of newly formed Fe II -bearing mineral phases. Subsequently, the precipitation of highly insoluble mixed Fe III -Cr III hydroxides (Equation (15)) was presumed to take place [51,52].
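A back-of-the-envelope check often made when sizing such a continuous wall is whether the residence time inside the reactive zone is long enough for the removal reactions. The sketch below illustrates that check; only the 0.6 m wall thickness comes from the description above, while the groundwater velocity, porosity, and assumed removal half-life are illustrative values.

```python
# Sketch: residence time of groundwater inside a continuous-wall PRB and the
# corresponding first-order Cr(VI) attenuation.  Only the 0.6 m thickness is
# taken from the text; velocity, porosity, and half-life are assumed values.
wall_thickness_m = 0.6
darcy_flux_m_per_d = 0.15          # assumed Darcy flux through the wall
porosity = 0.5                     # assumed porosity of the granular iron
seepage_velocity = darcy_flux_m_per_d / porosity     # pore-water velocity (m/d)

residence_time_d = wall_thickness_m / seepage_velocity
print(f"Residence time in the barrier: {residence_time_d:.1f} days")

# Assumed pseudo-first-order half-life of Cr(VI) inside the reactive medium.
half_life_d = 0.2
fraction_remaining = 0.5 ** (residence_time_d / half_life_d)
print(f"Fraction of influent Cr(VI) remaining at the exit: {fraction_remaining:.2e}")
```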
The primary authigenic precipitates identified in the Elizabeth City PRB were lepidocrocite, magnetite, ferrihydrite, carbonates (aragonite, iron carbonate hydroxide and/or siderite), carbonate green rust, and iron monosulfides (mackinawite) [47,55,56]. Analysis of the mineral precipitates evidenced that chromium was present dominantly as Cr III [55]. As expected, the continued buildup of mineral precipitates was found to have a negative impact on the hydraulic performance of the PRB. After four years of operation, a 0.032% reduction in porosity was estimated at 2.5 cm into the PRB, while at distances > 8 cm the porosity reduction was <0.002% [56]; after eight years of operation, less than 15% of the total available pore space had been lost [55]. However, the rates of mineral accumulation decreased with time, which was believed to indicate a net loss of Fe 0 "reactivity" [56]. Even though Fe II concentrations within the PRB increased from background levels (<0.5 mg/L) to as much as 14.8 mg/L, a considerable number of studies took into consideration neither the coupling of Cr VI reduction with the oxidation of Fe 0 to Fe II , nor the reduction of Cr VI by dissolved Fe II [46,[52][53][54].
In this regard, it should be pointed out that, since the standard potentials of the Fe II /Fe 0 and Fe III /Fe 0 couples are −0.44 and −0.04 V, respectively [57], from a thermodynamic perspective the oxidation of Fe 0 to Fe II is considerably more favorable than the oxidation of Fe 0 to Fe III ; hence, the oxidation of Fe 0 will probably stop at Fe II , as suggested in previous works [15,18,21]. However, if Fe III was not the result of Cr VI reduction with Fe 0 , then which process generated all the Fe III precipitated in secondary minerals at the surface of Fe 0 ? Given the low concentration of dissolved O 2 (<0.2 mg/L) [52], it is questionable whether oxidation of Fe II by O 2 could be responsible for all the observed Fe III mineral layers. Therefore, it is highly plausible that the presence of Fe III coatings may be explained by an important Cr VI removal pathway overlooked by many of the aforementioned studies: the indirect reduction of Cr VI with Fe II . This mechanism would be in accord with previous studies reporting that Fe II is a potent reductant of Cr VI . For instance, it was demonstrated that, for equal concentrations of Cr VI and dissolved O 2 , Cr VI oxidizes Fe II faster than O 2 by a factor of 6 × 10 3 at pH 6 and 1 × 10 3 at pH 8 [58]. Nevertheless, it should be noted that the contribution of indirect reduction with Fe II to the mechanism of Cr VI removal at the Elizabeth City PRB was mentioned in two studies published under the leadership of R.T. Wilkin [55,56]. These works concluded that elevated Fe II concentrations down gradient of the PRB led to the development of a "reducing zone" where Cr VI is removed from the groundwater. Another important step forward made by the group of R.T. Wilkin in elucidating the mechanisms underlying the removal of Cr VI with Fe 0 was the suggestion that some of the Fe II -containing secondary minerals (e.g., mackinawite, carbonate green rust, magnetite) may also support Cr VI removal, either through redox reactions at the mineral-water interface or by the release of Fe II to solution [55,56].
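The thermodynamic argument above can be made explicit with a short calculation. In the sketch below, the Fe II /Fe 0 and Fe III /Fe 0 potentials are those quoted in the text, while the Cr VI /Cr III and Fe III /Fe II values are standard tabulated potentials taken as assumptions; both candidate electron donors give a positive cell potential, so thermodynamics alone cannot discriminate between them.

```python
# Sketch: standard-state driving force for Cr(VI) reduction by Fe(0) and by
# dissolved Fe(II).  E0 values for the iron couples are those quoted in the
# text; the other two are standard tabulated values used here as assumptions.
F = 96485.0  # Faraday constant, C per mol of electrons

E0 = {
    "Fe2+/Fe0":       -0.44,   # from the text
    "Fe3+/Fe0":       -0.04,   # from the text
    "Fe3+/Fe2+":      +0.77,   # assumed standard value
    "Cr(VI)/Cr(III)": +1.35,   # assumed standard value (HCrO4-/Cr3+)
}

def cell_potential(cathode_couple, anode_couple):
    """E_cell = E0(cathode) - E0(anode) at standard state."""
    return E0[cathode_couple] - E0[anode_couple]

for donor in ("Fe2+/Fe0", "Fe3+/Fe2+"):
    e_cell = cell_potential("Cr(VI)/Cr(III)", donor)
    n = 3                                # electrons per Cr(VI) reduced
    dG_kJ = -n * F * e_cell / 1000.0     # Gibbs energy per mol Cr
    reductant = donor.split("/")[1]
    print(f"Cr(VI) reduction by {reductant}: E_cell = {e_cell:+.2f} V, "
          f"dG0 = {dG_kJ:.0f} kJ/mol Cr")
```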
Willisau (Switzerland)
The Willisau PRB was implemented in November 2003 to treat groundwater contaminated with up to 10 mg/L Cr VI at a former wood impregnation factory that had used a chromate solution to preserve timber from deterioration. The PRB had an innovative design consisting of two different components: (1) a single row of cylinders for lower expected Cr VI concentrations; and (2) an offset double row of cylinders for higher expected Cr VI concentrations. The reactive filling inside the cylinders (d = 1.3 m) was installed from 12 to 23 m below ground surface and consisted of a mixture of Fe 0 shavings (5-20 mm) and gravel (2-5 mm) in a ratio of 1:3 (by weight); this ratio was selected to ensure an initial permeability of the reactive material approximately three times larger than that of the surrounding subsoil, and to prevent the rapid clogging of the barrier due to precipitation of secondary phases in the pore spaces [59,60]. The double row of cylinders successfully treated the Cr VI contamination at normal groundwater flow velocities (residual Cr VI concentrations < 0.01 mg/L); however, during events of exceptionally high groundwater levels (which result in a substantial mobilization of Cr VI ) the remediation effectiveness was only 96%. In contrast to the double row, the remediation capacity of the single row was not sufficient to reduce the Cr VI concentrations below the critical limit of 0.01 mg/L; this shortcoming was attributed to an inadequate overlap of the cylinders, resulting in insufficient concentrations and mixing of dissolved Fe II in the Cr VI -contaminated plume [59]. Surface analysis of Fe 0 and gravel particles sampled after four years of operation showed that, on average, iron occurred in a mixture of goethite (∼60%), ferrihydrite (∼30%), and a small fraction (∼10%) of Fe II , mainly in the form of magnetite. In addition, hematite, maghemite, and lepidocrocite were also detected. While Cr VI was not detected, Cr III occurred in the form of two different (in terms of Cr/Fe ratio) mixed Cr III -Fe III hydroxides [60]. Based on these observations, the authors suggested the following possible reaction pathways that may contribute to Cr VI removal: (1) heterogeneous reduction of Cr VI with Fe 0 ; (2) heterogeneous reduction of Cr VI with Fe II -bearing solids; (3) homogeneous reduction of Cr VI with dissolved Fe II ; and (4) precipitation of the resulting Cr III as mixed Cr III -Fe III hydroxides.
However, it was considered that, due to the rapid corrosion of Fe 0 , direct reduction with Fe 0 was not significant; therefore, only the reduction of Cr VI with Fe II -containing minerals and with dissolved Fe II were considered the main paths for the first step of Cr VI removal, the reduction to Cr III [59,60]. The occurrence of two different Cr III species at the surface of the exhausted Fe 0 shavings strongly supports this conclusion: (1) Cr III -Fe III hydroxides with a Cr/Fe ratio > 1/3, produced via heterogeneous reduction of Cr VI with Fe II -bearing solids; and (2) Cr III -Fe III hydroxides with a Cr/Fe ratio of about 1/3, resulting from the homogeneous reduction of Cr VI with dissolved Fe II . In addition, the existence of Cr III -Fe III hydroxides not only on the Fe 0 shavings but also on the surface of the gravel particles further suggested that the homogeneous reduction process with dissolved Fe II , occurring within the pore space, was a very important pathway [60]. Accordingly, one of the main limiting factors for the longevity of the PRB was found to be the availability and accessibility of Fe II [59]. After four years of operation, the Fe 0 shavings were found to be covered by a layer of Fe hydroxides, which led to a volume increase; nevertheless, the reduction of pore space in the reactive media appeared to be minor [60]. The innovative design of the Willisau PRB possesses several advantages: (1) it represents a good geotechnical solution for installation at large depths in heterogeneous soils; (2) it entails a low risk of disturbing the hydrological regime in case the filling material becomes partially clogged by ferric hydroxides; (3) it minimizes the amount of reactive material needed, since it partly relies on a dispersive Fe II plume; and (4) it provides good remediation effectiveness even under exceptionally high groundwater level events [59,60].
More Recent Laboratory-Scale Reports (Post Elizabeth City PRB)
Following the articles evaluated in the previous sections, more recent studies mainly investigated the practical applicability and long-term efficiency of Fe 0 /H 2 O systems for Cr VI removal from polluted aqueous solutions. In this context, hundreds of papers were published in the last 20 years, mostly attempting to [14]: (1) study the influence of operational parameters on the efficiency of Cr VI removal in Fe 0 /H 2 O systems; (2) elucidate the kinetics and mechanism of Cr VI removal; (3) study the nature of the secondary mineral phases precipitated at the Fe 0 surface; and (4) find methods to enhance the efficiency of Cr VI removal. With respect to the mechanism of Cr VI removal, the large majority of articles have indicated direct (heterogeneous) reduction with Fe 0 as the main removal pathway [25,27,52,54]. Unfortunately, these reports co-exist in the literature with publications demonstrating that the Fe 0 surface is universally covered by oxide layers [61][62][63][64][65][66], and that Fe 0 is additionally passivated by corrosion products during the remediation process [44,52,61,67,68]. Numerous recent studies aimed at gaining insight into the principles governing the removal of Cr VI in Fe 0 /H 2 O systems have also presumed that this process is exclusively the result of direct electron transfer from Fe 0 to Cr VI [69][70][71][72][73][74][75][76][77][78][79]. Since the surface of commercial Fe 0 materials is permanently covered by an outer layer of air-formed oxides of low electrical conductivity (hematite, maghemite) [64], electron transport from Fe 0 to Cr VI should be severely inhibited [69][70][71][72][73][74][75][76][77][78][79]. Moreover, the Fe 0 efficiency should decrease significantly over time, as its surface is progressively covered with additional secondary mineral coatings that prevent penetration of Cr VI and stop the electron transfer [11,[80][81][82]. As a result, removal of Cr VI in Fe 0 /H 2 O systems via direct reduction with Fe 0 should, theoretically, have a very low efficiency [83][84][85][86]. Nevertheless, the long-term efficiency of Fe 0 /H 2 O systems for Cr VI removal in reactive walls has been undoubtedly demonstrated [9]. In recent years, several studies have attempted to predict and/or rationalize this observation. Possible reasons included: (1) auto-reduction of atmospheric non-conductive corrosion products yielding electronically conductive magnetite [64,65]; (2) conversion of ferrous hydroxides on Fe 0 to electronically conductive magnetite via the Schikorr reaction at pH > 6.0 [87]; and (3) the existence of fissures/defects in the oxide layers, which may initiate pitting corrosion and thus allow the penetration of Cr VI to the Fe 0 core [85,88]. However, it is certain that the effectiveness of Cr VI removal in Fe 0 /H 2 O systems cannot be ascribed to such processes since: (1) theoretically, for direct reduction with Fe 0 to occur, the oxide scale should be electronically conductive; however, it was demonstrated that even electron transfer through electrically conductive magnetite occurs at a much lower rate than on the bare Fe 0 surface [87]; therefore, even after the coating of Fe 0 by magnetite, reduction of the contaminants may become negligible [89]; and (2) pitting is usually initiated by the presence of high concentrations of aggressive anions (e.g., Cl − ), which are not usually found in natural aquatic environments.
In addition, the longer diffusion path to the bottom of a pit restricts the transport of aqueous oxidants from the bulk solution [85]. Therefore, a reasonable explanation for quantitative Cr VI reduction in the Fe 0 /H 2 O system is still needed. The importance of indirect reduction is obvious, but the paramount goal of decontamination is removal and not simple reduction. Since iron oxide layers are excellent adsorbents for negatively charged Cr VI , adsorption of Cr VI onto the oxide layer is the first step that should be taken into account when discussing Cr VI removal in Fe 0 /H 2 O systems [14,90]. Even in the adsorbed state, indirect reduction of Cr VI is still likely. This evidence was highlighted by Noubactep [80] but is still largely ignored in the scientific literature [83,[91][92][93]. Another argument put forward to rationalize Cr VI reduction in Fe 0 /H 2 O systems is the prevalence of secondary Fe II -bearing mineral phases formed as Fe 0 corrosion products. The enumerated minerals include ferrous sulfides, magnetite, mackinawite, siderite, and green rust [6,9,55,94,95]. Even though the reduction of Cr VI at the surface of secondary mineral layers was initially believed to be slow [88], recent studies revealed that Cr VI may actually be rapidly sequestered at the surface of Fe II -bearing minerals containing structural Fe II and/or Fe II impurities, following an adsorption-reduction mechanism [96,97]. Cr VI adsorption onto positively charged iron and/or chromium oxyhydroxide layers surrounding Fe 0 particles was regarded not only as an intermediate step, but also as an important Cr VI removal mechanism in itself [98][99][100][101]. It has been shown that adsorption processes may contribute not only to the removal of Cr VI , but also to the removal of the resulting Cr III [61,86,92]. For instance, XPS analysis carried out on reacted Fe-Ni nanoparticles revealed that the ratio between adsorbed Cr III and Cr VI was 7.87 [92]. Therefore, in addition to the heterogeneous reduction mechanism occurring at the surface of Fe 0 , dissolved Fe II and H/H 2 , both products of Fe 0 corrosion, may also be involved in the mechanism of Cr VI removal in Fe 0 /H 2 O systems [62,67,86,91,[102][103][104][105][106][107][108]. Even though these reduction pathways were suggested much earlier by several pioneering works in this field [15,18,21], they were often overlooked in articles describing the removal of Cr VI with Fe 0 -based PRBs, as well as in numerous more recent papers. In contrast, several recent studies have clearly indicated that dissolved Fe II should also be taken into consideration as an important reductant of Cr VI . In a study that investigated Cr VI removal by Fe 0 in the presence of organic and inorganic complexing reagents, it was revealed that, while EDTA and NaF enhanced the process, 1,10-phenanthroline dramatically decreased Cr VI removal [108]. While the favoring effect of EDTA and NaF was ascribed to the reduced passivation of Fe 0 due to complexation of Cr III and Fe III , the hindering influence of 1,10-phenanthroline was attributed to its well-known specific ability to form a stable complex with Fe II . These outcomes indicated that Cr VI reduction with Fe II , rather than Cr VI reduction with Fe 0 , was the primary mechanism of Cr VI removal with Fe 0 [108].
The results of two recent studies reveal that a weak magnetic field (WMF) applied during Cr VI removal with Fe 0 significantly improved the efficiency of the process; this phenomenon was ascribed to the enhancement of the Fe 0 corrosion process and the acceleration of Fe II generation [106,109]. Over the pH range of 4.0-5.5, the highest Cr VI removal rate was observed at pH 5.0. In contrast, the removal rate was limited at pH 4.0 and 5.5 due to the slow reaction between Cr VI and Fe II and the slow Fe II generation rate, respectively. Furthermore, Fe II was not detected until Cr VI was completely exhausted, which means that all Fe II released from Fe 0 corrosion was instantaneously oxidized by Cr VI . In light of all these observations, it was concluded that homogeneous reduction with dissolved Fe II was the main mechanism and the limiting step of Cr VI removal [106]. In a work that studied the influence of the co-presence of humic acids (HA) and fulvic acids (FA) on the efficiency of Cr VI removal with Fe 0 , higher yields were observed with HA than with FA. Since the concentration of free Fe II was much higher in the HA solutions than in the FA solutions, the better Cr VI reduction rates observed in the co-presence of HA were ascribed to a greater contribution of indirect Cr VI reduction with Fe II to the overall removal process [110]. Liu et al. [111] investigated the effect of citric acid co-presence and of photoirradiation on Cr VI removal with Fe 0 ; it was observed that the Cr VI removal efficiency was not improved in the presence of citric acid alone, while the introduction of photoirradiation in the presence of citric acid dramatically increased the reduction rate of Cr VI . This enhanced efficacy was ascribed to the formation of Fe III -citric acid complexes, which prevented Fe 0 passivation. Moreover, under the effect of photoirradiation, Fe III was reduced to Fe II which, subsequently, homogeneously reduced Cr VI [111]. Last but not least, another possible removal pathway recently suggested for the Fe 0 /H 2 O system is the co-precipitation (entrapment) of Cr VI in the structure of growing Cr III -Fe III oxyhydroxides [90,[112][113][114].
In summary, the mechanism of Cr VI removal in Fe 0 /H 2 O systems generally involves multiple pathways, including: (1) adsorption of Cr VI onto Fe 0 or onto the oxide layers present at the surface of Fe 0 ; (2) heterogeneous reduction of Cr VI with Fe 0 or, most probably, with Fe II -bearing secondary minerals coating the Fe 0 ; (3) homogeneous reduction of Cr VI with Fe II and/or H 2 ; (4) precipitation of mixed Cr III -Fe III oxyhydroxides; and (5) adsorption/co-precipitation/entrapment of Cr VI on/with/in Cr III -Fe III oxyhydroxides.
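One way to reason about how these pathways combine is a lumped kinetic picture in which each pathway acts as a parallel pseudo-first-order sink for aqueous Cr VI. The sketch below is purely illustrative: the rate constants are arbitrary placeholders, not fitted parameters, and real systems deviate from first-order behavior as the Fe 0 surface evolves.

```python
# Sketch: lumped parallel-pathway model for the disappearance of aqueous Cr(VI).
# Each pathway is treated as pseudo-first-order; the rate constants are
# arbitrary illustrative values, not measured parameters.
import numpy as np

k = {                                   # 1/h, all assumed
    "adsorption_on_oxide_layer": 0.30,
    "heterogeneous_reduction":   0.10,
    "homogeneous_reduction":     0.20,
    "coprecipitation":           0.05,
}
k_total = sum(k.values())

t = np.linspace(0.0, 24.0, 7)           # hours
c0 = 10.0                               # mg/L initial Cr(VI)
c = c0 * np.exp(-k_total * t)

for ti, ci in zip(t, c):
    print(f"t = {ti:5.1f} h   Cr(VI) = {ci:6.3f} mg/L")

# In this approximation the share of each pathway in removal is k_i / k_total.
for name, ki in k.items():
    print(f"{name:28s} {100.0 * ki / k_total:5.1f} % of removal")
```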
Summary and Conclusions
The conclusions of this section comprise several important points. First, both laboratory studies and the field implementation of PRBs have proven that Fe 0 -based PRBs may be a cost-effective and efficient approach for the remediation of Cr VI polluted groundwater. Second, the efficiency of in situ remediation processes using Fe 0 -based PRBs is influenced mainly by the nature and concentration of the contaminant species, the nature of the reactive mixture (type of Fe 0 , co-presence of adjuvants), and the site-specific geochemistry. Third, the presence of a PRB affects not only the concentration of the targeted pollutant(s), but also, to some extent, the concentration of all major dissolved species. Fourth, remarkable progress has been made with regard to understanding the mechanism of Cr VI removal in Fe 0 /H 2 O systems; in addition to the direct reduction mechanism, new pathways were identified, including adsorption and indirect reduction with secondary reducing agents (dissolved Fe II , adsorbed Fe II , Fe II -bearing minerals, H 2 ) produced as a result of Fe 0 corrosion.
Geochemistry of Chromium in the Context of Fe 0 -Based Filtration Systems
This section summarizes knowledge from the geochemistry of chromium that is relevant to the understanding of interactions in Fe 0 /Cr VI /O 2 /H 2 O systems. This effort encompasses investigations on the redox reactivity of Cr VI and Fe II -bearing minerals (e.g., Fe 3 O 4 or green rusts), as Cr is also used as a chemical surrogate for Tc (the rationale being that, based on thermodynamic data (E values), Cr reduction occurs before Tc reduction) [115].
Geochemistry of Chromium
Chromium is usually encountered in the environment in the (+III) and (+VI) oxidation states, which are the most stable from a thermodynamic standpoint. These two chromium species display totally different chemical and toxicological properties [4,116,117]. Under environmentally relevant, circumneutral pH values, Cr VI exists only as the hydrogen chromate (HCrO 4 − ) and chromate (CrO 4 2− ) oxyanions; at pH values below 6.5 the HCrO 4 − anion is predominant, while above pH 6.5 the CrO 4 2− ion dominates. Cr VI species are highly soluble and therefore easily transported in water resources. In contrast, aqueous Cr III occurs primarily as cationic (Cr(OH) 2+ , Cr(OH) 2 + ) or neutral (Cr(OH) 3 0 ) species [5,115,118,119]. Cr III tends to be extremely insoluble (<20 µg/L) between pH 7.0 and pH 10.0, with a minimum solubility of about 1 µg/L at pH 8.0. Hence, Cr III is readily immobilized at circumneutral pH by precipitation as hydroxides and thus has a much lower mobility than Cr VI [14]. Since Cr VI readily crosses cell membranes, it is highly toxic to most living organisms [4]. Cr VI compounds are well-established human carcinogens by the inhalation route of exposure; in addition, Cr VI exposures through drinking water are also likely to be carcinogenic to humans [120][121][122]. On the contrary, Cr III compounds are poorly transported across membranes, and therefore the toxicity of Cr III to a living cell is 500 to 1000 times lower than that of Cr VI [123,124]. Additionally, Cr III is recognized as a micronutrient essential for the metabolism of lipids and proteins, and is also involved in the biological activity of insulin [125,126].
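The pH dependence of the HCrO 4 − /CrO 4 2− distribution quoted above can be visualized with a two-species approximation that treats the ~6.5 crossover as an apparent pKa; dichromate and fully protonated chromic acid are neglected, so the sketch is only indicative.

```python
# Sketch: two-species distribution of aqueous Cr(VI) between HCrO4- and CrO4^2-,
# treating the ~6.5 crossover quoted in the text as an apparent pKa.
# Dichromate and H2CrO4 are neglected in this simplification.
import numpy as np

pKa = 6.5

def chromate_fraction(pH):
    """Fraction of Cr(VI) present as CrO4^2- (the remainder is HCrO4-)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in np.arange(4.0, 10.5, 1.0):
    f = chromate_fraction(pH)
    print(f"pH {pH:4.1f}: CrO4^2- = {100 * f:5.1f} %   HCrO4- = {100 * (1 - f):5.1f} %")
```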
Chromium Removal by Fe II Species
Cr VI removal by reduction to Cr III with ferrous iron (Fe II (aq) or Fe II -bearing minerals, including Fe 3 O 4 , FeS 2 , and green rusts) and subsequent adsorption, precipitation, co-precipitation, or coagulation is well documented [5,6]. The following mechanism of Cr VI removal by Fe II is widely accepted in the geochemical literature: (1) Cr VI is reduced to Cr III by Fe II ; (2) Fe II is oxidized to Fe III ; and (3) Fe III rapidly precipitates as hydroxide. The reduced Cr III is easily adsorbed or co-precipitated with the ferric hydroxide [10,127]. While this view corroborates the thermodynamic data, it remains to be convincingly explained why quantitative Cr VI reduction should precede adsorption. This concern is supported by the fact that cationic Cr III adsorption onto the positively charged surface of iron oxides and oxyhydroxides at pH values higher than 4.0 is not always favorable (depending on the specific (hydr)oxide). As an example, under subsurface conditions where magnetite (Fe 3 O 4 , pH pzc ≈ 5.0) is the major mineral, the Fe 3 O 4 surface is positively charged at pH < 5.0; therefore, anionic soluble Cr VI species are expected to be strongly attracted via electrostatic interactions. For pH > 5, increasingly negatively charged surfaces develop, reducing the attraction of Cr VI species [5,115]. However, some quantitative adsorption may still occur, suggesting inner-sphere adsorption mechanisms via ligand exchange reactions [115]. A more rational view is that negatively charged Cr VI species are adsorbed onto positively charged Fe (hydr)oxides and that reduction occurs in the adsorbed state [128]. The very low solubility of Cr III phases implies that quantitative re-dissolution will not occur. Moreover, the formation of Fe III /Cr III mixed (hydr)oxides further decreases the solubility of the solid phase.
Overview of Reactions of Engineering Importance
Under environmental conditions, virtually all transformations from Cr VI to Cr III and vice versa are mediated by constituents that are ubiquitous in nature. Depending mostly on the water flow velocity (i.e., on the contact time) and on the intrinsic reactivity of the Fe II -bearing phases, interactions with contaminated water may not achieve an equilibrium state. In such cases, the kinetics of the transformations between Cr VI and Cr III become important [5]. Chemical reduction of Cr VI to Cr III is a demonstrated path for Cr removal in many water treatment strategies. Ideally, Cr VI reduction is followed by precipitation of the soluble Cr III species as particulate Cr(OH) 3 or adsorbed solids (flocs) that can be filtered from the water. The most common reducing agent is Fe II , with reaction times on the order of seconds to hours, depending on pH [5]. The state-of-the-art knowledge from chromium geochemistry can be summarized by the following two-step mechanism of Cr VI removal at the Fe 3 O 4 surface [129]: (1) electrostatic adsorption of Cr VI anions at the Fe 3 O 4 surface; and (2) the electron transfer reaction between Cr VI and the structural Fe II to form Cr III (OH) 3 . The Cr VI reduction is accompanied by the simultaneous oxidation of Fe II to Fe III . Nascent Fe III hydroxides are powerful adsorbing and enmeshing agents for Cr VI . Due to similarities in atomic size, Fe and Cr form mixed oxides that are non-conductive for electrons and lead to passivation of the magnetite. Accordingly, Cr VI reduction by Fe 3 O 4 is rarely quantitative (e.g., >70%). The removal mechanism suggested by Kendelewicz et al. [129] is valid under a wide range of reaction conditions, despite differences in Cr speciation, pH values, background electrolyte, and changes of the adsorbing surface charge [115]. It can be postulated that its validity does not depend on the nature of the Fe II -bearing material and, in particular, that it will still hold if Fe II results from the oxidative dissolution of Fe 0 (Fe 0 corrosion).
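The "seconds to hours" reaction times mentioned above can be illustrated with a simple rate calculation. The sketch below integrates a second-order rate law for homogeneous Cr VI reduction by dissolved Fe II with the 3:1 Fe II :Cr VI electron-balance stoichiometry; the rate constant is a hypothetical value, since the true constant is strongly pH dependent.

```python
# Sketch: time scale of homogeneous Cr(VI) reduction by dissolved Fe(II),
# assuming rate = k * [Fe(II)] * [Cr(VI)] and 3 Fe(II) oxidized per Cr(VI).
# The rate constant is a hypothetical value; in reality it varies strongly with pH.
k = 5.0e2        # L mol^-1 s^-1, assumed effective second-order rate constant
cr = 2.0e-4      # mol/L, roughly 10 mg/L Cr(VI)
fe = 1.0e-3      # mol/L, dissolved Fe(II) in excess
cr0 = cr
dt = 0.1         # s
t = 0.0

while cr > 0.01 * cr0 and t < 3600.0:
    rate = k * fe * cr                    # mol L^-1 s^-1
    cr = max(cr - rate * dt, 0.0)
    fe = max(fe - 3.0 * rate * dt, 0.0)   # electron-balance stoichiometry
    t += dt

print(f"~99% of Cr(VI) reduced after about {t:.0f} s under these assumed conditions")
```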
Analysis of the Fe 0 /Cr VI /O 2 /H 2 O System
Aqueous Cr VI removal in the presence of Fe 0 depends primarily on the chemical thermodynamics of five redox systems (Table 1): (1) Fe II /Fe 0 (Equation (19)); (2) Cr VI /Cr III (Equations (20) and (25)); (3) H + /H 2 (Equation (21)); (4) Fe III /Fe II (Equation (22)); and (5) O 2 /HO − (Equation (23)). Both the aqueous solution behavior and the redox thermodynamics are of interest. In addition, the reaction kinetics are a decisive factor for the design of remediation systems [130,131]. In the context of water treatment by granular Fe 0 , the negative potential of the redox couple Fe II /Fe 0 (Equation (19)) is to be exploited to transform the highly soluble Cr VI into sparingly soluble Cr III in an electrochemical process (Equation (26)). Further electrochemical processes include the reduction of water (Equation (27)), Fe III (Equation (28)), and dissolved O 2 (Equation (29)) with Fe 0 . Of these, only water reduction (Equation (27)) is likely to be quantitative, because of the non-conductive nature of the oxide scale and its role as a physical barrier. Equation (28) is disfavored by the low solubility of Fe III species. There are myriad abiotic chemical reactions likely to occur in Fe 0 /H 2 O systems (Equation (30) through Equation (44)). Equation (30) accelerates Fe 0 corrosion by consuming Fe 2+ (Le Chatelier's principle), thus increasing the production of iron oxides/hydroxides for Cr VI adsorption and co-precipitation. Equation (31) sustains the chemical reduction of adsorbed Cr VI ; it is not a reductive precipitation, but a reduction of adsorbed Cr VI , and this reaction path is not necessarily quantitative. Equation (32), involving soluble Cr(OH) 3 , is slow [137]. Equations (36)-(39) show that Cr III is a hard Lewis acid with a high tendency to undergo hydrolysis [138]. Equations (40)-(42) concern polymerization: because in most natural waters the aqueous concentration of Cr III is very low, and the kinetics of polymerization are slow under environmentally relevant pH and temperature values, polymeric Cr III species are never significant in natural unpolluted aquatic systems [116,119,138]. Equations (43) and (44) describe the formation of mixed (oxy)hydroxides within Fe 0 /H 2 O systems, a process that occurs at pH greater than 4; it was hypothesized that Cr 0.25 Fe 0.75 (OH) 3 will form if Fe III and Cr III are generated only by the reaction between Fe II and Cr VI [76,119,137]. Summarizing, the analysis of the Fe 0 /Cr VI /O 2 /H 2 O system suggests that adsorption of negatively charged HCrO 4 − /CrO 4 2− onto positively charged iron (hydr)oxides is the most likely reaction path. This high affinity, coupled with the barrier function of the oxide scale, implies that "reductive precipitation" is at most a side removal path.
The Mechanism of Cr VI Removal Revisited
Knowledge from: (1) chromium geochemistry (Section 4); and (2) the theoretical analysis of the Fe 0 /Cr VI /H 2 O system (Section 5.1) unequivocally proves the reductive precipitation theory for aqueous Cr VI removal in the presence of Fe 0 to be faulty. This corresponds to an alternative concept introduced by Noubactep in 2006 [11,80,114,[139][140][141][142][143], but largely ignored within the Fe 0 research community [11,82]. According to Ghauch [82], only some five research groups worldwide have tested (and validated) the alternative concept. The alternative concept argues that the Fe 0 /H 2 O system is a complex system in which quantitative contaminant reduction, when it occurs, is not the cathodic reaction coupled to the anodic dissolution of Fe 0 . Accordingly, (quantitative) Cr VI reduction in Fe 0 /H 2 O systems is mediated by Fe II species (and possibly also by H/H 2 ) resulting from the electrochemical dissolution of Fe 0 by water or H + . The particularity is that Fe II is continuously produced, while freshly generated iron hydroxides act as excellent adsorbents for Cr VI and Fe II (structural Fe II or Fe II (ads) ). It should be kept in mind that solid structural Fe II seems to be a stronger reducing agent than both dissolved Fe II [128] and Fe 0 . Similarly, inner-spherically adsorbed Fe II is also more reducing than dissolved Fe II . All these facts convincingly explain the better Cr VI removal efficiency of a Fe 0 /sand mixture with iron hydroxides coated on the sand, compared to that of Fe 0 alone (same Fe 0 mass) [144].
Application to Water Filters
The knowledge that Fe 0 is mostly a generator of iron hydroxides and oxides, acting as coagulating/adsorbing agents, was already successfully applied in Europe around the year 1890 for safe drinking water production [12]. However, the Fe 0 -amended filtering systems available today are based on a more pragmatic approach. Two examples will be given for illustration: the SONO arsenic filter [145][146][147] and the Indian Institute of Technology Bombay (IITB) arsenic filter [148]. It is important to underline here that neither filter is specific to arsenic removal; both would remove Cr VI and other contaminants as well. The first arsenic filtration system (the 3-Kolshi filter) was developed in 1999 by Abul Hussam and his brother Abul Munir, after two years of research motivated by the need to develop a simple and low-cost water treatment system for mitigation of the arsenic crisis in Bangladesh. The 3-Kolshi filter (made entirely from readily available local materials) consisted of three clay containers placed one on top of another, with water flowing through a series of filters made of sand, iron chips, and wood charcoal. Even though it was successfully tested for its efficacy in removing arsenic from groundwater, the 3-Kolshi filter had a major problem: the rapid clogging of the iron material [149,150]. To solve this issue, the team led by Abul Hussam and Abul Munir released the SONO filter in 2001; in this new filtration assembly, the clay containers were replaced by plastic buckets and, most importantly, the iron chips were replaced by a composite iron matrix (a mixture of metallic iron and iron hydroxides). This technology was patented in 2002 and, by 2010, about 160,000 SONO filters had been deployed in Bangladesh, India, and Nepal. Even though SONO filters are not freely accessible to people in need, at a price of $35-40 (for an expected life span of at least 5 years) and with operating costs of up to $10 per 5 years, they are among the most affordable water filters available today [145,149,150]. The IITB filter is the most recent result of continuous research into the development of a robust, low-cost, and simple arsenic removal water treatment system for poor communities in low-income areas. It uses non-galvanized iron nails which are corroded under a mildly oxidizing environment (presence of dissolved oxygen). The formed Fe II is oxidized to Fe III , generating a highly oxidizing intermediate which co-oxidizes As III to As V ; subsequently, As V is adsorbed on the corrosion products present at the surface of the Fe 0 . Along with a high performance in removing arsenic, this technology also has the advantage that the Fe 0 requirement is 20 times lower than that reported in the literature for other methods achieving similar arsenic removal efficiencies. The IITB filter is a non-patented system designed for small communities. Its implementation started in 2008 and there are already some 60 systems operating under the maintenance of the rural population in India (mostly in West Bengal) [150].
The most important concern with Fe 0 -based filters is permeability loss (reduction of the hydraulic conductivity), which leads to incomplete utilization of the Fe 0 [26]; long-term permeability can be sustained only if the volumetric expansion of iron corrosion products (which is the main cause of permeability loss) is properly considered during the design of Fe 0 -based filters [151]. The two examples considered herein have solved the clogging problem differently. On the one hand, SONO filters use a composite iron matrix with a high initial porosity, capable of storing the iron corrosion products [145]. On the other hand, IITB filters perform a sort of flocculation in a contactor placed on top of the filter, where the water is contacted with iron nails and with air. Subsequently, the formed hydrous ferric oxide floccules are filtered on a fixed bed filled with layers of coarse and fine gravel. During this process, the oxidation of Fe II to Fe III and of As III to As V occurs. This configuration shows that the IITB filter can be regarded as a modification of the revolving purifier (Anderson Process) [12,152], with the added advantage that no revolution is needed and the system can operate energy-free.
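The role of volumetric expansion can be made tangible with a first-order estimate of porosity loss in a small filter bed. In the sketch below, the bed volume, initial porosity, Fe 0 loading, extent of corrosion, and the oxide expansion coefficients are all assumed, indicative values rather than measured data.

```python
# Sketch: first-order estimate of porosity loss in a Fe0-based filter caused by
# the volumetric expansion of corrosion products.  All inputs are assumed,
# indicative values; the expansion coefficients are literature-type estimates.
rho_fe = 7.87                  # g/cm3, density of metallic iron
expansion = {                  # V(corrosion product) / V(Fe0 consumed), assumed
    "magnetite":    2.1,
    "goethite":     2.9,
    "ferrihydrite": 4.0,
}

bed_volume_cm3 = 1000.0        # assumed filter bed volume
initial_porosity = 0.45        # assumed
fe_mass_g = 300.0              # assumed Fe0 loading
fraction_corroded = 0.25       # assumed extent of Fe0 depletion

v_fe_consumed = fraction_corroded * fe_mass_g / rho_fe
pore_volume_cm3 = initial_porosity * bed_volume_cm3

for oxide, coeff in expansion.items():
    net_volume_gain = v_fe_consumed * (coeff - 1.0)
    residual_porosity = (pore_volume_cm3 - net_volume_gain) / bed_volume_cm3
    print(f"{oxide:12s}: residual porosity ~ {residual_porosity:.3f} "
          f"(initial {initial_porosity})")
```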
According to the 2017 World Health Organization Joint Monitoring Programme report, 844 million people still lacked a basic drinking water service; this includes 263 million people who spent over 30 min collecting water from sources outside the home, and 159 million people who collected drinking water directly from surface water sources (58% of whom lived in sub-Saharan Africa) [153]. Since Fe 0 -amended filters have been successfully tested in the battle against arsenic poisoning in southern Asia, they may also offer a unique opportunity to address the worldwide shortage of safe drinking water in a self-reliant manner.
Concluding Remarks
The mechanism of contaminant removal in Fe 0 -based systems and the identity of the redox-active species involved in that mechanism have been the subject of an active debate in recent years. The first concept proposed in the early nineties for the removal of Cr VI with Fe 0 , and widely accepted since then, was the reductive-precipitation mechanism. This concept attributed the efficiency of Fe 0 /H 2 O systems to chemical transformations of Cr VI , mainly direct reduction with Fe 0 and subsequent (co-)precipitation of the resulting cations. Recently, a new approach (the adsorption-co-precipitation concept) provided added perspectives on the mechanism of contaminant removal in Fe 0 /H 2 O systems, seeking to demonstrate that direct reduction (if applicable) is less important than had previously been assumed. According to this new concept, contaminants are quantitatively removed in Fe 0 /H 2 O systems principally by adsorption and co-precipitation, while reduction, when possible, is mainly the result of indirect reducing agents produced by Fe 0 corrosion. Based on the current knowledge, this review clearly demonstrates that Cr VI removal in Fe 0 /H 2 O systems actually follows a very complex mechanism, involving adsorption, reduction, and co-precipitation/entrapment processes. Therefore, the new adsorption-co-precipitation concept should not be considered a contradiction of, but rather an extension to, the reductive-precipitation theory.
"year": 2018,
"sha1": "85026ebbd64a1a42bc0ba228e466d36658daa313",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/10/5/651/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d61e537c57160482312d5d6d95cc1733743845af",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
Apollo: Zero-shot MultiModal Reasoning with Multiple Experts
We propose a modular framework that leverages the expertise of different foundation models over different modalities and domains in order to perform a single, complex, multi-modal task, without relying on prompt engineering or otherwise tailor-made multi-modal training. Our approach enables decentralized command execution and allows each model to both contribute and benefit from the expertise of the other models. Our method can be extended to a variety of foundation models (including audio and vision), above and beyond only language models, as it does not depend on prompts. We demonstrate our approach on two tasks. On the well-known task of stylized image captioning, our experiments show that our approach outperforms semi-supervised state-of-the-art models, while being zero-shot and avoiding costly training, data collection, and prompt engineering. We further demonstrate this method on a novel task, audio-aware image captioning, in which an image and audio are given and the task is to generate text that describes the image within the context of the provided audio. Our code is available on GitHub.
INTRODUCTION
Humans perceive the world through different types of data (e.g., images and sounds) that they get from their senses. Similarly, to understand the world, artificial intelligence research also tries to solve problems that use multimodal data (Antol et al., 2015; Paz-Argaman et al., 2020; Ji et al., 2022; Rassin et al., 2023). Solving multimodal tasks requires interpreting and reasoning over heterogeneous data, which poses several challenges, such as the training process (Wang et al., 2020).
Large pre-trained foundation models demonstrate distinct expertise and encompass comprehensive knowledge within the specific domains and modalities they are trained on. For example, BERT (Devlin et al., 2018) and GPT3 (Brown et al., 2020) are proficient in processing language, while CLIP (Radford et al., 2021) excels in grounding text to visual content. However, the large and increasing variety of multimodal tasks (e.g., vision and language navigation (Ku et al., 2020) and video question-answering (Lei et al., 2018)) do not have foundation models. Previous efforts to tackle complex multimodal tasks are either (1) fully supervised, requiring expensive paired input and output task-specific data (Chen et al., 2019; Li et al., 2022); (2) semi-supervised, using task-specific uncoupled data for each modality or domain (Nukrai et al., 2022; Guo et al., 2019; Zhao et al., 2020; Gan et al., 2017; Su et al., 2022); (3) few-shot, using a few coupled task-specific examples; or (4) zero-shot (ZS), using no task-specific data. The approaches for ZS include a sequence-to-sequence unified approach that is trained on multiple tasks (Lu et al., 2022; Zhu et al., 2022; Gupta et al., 2022). However, as the list of tasks is fixed, any new task requires changes to the model and additional training.
Socratic models, an approach for few-shot and ZS learning, compose pre-trained models by directly using language as the intermediate representation by which the modules exchange information with each other (Zeng et al., 2022). Thus, this approach heavily relies on a large language model (LLM) and requires prompt engineering, which lacks a proper methodology. Relying on LLMs might be sub-optimal, particularly for multimodal tasks that do not involve language, e.g., music and vision tasks (Qiu & Kataoka, 2018; Aleixo et al., 2021).
In this paper, we propose a different approach to multimodal tasks that leverages the expertise of foundation models and shares knowledge through a common latent space without relying on language as a mediator. The importance of knowledge sharing between experts can be illustrated by the Apollo program, which required the collaboration of experts from diverse fields, such as physics, chemistry, and biology, to achieve the common goal of landing a man on the moon. By sharing their knowledge, these experts were able to overcome the challenges and undertake a task never done before. Our premise that complex tasks, like the Apollo program, require multiple experts inspired our approach, which relies on synergy and knowledge sharing between pre-trained transformer components through gradient updates of a combined loss at inference time. This allows our model to perform new tasks in a zero-shot setup without any further training or tuning steps. Unlike Socratic models, the proposed framework, which we named APOLLO, is not limited to language models. It can be applied to a variety of transformer models of different modalities, such as audio and vision, moving beyond LLMs and not depending on prompts. Furthermore, APOLLO enables decentralized command execution, allowing each model to contribute and benefit from the expertise of others.
We demonstrate our approach on two tasks. On the well-known task of stylized image captioning (Zhao et al., 2020; Guo et al., 2019; Nukrai et al., 2022; Mathews et al., 2016; Gan et al., 2017), our ZS Apollo method gained an absolute improvement of up to 58% in style accuracy and up to 2.3% in the relevance of the text to the image, compared to the state-of-the-art semi-supervised models on the SentiCap (Mathews et al., 2016) and FlickrStyle10K (Gan et al., 2017) benchmarks. We further demonstrate this method on a novel task, audio-aware image captioning, in which an image and audio are given and the task is to generate text that describes the image within the context of the provided audio.
THE APOLLO METHODS
The cutting-edge models across diverse modality domains primarily rely on transformer-based architectures (Vaswani et al., 2017). Our objective is to leverage the expertise of multiple pre-trained transformer models to generate output through shared impact between the models. A transformer model consists of two primary components: an encoder and a decoder. Each component comprises L layers of encoders and decoders, and within these layers, multiple attention heads are present, each with query (Q), key (K), and value (V) functions. The attention mechanism enables the model to selectively focus on different parts of the input data. This focus is determined by the interactions between Q and K, which produce attention scores and influence the distribution of V. Function Q operates on the input token embedding, while K and V generate subsequent output tokens by considering past tokens. This implies that both K and V can influence the final prediction output, given Q. To exercise control over the model's output, we seek to influence the 'context cache', which contains both the key (K) and the value (V), thus guiding the model's predictions in a desired direction. We consider a probability vector P_{T_j} for the output of a transformer model, where P_{T_j} represents the probability of candidates {x_i}_{i=1}^{n} conditioned on modalities {m_i}_{i=1}^{M}. The probability is parameterized by an expert transformer T_j, for which we select a subset of K and V from certain layers l to define a context, C^l_{T_j}.
Figure 1: An overview of the Decentralization of Guidance Efforts approach.
Two Experts We generalize the loss function used by Tewel et al. (2021) to any two transformer models in which transformer T_1 shares knowledge with T_2, yielding the loss in equation 1. In order to guide the model's prediction, we minimize the loss in equation 1 over the context C_{T_2}, which implements the following concept: the first term in equation 1 pulls the preference tokens of transformer T_2 towards the target token preferences of T_1 through t gradient steps, potentially overriding the original knowledge of transformer T_2. To preserve the transformer's original knowledge, an additive regularization term constrains the transformer's deviation from its initial preference P_{T_2}^{(0)}. λ is a hyper-parameter that balances the two loss terms. The guidance method implemented by equation 1 is denoted Experts-Summation.
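A minimal sketch of how such a guidance loss can be written and minimized over a free context parameter is given below; the tensors stand in for next-token probability vectors, and the names, λ value, and toy optimizer settings are illustrative rather than the exact implementation.

```python
# Sketch of the two-expert (Experts-Summation) guidance loss described above.
# p_t1: target preferences of the guiding expert T1
# p_t2: current preferences of the guided transformer T2 (depends on its context)
# p_t2_init: T2's initial preferences, used for regularization
# All names and hyper-parameters are illustrative.
import torch

def experts_summation_loss(p_t1, p_t2, p_t2_init, lam=0.5, eps=1e-12):
    guidance = -(p_t1 * torch.log(p_t2 + eps)).sum(dim=-1)             # pull toward T1
    regularization = -(p_t2_init * torch.log(p_t2 + eps)).sum(dim=-1)  # stay near start
    return (guidance + lam * regularization).mean()

# Toy usage: optimize a free "context" parameter that shapes T2's distribution.
vocab = 8
context_logits = torch.zeros(1, vocab, requires_grad=True)
p_t1 = torch.softmax(torch.randn(1, vocab), dim=-1)
p_t2_init = torch.softmax(context_logits.detach(), dim=-1)

optimizer = torch.optim.SGD([context_logits], lr=0.5)
for _ in range(5):                      # a few gradient steps per generated token
    p_t2 = torch.softmax(context_logits, dim=-1)
    loss = experts_summation_loss(p_t1, p_t2, p_t2_init)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```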
Multiple Experts We consider a framework that contains M ≥ 2 expert transformers {T_j}_{j=1}^{M}. The Experts-Summation loss can be extended to the multi-expert case by simply summing multiple weighted terms in the loss function. This extension comes at the cost of tuning multiple hyper-parameters, making it challenging to find the balance between all experts' loss components. Therefore, we propose a new guidance loss inspired by the attention concept, which offers a safer alternative: Experts-Product (equation 2). The target probability in the second term of equation 2 is the element-wise multiplication of all experts' probabilities, denoted {P_{T_j}}_{j=1}^{M-1}. This operation merges the experts' preferences and directs the transformer T_M toward a common region, while maintaining proximity to the initial suggestion boundaries, as guided by the first loss term. It does not add hyper-parameters compared to Experts-Summation, and yet it effectively enforces P_{T_M}^{(t)} to agree with the experts' common support.
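The Experts-Product idea can be sketched in the same style: the guiding experts' probability vectors are merged element-wise into a single renormalized target, which then replaces the per-expert weighted terms. The function names and the placement of the regularization term are illustrative.

```python
# Sketch of the Experts-Product target and loss.  expert_probs holds the
# (batch, vocab) probability tensors of the M-1 guiding experts; p_tm is the
# guided transformer's current distribution and p_tm_init its initial one.
import torch

def experts_product_target(expert_probs, eps=1e-12):
    target = torch.ones_like(expert_probs[0])
    for p in expert_probs:
        target = target * (p + eps)               # element-wise merge of preferences
    return target / target.sum(dim=-1, keepdim=True)

def experts_product_loss(target, p_tm, p_tm_init, lam=0.5, eps=1e-12):
    guidance = -(target * torch.log(p_tm + eps)).sum(dim=-1)
    regularization = -(p_tm_init * torch.log(p_tm + eps)).sum(dim=-1)
    return (guidance + lam * regularization).mean()
```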
Decentralization of Guidance Efforts
In the case of M expert transformers, the straightforward way to apply all the experts' preferences to P_T is by optimizing a flat objective function, as in Experts-Product. One challenge in accommodating all preferences simultaneously is the lack of effective communication among the guiding experts themselves. Alternatively, we propose a hierarchical optimization process, in which one domain expert guides another, and the latter guides the top-level expert model. This allows experts to share their knowledge not only with the top-level expert model but also with each other. In this process, a mediator expert is responsible for producing the final recommendation for the top-level expert model. This expert considers the perspective of the other experts and adapts to minimize potential conflicts in their guidelines. To better understand this approach, we demonstrate it on a case of M = 3 expert transformers, as presented in Figure 1. In this example, experts 1 (e_1) and 2 (e_2) are domain experts who guide a top-level expert (e_3), which plays a central role in the system. P_{e_1}, P_{e_2}, and P_{e_3} denote the probabilities for the candidates {x_i}_{i=1}^{n} for experts 1, 2, and 3, respectively. The objective of aligning Expert 3 (P_{e_3}) with both Expert 1 (P_{e_1}) and Expert 2 (P_{e_2}) is achieved by solving the hierarchical optimization problems defined by equations 3 and 4. First, we optimize the probability P_{e_2} over the context cache C_2 (equation 3). Second, we optimize the model probability P_{e_3} at the top of the hierarchy by adjusting C_3 (equation 4). This approach decentralizes the guidance efforts among multiple models, enhancing the interaction between the experts.
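A compact way to see the hierarchy is as two nested guidance problems solved in sequence, as sketched below; the toy `guide` routine stands in for the context-cache optimization of a real expert, and all tensors and step counts are placeholders.

```python
# Sketch of hierarchical (decentralized) guidance for M = 3 experts:
# first the mediator e2 is aligned with e1 (equation 3), then the top-level
# expert e3 is aligned with the updated e2 (equation 4).  The `guide` routine
# is a toy stand-in for optimizing a real expert's context cache.
import torch

def cross_entropy(p_target, p, eps=1e-12):
    return -(p_target * torch.log(p + eps)).sum(dim=-1).mean()

def guide(context_logits, p_target, steps=5, lr=0.5):
    optimizer = torch.optim.SGD([context_logits], lr=lr)
    for _ in range(steps):
        p = torch.softmax(context_logits, dim=-1)
        loss = cross_entropy(p_target, p)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.softmax(context_logits, dim=-1).detach()

vocab = 8
p_e1 = torch.softmax(torch.randn(1, vocab), dim=-1)        # domain expert e1
c_e2 = torch.randn(1, vocab, requires_grad=True)            # mediator context C_2
c_e3 = torch.randn(1, vocab, requires_grad=True)            # top-level context C_3

p_e2 = guide(c_e2, p_e1)    # equation 3: e2 adapts toward e1
p_e3 = guide(c_e3, p_e2)    # equation 4: e3 adapts toward the mediated target
```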
STYLIZED IMAGE CAPTION GENERATION
Goal Our objective in this task is to generate captions that accurately describe the input image while incorporating the desired style. We aim to achieve this without training any model. Instead, our approach focuses on leveraging the expertise of diverse models and utilizing their capabilities to generate captions with the desired style.
METHOD
APOLLO-CAP In order to generate captions for images with a specific style, we use multiple experts. We use the LLM GPT-2 (Radford et al., 2019) to iteratively predict tokens. We use GPT-2 instead of its advanced versions, e.g., GPT-3, because GPT-2 is open-source, allowing us to modify its internal representations, such as its keys (K) and values (V). We use an image-text alignment model, CLIP (Radford et al., 2021), to evaluate the relevance of each candidate token to the given image. Each candidate token is appended to the current partial sentence (X_{t,i} = x_{i+1}, x_i, ..., x_0) and combined with the image as input to CLIP. The cosine similarity S_CLIP between each candidate and the image is computed in the embedding space, and probabilities are generated by applying softmax with a smoothing temperature parameter τ.
We consider the first-layer output K and V as CLIP's context for guidance purposes. Our last expert is a style-text alignment model: we employ a style classification model and score each candidate based on its alignment with the desired style. We generate probabilities for all candidates by applying softmax with a smoothing temperature parameter. We use roBERTa (Liu et al., 2019) for sentiment realization and DeepMoji (Felbo et al., 2017) for applying romantic and humorous styles.
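The image-text expert's contribution reduces to scoring every candidate continuation against the image and converting the similarities into a distribution, roughly as sketched below; the embedding shapes and temperature are placeholders, and the real pipeline uses CLIP's own tokenizer and preprocessing.

```python
# Sketch: turning image-text similarities over candidate continuations into a
# probability vector via a temperature softmax.  Random tensors stand in for
# CLIP embeddings; shapes and the temperature value are illustrative.
import torch
import torch.nn.functional as F

def image_text_probs(image_emb, candidate_text_embs, tau=0.1):
    """image_emb: (d,); candidate_text_embs: (n_candidates, d)."""
    image_emb = F.normalize(image_emb, dim=-1)
    candidate_text_embs = F.normalize(candidate_text_embs, dim=-1)
    sims = candidate_text_embs @ image_emb        # cosine similarity per candidate
    return torch.softmax(sims / tau, dim=-1)      # smoothed candidate probabilities

probs = image_text_probs(torch.randn(512), torch.randn(4, 512))
print(probs)
```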
Algorithm 1: Optimizing GPT-2 towards an image and style
Algorithm 2: Optimizing CLIP image embedding towards the desired style
GRADIENT UPDATES FOR MODEL GUIDING
By combining the image-oriented and style-oriented probabilities, we can manipulate GPT-2 through its context vector to generate an image caption with the desired style.
Let I be the input image, x_{i+1} the next candidate token, C_i GPT-2's context vector, and p_{x_{i+1}} = GPT(x_i, C_i) the probability predicted by GPT-2 for x_{i+1}. The goal is to iteratively optimize the context C_i in order to improve the description of the image with the desired style. The optimization steps are outlined in Algorithm 1. For each generated token, a total of T optimization steps are performed as follows: alternative probabilities of the next token are calculated according to a set of experts (row 5). Then, a loss function incorporating the experts' predictions is computed (row 6). As suggested by ZeroCap (Tewel et al., 2021), a regularization term is added to keep the optimized probability close to the original probability generated by GPT-2 in the initial step. Minimizing this loss over the context vector results in an image-style-aware probability. The context vector is updated by applying a single gradient step (row 7). This optimization loop is repeated for each generated token until the captioning process is complete. The outer loop is executed with 5 beams, and the inner loop is applied to the top K = 512 tokens.
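The per-token loop can be summarized in a few lines of code, as sketched below; `lm_step`, `expert_probs`, and `loss_fn` are toy stand-ins for the real GPT-2 forward pass, the expert scoring, and the guidance loss, and beam search and top-K candidate filtering are omitted for brevity.

```python
# Sketch of the token-by-token guidance loop of Algorithm 1.  The callables
# below are toy stand-ins; beam search and top-K filtering are omitted.
import torch

def generate_guided_caption(lm_step, expert_probs, loss_fn, context,
                            max_tokens=3, inner_steps=5, lr=0.3):
    tokens = []
    for _ in range(max_tokens):
        context = context.detach().requires_grad_(True)
        optimizer = torch.optim.SGD([context], lr=lr)
        p_init = lm_step(tokens, context).detach()       # baseline LM distribution
        for _ in range(inner_steps):                     # T optimization steps
            p_lm = lm_step(tokens, context)
            loss = loss_fn(p_lm, p_init, expert_probs(tokens))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        with torch.no_grad():
            tokens.append(int(torch.argmax(lm_step(tokens, context))))
    return tokens

# Toy stand-ins so the sketch runs end-to-end on random tensors.
vocab = 16
def lm_step(tokens, context):          # pretend LM shaped by the context
    return torch.softmax(context, dim=-1).squeeze(0)
def expert_probs(tokens):              # pretend experts: a fixed target distribution
    return torch.softmax(torch.linspace(0.0, 1.0, vocab), dim=-1)
def loss_fn(p_lm, p_init, p_experts, lam=0.5, eps=1e-12):
    guidance = -(p_experts * torch.log(p_lm + eps)).sum()
    regularization = -(p_init * torch.log(p_lm + eps)).sum()
    return guidance + lam * regularization

caption_ids = generate_guided_caption(lm_step, expert_probs, loss_fn,
                                      torch.zeros(1, vocab))
print(caption_ids)
```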
Next, we provide a detailed implementation for each guidance approach described in Section 1.1.
APOLLO-CAP: Sum of Experts After the generative transformer calculates its probability for the next token, each expert calculates its alternative probability. To align image and text, we calculate the CLIP probability p^{CLIP}_{x_{i+1}} for the top 512 candidates (see equation 5) to determine the best probability vector for image-text correspondence. In addition, a style-aware probability p^{STYLE}_{x_{i+1}} is computed based on the style model's scores to encourage a certain style (see equation 6). The guidance loss L is computed as a weighted sum of the cross-entropy between the augmented probabilities and the baseline GPT-2 probability. APOLLO-CAP: Product of Experts Similarly to the sum of experts, the CLIP probability p^{CLIP}_{x_{i+1}} and style probability p^{STYLE}_{x_{i+1}} are computed according to equation 5 and equation 6 respectively. The guided loss L is composed of two terms: (1) the cross-entropy between the product of the CLIP and STYLE probabilities and the current GPT suggestion, and (2) a regularization term. APOLLO-CAP: Decentralization We suggest optimizing CLIP's image embedding such that the resulting text-image matching will be more style-oriented. Since CLIP is a transformer encoder, we apply the decentralization concept described in Section 1.1 as follows. Let x be candidate captions for image I. We denote CLIP's first-layer K, V outputs by C, used as the context vector for optimization. Let P_{STYLE} be the probability vector produced by the style expert model given x, and P^{(0)}_{CLIP} = CLIP(x, I | C^{(0)}) be CLIP's initial probability prediction for x given I, conditioned on the initial context C^{(0)}. We compute the target probability as the product of the style expert probability and CLIP's initial probability: P_{target} = P^{(0)}_{CLIP} · P_{STYLE}. We apply J gradient steps to optimize CLIP's image embedding. As a result, the optimized CLIP produces higher probabilities for captions that fit the image content from the specific style perspective. This approach is presented in Algorithm 2. We denote the output probability as p^{CLIP-STYLE}_{x_{i+1}}, and then incorporate it into the loss function presented in equation 8, resulting in the guidance loss L.
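The three guidance losses can be written compactly as below. The weights, the exact form of the regularisation term and the renormalisation of the expert product are assumptions standing in for equations 7 to 9.

```python
import torch

def cross_entropy(target_probs, model_logprobs):
    # CE(target, model) = -sum_x target(x) * log model(x)
    return -(target_probs * model_logprobs).sum(-1)

def loss_sum_of_experts(model_logprobs, p_clip, p_style, p_gpt0,
                        w_clip=1.0, w_style=1.0, lam=0.2):
    """APOLLO-CAP: weighted sum of cross-entropies with each expert,
    plus a ZeroCap-style regulariser toward the original GPT-2 distribution."""
    return (w_clip * cross_entropy(p_clip, model_logprobs)
            + w_style * cross_entropy(p_style, model_logprobs)
            + lam * cross_entropy(p_gpt0, model_logprobs))

def loss_product_of_experts(model_logprobs, p_clip, p_style, p_gpt0, lam=0.2):
    """APOLLO-CAP-P: cross-entropy with the (renormalised) product of the
    CLIP and style distributions, plus the same regulariser."""
    p_prod = p_clip * p_style
    p_prod = p_prod / p_prod.sum(-1, keepdim=True)
    return cross_entropy(p_prod, model_logprobs) + lam * cross_entropy(p_gpt0, model_logprobs)

def decentralised_target(p_clip_initial, p_style):
    """APOLLO-CAP-PD: target used to optimise CLIP's own context,
    P_target = P_CLIP^(0) * P_STYLE (renormalised)."""
    p = p_clip_initial * p_style
    return p / p.sum(-1, keepdim=True)
```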
EXPERIMENTAL SETUP
Data We evaluate our approach on two benchmarks: SentiCap (Mathews et al., 2016) for positive and negative styling, and FlickrStyle10K (Gan et al., 2017) for humorous and romantic styling.
Evaluation Metrics To evaluate the results, we examined the following attributes of the captions: (1) fluency, i.e., the coherency and naturalness of the generated text; (2) text-image correspondence (TIC); and (3) style accuracy. We evaluate fluency using the perplexity function of GPT-2, which measures the model's ability to predict the next word in a sequence. Lower perplexity values indicate better fluency of the generated captions. The perplexity scores were clipped to the maximal value of 1500 and then normalized as 1 − perplexity/1500, formalizing a fluency score (the higher the better). In order to quantify the alignment between an image and its caption (TIC), we used CLIPScore (Hessel et al., 2022), the cosine similarity between the CLIP embeddings of the image and the caption. We measured style accuracy using large pre-trained models: roBERTa (Hartmann et al., 2022) and DeepMoji (Felbo et al., 2017). roBERTa is a sentiment classification model that generates a probability for either positive or negative. In order to evaluate the emotional styles of FlickrStyle10K (humorous and romantic), we employed the DeepMoji model. Given a text input, DeepMoji generates a 64-dimensional probability vector over various emotions, which are aggregated to represent humorous and romantic styles (see Appendix A.3).
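A sketch of the fluency and image-text scores follows: GPT-2's token-level loss is exponentiated to get perplexity and mapped to the 1 − perplexity/1500 fluency score, while the image-text score is approximated here by the plain CLIP cosine similarity (the official CLIPScore additionally rescales and clips it). The model checkpoints are illustrative assumptions.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast, CLIPModel, CLIPProcessor

gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()
gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def fluency_score(caption, max_ppl=1500.0):
    """GPT-2 perplexity, clipped at 1500 and mapped to 1 - ppl/1500 (higher is better)."""
    enc = gpt2_tok(caption, return_tensors="pt")
    with torch.no_grad():
        nll = gpt2(**enc, labels=enc["input_ids"]).loss   # mean negative log-likelihood per token
    ppl = min(float(torch.exp(nll)), max_ppl)
    return 1.0 - ppl / max_ppl

def image_text_score(image, caption):
    """Cosine similarity between the CLIP embeddings of the image and the caption."""
    inputs = clip_proc(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        img = clip.get_image_features(pixel_values=inputs["pixel_values"])
        txt = clip.get_text_features(input_ids=inputs["input_ids"],
                                     attention_mask=inputs["attention_mask"])
    return float(torch.nn.functional.cosine_similarity(img, txt))
```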
Models
We demonstrated the three zero-shot methods described in Section 2.2 by plugging the loss functions in equations 7, 8, and 9 into ZeroCap as drop-in replacements of its original loss. Specifically, we employed Experts-Summation, which will be referred to as APOLLO-CAP, Expert-Product (APOLLO-CAP-P), and the combination of Decentralization of Guidance Efforts with Expert-Product (APOLLO-CAP-PD).
Baselines We conducted a comparative analysis of our method with the current state-of-the-art technique for generating stylized image captions, namely CapDec (Nukrai et al., 2022). CapDec, a semi-supervised method, relies on training a decoder using stylized text to generate stylized captions. It achieves this by leveraging the shared embedding space of text and images in CLIP. Following CapDec's training protocol, we trained on the SentiCap and FlickrStyle10K datasets until the validation set loss reached a plateau. Additionally, we compared our results to the ZeroCap model (Tewel et al., 2021), which incorporates a style injection manipulation. We implemented three different manipulation techniques: (1) ZeroCap + PM, in which the style is injected into the LLM via prompting. We used the following prompts: for a positive style, "The beautiful image of a"; for a negative style, "The disturbing image of a"; for a humorous style, "The humorous image of a"; and for a romantic style, "The romantic image of a". (2) ZeroCap + IM, in which the style is injected via images into the CLIP model. We perform arithmetic operations on the input image embedding by adding the CLIP embedding of an emoji that represents the desired style (e.g., a smiley emoji for positive sentiment), and subtracting a neutral emoji embedding to discard the attributes belonging to the emoji itself. Finally, we implemented (3) ZeroCap + IPM, a combination of both aforementioned manipulation techniques. Although these methods are based on a zero-shot model, they require careful selection of prompts and images to achieve the desired style.
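The ZeroCap + IM manipulation amounts to simple arithmetic in CLIP's embedding space; a small sketch is given below, where the emoji image files and the final renormalisation step are assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(img_path):
    pixels = proc(images=Image.open(img_path), return_tensors="pt")["pixel_values"]
    with torch.no_grad():
        e = clip.get_image_features(pixel_values=pixels)
    return e / e.norm(dim=-1, keepdim=True)

# Inject positive sentiment: add a smiley-emoji embedding and subtract a neutral one,
# so that attributes of the emoji image itself are discarded.
styled = embed("scene.jpg") + embed("smiley_emoji.png") - embed("neutral_emoji.png")
styled = styled / styled.norm(dim=-1, keepdim=True)   # renormalise before use in ZeroCap
```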
Quantitative analysis
Table 1 shows our results for SentiCap (top table) and FlickrStyle10K (bottom table). The APOLLO-CAP-based models outperformed all baselines in terms of style accuracy across all benchmarks. Although the ZeroCap-based approaches gained the highest TIC scores, they were only partially successful in generating the required style, and in the qualitative test hereafter they performed the worst compared to the other approaches. The results also show that APOLLO-CAP-PD surpassed the state-of-the-art model, CapDec, in style accuracy on all styles, and in TIC on all styles except for the romantic style, while only slightly reducing the fluency score. It is important to note that this minor impact on fluency is acceptable, as a fluency score of 0.8 already indicates a good fluency level. Upon observing the APOLLO-CAP-based results, we can see that APOLLO-CAP-P and APOLLO-CAP-PD achieve significantly higher results in TIC and style accuracy than APOLLO-CAP. APOLLO-CAP-PD outperforms APOLLO-CAP-P on TIC across all styles, but it is unclear which of APOLLO-CAP-P or APOLLO-CAP-PD performs better on style accuracy. Additionally, the fluency scores for all of these approaches are sufficient, exceeding 0.8. The zero-shot methods based on APOLLO-CAP and ZeroCap exhibit larger vocabularies than CapDec, which was trained on the task-specific dataset.
Qualitative Analysis In Figure 2 we present a comprehensive comparison of several approaches: APOLLO-CAP-PD (our leading approach), CapDec, and ZeroCap+IPM. We show results for the styles: positive, negative, humorous, and romantic. When comparing APOLLO-CAP-PD to CapDec, we observed that the former exhibits broader world knowledge in its captions, while the latter focuses mainly on technical details. For example, in the negative caption, APOLLO-CAP-PD identified the scene as a tournament, whereas CapDec provided drier factual information ("a man jumping..."). Moreover, CapDec used mainly common adjectives (e.g., 'dirty') to embed the style. In contrast, APOLLO-CAP-PD provided creative descriptions, such as 'criticized for a lack of energy', which contextualize the style within the narrative. This difference may be explained by CapDec mimicking the limited style displayed in its training data, while APOLLO-CAP-PD leverages a general LLM that naturally implements styles in a storyline.
Ablation Study Figure 3 provides a comprehensive comparison of all approaches, illustrating a positive image caption. CapDec properly described the fact that a woman is cutting a cake and also added positive adjectives, yet it lacked real-world knowledge; in this case, it missed the celebration context. While the ZeroCap approaches captured some relevant details, they also exhibited instances of hallucination, as seen in examples like 'shoes-free wedding' and 'picnic beach'. APOLLO-CAP expressed the celebration of the wedding, and yet it dropped an important part of the content, the cake. In comparison, APOLLO-CAP-P included the important details (the celebration, the wedding, the cake, and relevant style adjectives); however, the fluency is degraded. Finally, APOLLO-CAP-PD met all the criteria: relevance ('wedding', 'cake' and 'night'), style ('perfect'), and fluency, which is reflected in the proper integration of the content words.
Model Ensemble Analysis
We provide a detailed explanation of the optimization process of APOLLO-CAP-PD for an input image of a vase with roses and a desired positive style. In this section, the notations GPT^(0) and GPT^(*), CLIP^(0) and CLIP^(*) refer to Algorithms 1 and 2, respectively; the superscript zero denotes an expert model with its original context vector and the asterisk denotes an expert model with its optimized context vector. After generating the first token, 'The', Figure 4 displays the probability of the top-1 candidate tokens according to each expert. GPT-2^(0) assigns the highest probability to 'last', while CLIP^(0) identified 'love' as the next most likely token, possibly due to the common association of vases with roses in a romantic context. In contrast, roBERTa preferred 'beautiful' as the next token, possibly indicating its frequent use as a positive adjective to start a sentence. After one optimization step, CLIP^(*) maintained its original preferences but also increased the probability of 'beautiful', aligning with both the desired style and the image. Finally, with the combined loss propagation of the GPT-2, CLIP and roBERTa expert models, GPT-2^(*) selected 'beautiful' as the token, which appears to be the most probable choice in this context. This suggests that APOLLO-CAP-PD considers various aspects, including style, image relevance, and text fluency.
AUDIO-AWARE IMAGE CAPTIONING
Model We adapted the stylized image captioning system of APOLLO-CAP-P (Section 2.2) by replacing the style component with an audio counterpart, CLAP (Wu et al., 2023), which assesses the correspondence between text and audio. We projected the audio clip and the candidate captions onto the CLAP embedding space and calculated the cosine similarity between each candidate and the audio embedding vectors. We then replaced the style probability in equation 6 with the audio probability and applied the rest of the algorithm without further changes.
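The audio expert can thus be dropped in for the style expert with only the probability computation changing. The sketch below assumes the laion-clap package and its get_audio_embedding_from_filelist / get_text_embedding helpers; the temperature is again an illustrative choice.

```python
import numpy as np
import torch
import laion_clap

clap = laion_clap.CLAP_Module(enable_fusion=False)
clap.load_ckpt()                     # loads a default pretrained CLAP checkpoint

def audio_probs(audio_file, candidates, tau=0.05):
    """Replace the style distribution (equation 6) with a text-audio distribution:
    cosine similarity between each candidate caption and the audio clip."""
    a = clap.get_audio_embedding_from_filelist(x=[audio_file], use_tensor=False)  # (1, D)
    t = clap.get_text_embedding(candidates, use_tensor=False)                     # (N, D)
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    t = t / np.linalg.norm(t, axis=-1, keepdims=True)
    sims = torch.tensor((t @ a.T).squeeze(-1))
    return torch.softmax(sims / tau, dim=-1)
```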
Qualitative Analysis Figure 5 shows examples of captions generated by APOLLO-CAP-P for images from the test set in the presence of audio featuring kids' laughter. In all images, APOLLO-CAP-P managed to add the context of the audio, laughing; ZeroCap, which does not process audio, does not reflect this context. In the leftmost image, APOLLO-CAP-P reasons that an image with many people walking is a parade, and because the people are wearing very little clothing it turns this into something funny (connected with the audio): an 'underwear parade'. The second image humorously connects the image of a mother eating with the sound of a baby laughing, suggesting that the baby is eating as well, since the mother is eating: "It is Breastfeeding Time". Even when the image seems gloomy, as in the rightmost image, the model manages to generate a caption that connects laughter with the negative sentiment of the image. These results demonstrate our method's ability to process audio and still show scene-level understanding.
Quantitative Analysis Table 2 shows that our APOLLO-CAP-P approach achieves text-audio correspondence (TAC) while maintaining high fluency and text-image correspondence (TIC).
RELATED WORK
In recent years, there has been a shift in modeling towards transformer-based methods (Vaswani et al., 2017), which learn context and process sequential data through their attention mechanism. The next revolution in machine learning came with the rise of foundation models, which are transformer-based models that have been injected with prior knowledge through pre-training on large datasets (Devlin et al., 2018; Lan et al., 2019; Yang et al., 2019; Zan et al., 2022; Kim et al., 2021; Zaheer et al., 2020; Baevski et al., 2020). These models have been shown to perform well on various tasks, domains, and even across modalities (Liu et al., 2023; Yang et al., 2022). Their distinct capabilities mainly depend on their training data; for example, models that are trained on pairs of images and texts demonstrate capabilities in vision and language tasks (Tan & Bansal, 2019; Lu et al., 2019; Radford et al., 2021).
Foundation models' capabilities to perform in zero-shot settings have been utilized in the Socratic approach, which combines foundation models with frozen LLMs and bridges the gap through language, via prompting (Zeng et al., 2022; Tiong et al., 2022; Wang et al., 2022; Tsimpoukelli et al., 2021; Huang et al., 2023; Xie et al., 2022). In contrast to Socratic models, a different approach, which does not rely on prompting, guides the LLMs by tuning the prior knowledge in their attention mechanism with visual cues (Tewel et al., 2021). In our work, we present a generic approach for guiding multiple transformer models through gradient updates, which can be employed across different modalities.
CONCLUSIONS
We propose a modular framework that leverages the expertise of large pre-trained models and jointly solves complex tasks in a zero-shot setting without relying on prompting. Our approach enables decentralized control, allowing models to exchange expertise. We demonstrated our approach on two tasks. Our method achieves state-of-the-art results on two benchmarks for stylized image captioning.
To demonstrate the method's capabilities, we tested its ability to work on audio by introducing the novel task of audio-aware image captioning, in which an image and audio are given and the task is to generate text that describes the image within the context of the provided audio.
Figure 6 showcases supplementary results obtained using our model, APOLLO-CAP-P, on six test images from the SentiCap dataset. Each image is accompanied by audio featuring children's laughter.
To underscore the impact of the audio, we have included ZeroCap's results, which were generated for the images without audio.
Figure 2: Examples of our APOLLO-CAP-PD compared to SOTA models.
Figure 6: APOLLO-CAP-P caption examples for images and audio clips featuring children's laughter.
Table 1: Averaged scores for CapDec, ZeroCap manipulations and APOLLO-CAP approaches. ZeroCap best fits the image content, but fails to generate style with only input manipulations. CapDec, as a semi-supervised model for image captioning, shows fluent language but also weaker style realization. APOLLO-CAP-PD outperforms the other approaches in the total image-text-style matching trade-off.
Table 2: Averaged scores for ZeroCap and APOLLO-CAP-P with laughter audio content on 50 images from the SentiCap (Mathews et al., 2016) test set. TIC: text-image correspondence; TAC: text-audio correspondence.
We demonstrate our approach's ability to generalize to other modalities by introducing a novel task, Audio-Aware Image Captioning, that integrates audio into image captions. The input of the task is both an image and an audio clip, and the task is to generate text that describes the image within the context of the provided audio. Data The test set contains 50 randomly sampled images from the SentiCap (Mathews et al., 2016) test set, and the validation set contains five images from the SentiCap validation set. For all images, we included an audio clip of children's laughter that we collected from https://freesound.org.
Table 3: APOLLO-CAP hyper-parameters used for SentiCap
A.4 QUALITATIVE RESULTS FOR AUDIO-AWARE IMAGE CAPTIONING | 2023-10-31T06:41:16.782Z | 2023-10-25T00:00:00.000 | {
"year": 2023,
"sha1": "9b117b2cbeb09b56e5134b490b5b290980206477",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9b117b2cbeb09b56e5134b490b5b290980206477",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
252257024 | pes2o/s2orc | v3-fos-license | The Resilience of Polar Collembola (Springtails) in a Changing Climate
Highlights
• Based on an extensive review of the available literature, polar Collembola have high levels of genetic diversity, wide thermal tolerance ranges, physiological plasticity, generalist-opportunistic feeding habits and considerable capacity for behavioural avoidance.
• The resilience of polar Collembola to climate change is predicated on their ability to resist environmental perturbation and ability to recover following any disturbance.
• Inherent characteristics of polar Collembola may enhance resilience of polar taxa in response to climate change.
• The recovery of taxa following disturbance in the Arctic is likely to be more positive relative to the Antarctic.
Introduction
Polar regions are experiencing rapid and extreme climatic changes (Hogg and Wall, 2011; Convey and Peck, 2019; Siegert et al., 2019). These changes modify the abiotic environment, and terrestrial habitats are already experiencing rising and increasingly variable temperatures (Convey et al., 2018; Koltz et al., 2018a; Clem et al., 2020). Concurrently, patterns of precipitation are changing, resulting in decreased winter snowpack with decreased insulation of soils, increased summer snow melt, and extended biologically active periods (Schmidt et al., 2019). Thawing permafrost and melting glaciers further modify summer water availability (Nielsen and Wall, 2013; Everatt et al., 2015). Glacial retreat will expose new habitat, with ice-free areas in Antarctica predicted to increase by 25 % above current levels by 2100 (Lee et al., 2017). Many of these changes, such as increased maximum temperatures, can cause physiological stress for polar inhabitants, as they exceed the parameters under which the biota has evolved, particularly in Antarctica. Conversely, some changes may benefit polar biota, where increased temperatures and productivity support diverse multitrophic communities including migratory invertebrate-feeding birds (Wirta et al., 2015).
Fig. 2. Polar Collembola life forms: A) an epiedaphic (surface-dwelling) taxon showing elongated appendages including legs, antennae, furcula (indicated by blue arrow), and 'ventral tube' (collophore, visible between the second and third pairs of legs); B) an eudaphic taxon (Tullbergia mediantarctica) from the southern Transantarctic Mountains, showing lack of pigmentation or eye spots, short appendages and absence of a furcula; and C) an Antarctic hemiedaphic (intermediate soil profile) taxon from the Antarctic Dry Valleys (Gomphiocephalus hodgsoni), showing reduced appendages and furcula (indicated by blue arrow). Scale bars (500 µm) are shown for each taxon. All images copyright University of Waikato.
Collembola (springtails; Fig. 2) are among the most abundant and widely-distributed arthropods globally and are central to nutrient cycling in most soils, particularly in species-poor polar ecosystems (Danks, 1990; Krab et al., 2013; Hogg et al., 2014). There are 12 species of Collembola on the Antarctic continent, all of which are endemic (Wise, 1967; Wise, 1971). Higher species richness is found in the sub-Antarctic, with 34 species on Macquarie Island alone (Phillips et al., 2017). The Arctic supports at least 420 species of Collembola (only 14 of which are known endemics; Danks, 1990; Hodkinson et al., 2013). Collembola have been considered as indicators of environmental change (Danks, 1992; Rusek, 1998; Hopkin, 1997; Ponge et al., 2003), and are recognised bioindicators of overall soil health (Greenslade, 2007; Nakamori et al., 2010). Here, we consider the consequences of climate change for polar Collembola, which can also provide insights into the resilience of polar terrestrial ecosystems more generally.
Climate change is primarily a press perturbation ( sensu Bender et al., 1984 ) and will result in a suite of potentially interacting stressors ( Fig. 3 ). 'Resilience' is narrowly defined as the product of a species' ability to survive (its resistance) and its ability to recover from perturbation (see Box 1 ;Holling, 1973 ), and provides an holistic framework for determining how populations will respond to change (see review by Nimmo et al., 2015 ). Resilience can be influenced by both intrinsic and extrinsic factors ( Nimmo et al., 2015 ). Intrinsic characteristics that promote resistance typically include physiological and behavioural traits (and their plasticity) at the individual level, whilst dispersal and recolonization abilities as well as population-level reproductive rates influence recovery potential ( Nimmo et al., 2015 ;Hughes et al., 2019 ). Extrinsic factors such as the presence of vegetation or biotic interactions can moderate organismal responses to change, to produce contrasting responses among populations or species with otherwise similar resistance capacities ( Nimmo et al., 2015 ). Understanding the factors that influence polar species' resistance and recovery capacities will allow for interpretation of potential responses within a resilience framework and could be used to inform management and conservation decisions ( Oliver et al., 2015 ;Convey and Peck, 2019 ).
Studying (and ultimately predicting) organismal responses to environmental change in polar regions remains confounded by logistic constraints (e.g. seasonally-restricted access to sites), patchy baseline data, especially in the Arctic (Nielsen and Wall, 2013), and limited research on the influence of extrinsic factors on resistance capacities of individual organisms. Extrinsic factors, such as biotic interactions (Koltz et al., 2018b; Caruso et al., 2019), natural variability in environmental conditions, and interacting stressors (Kaunisto et al., 2016), can all exacerbate or ameliorate stress. However, few studies have explicitly compared the resistance capacities of Collembola, even for genetically isolated populations of broadly-distributed species (see Sengupta et al., 2016 ; , for two notable exceptions).
Box 1. Definitions of key terms in resilience (after Ingrisch and Bahn, 2018).
Resistance: the capacity to limit the impact of a disturbance and maintain survival.
Recovery: the capacity to return to a pre-disturbance state or an alternative stable state.
Resilience: the capacity to resist disturbances and recover accordingly.
The genome encodes the physiological and behavioural responses of an individual and provides the raw material for evolution. A genetically-diverse population is therefore more likely to include individuals with genotypes that can respond to novel conditions, improving resistance (Somero, 2010). Overall, the capacity to evolve in natural systems within the timescales of climate changes depends on genetic variation within and among populations and the ability to buffer environmental changes through physiological and behavioural responses (Somero, 2010; Sunday et al., 2014; Marshall et al., 2020). Physiological and behavioural plasticity operate in concert and allow organisms to endure environmental extremes. Although polar marine organisms appear to lack such plasticity (Buckley and Somero, 2008), terrestrial organisms are usually physiologically plastic, and can change their environmental tolerances over scales of hours and days to seasons (Teets and Denlinger, 2013; Sinclair et al., 2015).
Fig. 3. Conceptual diagram showing a hypothetical and fluctuating press disturbance (A) and corresponding ecological response (B) resulting from climate change. Once maximal tolerance limits (maximum resistance capacity) are exceeded, steep declines in associated ecological responses such as fecundity or abundance are expected (B). Increased tolerance levels increase initial resistance (C), which may delay ecological responses (D). The fluctuating nature of almost all climates allows opportunities for recovery ( * ) which can influence ongoing resistance capacities.
Furthermore,
mobile animals can modify their exposure to stress, or even avoid it altogether. Life forms of Collembola are often classified depending on where in the soil profile they reside, which impacts their exposure to stressors. Epiedaphic Collembola ( Fig. 2 A) are surface-dwelling; whereas euedaphic species ( Fig. 2 B) live deeper in the soil profile; and hemiedaphic species ( Fig. 2 C) are intermediate between these extremes (Christiansen, 1964;Hopkin 1997 ). However, beyond determining fine-scale distribution (e.g. Hertzberg et al., 1994 ;Hayward et al., 2004 ;Sinclair, et al., 2006a ;Caruso et al. 2010 ), behavioural plasticity of polar arthropods, including the ability to avoid stress, has received less attention than other ecological characteristics (e.g. Krab et al., 2013 ;Sengupta et al., 2017 ;Koltz et al., 2018b ).
Any recovery of polar biota following perturbation will depend on fecundity, life history and the likelihood of available habitat being recolonised (i.e. dispersal characteristics) Hågvar and Pedersen, 2015 ;Oliver et al., 2015 ). Recovery potential intersects with resistance to shape overall resilience ( Fig. 3 ). For example, if populations have low resistance but high recovery potential then the risk of extinction is minimised (and overall resilience is increased). However, if populations have high resistance but low recovery potential, they may persist until a stress event occurs at the margins of tolerance, from which they cannot recover (decreased overall resilience) ( Nimmo et al., 2015 ). Further, if genes underlying the ability to resist and recover from environmental changes are negatively correlated with each other, one strategy (resistance or recovery) may dominate at the expense of the other. It is therefore important to consider that if population size recovers to pre-disturbance levels but genetic variation is lost (i.e. a genetic bottleneck), the resistance and resilience of the population to continued and future environmental changes will likely decline ( Oliver et al., 2015 ). One exception to this would be if a bottleneck event resulted in directional selection that increased resistance (e.g. selection for warm-adapted individuals in a warming climate, provided they can still survive cold winters).
We examine the potential resilience of polar Collembola to a changing climate and focus on three aspects that we think may drive resistance: 1) genetic diversity; 2) physiological tolerances and behavioural plasticity; and 3) biotic and ecological responses. We then discuss the recovery potential of polar Collembola. Together, this allows us to identify key traits underlying resistance and recovery potential as well as recommend profitable research avenues integrating this information to more fully understand resilience.
Genetic diversity and adaptive capacity
Genetic diversity provides the raw material for evolution and thus determines the probability that a population will persist by having genetic variants (individuals) capable of surviving an environmental perturbation ( Hoffmann et al., 2003 ;Somero, 2010 ). Mutation is the ultimate source of that genetic variation ( Sung et al., 2012 ). However, the rate of beneficial mutations is almost certainly too slow relative to the timescales of rapid climate change ( Thomas et al., 2010 ;Sung et al., 2012 ;Lynch et al., 2016 ). Polar Collembola evolve and mutate more slowly than their temperate counterparts because they take longer to reach maturity and have lower reproductive rates ( Convey, 1996 ;Thomas et al., 2010 ). Selection, meiotic recombination, and dispersal (gene flow) are therefore the main factors likely to maintain genetic variation of polar Collembola during rapid decadal climate changes. Selective responses to severe environmental changes can be rapid ( < 10 generations) in the laboratory (experiments using Drosophila : Gibbs et al., 1997 ;Gefen et al., 2006 ), and meiotic recombination facilitates the potential selection of individuals most suited to the local environment ( Becks and Agrawal, 2010 ). Most Antarctic Collembola reproduce sexually ( Janetschek, 1967 ;Wise, 1967 ;Peterson, 1971 ). However, soil-dwelling species are often asexual, which means that more taxa may be asexual in the Arctic where there is greater soil development and higher species diversity ( Chernova et al., 2010 ;Bokhorst et al., 2018 ). If individuals can survive acute climate change effects, rising temperatures could shorten generation times leading to increased reproductive rates and accelerating local adaptation by increased mutation and potential selection ( Birkemoe and Leinaas, 1999 ;Thomas et al., 2010 ;Sengupta et al., 2017 ). Assessing genetic diversity within and among populations can help to determine which populations have greater genetic variation and hence individuals with phenotypes potentially better able to respond to changing environmental conditions.
Current knowledge of genetic diversity in polar Collembola
Historical geological and glaciation events have structured the genetic diversity of polar Collembola (Ávila-Jiménez and Coulson, 2011; Collins et al., 2020). The long-term isolation of the Antarctic continent has elicited high levels of endemism (Pugh and Convey, 2008), in contrast to the Arctic, which has a higher degree of physical connectivity with lower latitudes and lower levels of endemism (Ávila-Jiménez and Coulson, 2011). Within Antarctica, large-scale geographic barriers such as glaciers and mountain ranges have prevented dispersal, resulting in highly-structured and genetically-differentiated populations (Nolan et al., 2006; McGaughran et al., 2008). Collembola are susceptible to desiccation and unlikely to routinely disperse aerially over long distances, although sporadic dispersal events can occur (Coulson et al., 2002a; Sinclair and Stevens, 2006; Hawes et al., 2007; Vega et al., 2020). Over long distances, Collembola likely disperse by rafting on the surface of inland and coastal waters (Hawes, 2011; Collins et al., 2020). Consequently, estimates of divergence times between populations of Antarctic species often correlate with periods of ice sheet collapse and open seaways (Bennett et al., 2016; Collins et al., 2020). The Arctic fauna has also been influenced by historical glacial cycles, with ocean currents facilitating dispersal and recolonization (Ávila-Jiménez and Coulson, 2011).
Genetic variation is generally high in polar Collembola ( Porco et al., 2014 ;, and might confer greater resistance relative to less diverse populations ( Hawes et al., 2010 ). Knowledge of genetic diversity for polar Collembola has been largely informed by variation in sequence fragments of the mitochondrial cytochrome c oxidase subunit I (COI) gene ( Costa et al., 2013 ;Pentinsaari et al., 2020 ). COI data have been used to reconstruct phylogeographic patterns and infer historic levels of population connectivity ( Stevens et al., 2007 ;. COI data are available for all known continental Antarctic species including ten Ross Sea Region species ( Beet et al., 2016 ;Collins et al., 2020 ), two from the Antarctic Peninsula (one of which has been recently redescribed from Friesea grisea to F. antarctica ) Torricelli et al., 2010 ;Carapelli et al., 2020 ) and one from East Antarctica ( Stevens and D'Haese, 2014 ). COI data indicate that genetic variation within some species may be high (greater than 2 % intraspecific pairwise divergence) with sequences differing by 1.7-14.7 % between populations of seven Antarctic species . By way of comparison COI sequencing of 33 species found across a 330 km range in temperate Estonia revealed that 57 % of species had intraspecific divergences of < 2 %, and 81 % of species had divergences < 5 % ( Anslan and Tedersoo, 2015 ). Despite the prevalence of Holarctic Collembola, there are high levels of localised COI diversification and potential cryptic speciation in the Arctic ( Hogg and Hebert, 2004 ;Porco et al., 2014 ;Pentinsaari et al., 2020 ). Genetic variation within the mitochondrial COI gene can also reflect genetic variation across the wider genome ( Stevens and Hogg, 2003 ;Monsanto et al., 2019 ). For example, genome-wide single nucleotide polymorphisms (SNPs), were used to corroborate the three genetically distinct regional populations of Cryptopygus antarcticus antarcticus previously identified by COI sequencing 2019 ). Thus, spatially explicit surveys using COI sequences likely provide an informative baseline for detecting climate change-induced alterations in diversity and distribution patterns.
Potential impacts of climate change on genetic diversity
Climate change will likely influence extant genetic diversity in four complex and interacting ways. First, changes in mean and maximum temperatures will likely increase mortality, directly leading to genetic bottlenecks ( Convey et al., 2018 ). Second, increased population connectivity could simultaneously threaten the unique diversity borne of isolation while increasing recovery and recolonisation following bottlenecks in the Antarctic and Arctic. Genetic bottlenecks can have persistent effects on population dynamics depending on the sex-ratios of surviving individuals with small populations more at risk of genetic drift and decreased fitness of individuals (the Allee effect, e.g. Courchamp et al. 2008 ), increasing the likelihood of localised extinctions ( Oliver et al., 2015 ). Third, warmer temperatures may facilitate the spread of invasive species into areas of the sub-Antarctic, Antarctic Peninsula and High Arctic. These invasive species may outcompete local populations and decrease overall genetic diversity ( Chown et al., 2012 ;Phillips et al., 2017 ;Hughes et al., 2020 ). Finally, altered connectivity arising from increased habitat availability ( Lee et al., 2017 ) and enhanced dispersal opportunities (via meltwater and open seaways) ( Collins et al., 2020 ) could directly affect existing genetic diversity (via altered rates of gene flow) and mediate the impacts of other processes including genetic bottlenecks.
Climate change is likely to decrease extant genetic diversity through more frequent genetic bottlenecks, increased competition, and homogenisation of divergent populations via increased connectivity (Baird et al., 2019). Warmer conditions will increase the hydrological connectivity among habitats, enhancing dispersal opportunities (e.g. rafting) between sites in addition to human-mediated transfer of native and non-native species (Baird et al., 2020). Many genetically divergent Antarctic populations have been isolated for thousands to millions of years and are likely adapted to their local, and microscale, environment (Convey and Peck, 2019; Siegert et al., 2019; Collins et al., 2020). Increased gene flow from larger source populations could disrupt co-adapted gene complexes, purging unique genetic diversity, and decreasing the ability of populations to respond to future perturbations (Case and Taper, 2000; Convey and Peck, 2019; Siegert et al., 2019; Gutt et al., 2021). By contrast, increased connectivity between less-divergent populations could introduce new and favourable alleles to enhance individual fitness levels (Costa et al., 2013; Nielsen and Wall, 2013). Increased population connectivity could also improve recovery from bottlenecks by increasing population density and genetic diversity (Hertzberg et al., 1994; Jangjoo et al., 2016). Although data on polar Collembola are lacking, experiments on other invertebrates are beginning to reveal that low levels of gene flow can increase a population's adaptive potential, while the hybridization of long-term isolated populations has only a limited or negative effect on adaptability (Swindell and Bouzat, 2006; Hudson et al., 2021; Hoffmann et al., 2021). Overall, an increase in connectivity and gene flow is only likely to negatively impact the adaptability of unique isolated Antarctic populations (Gutt et al., 2021).
Arctic Collembola are more connected at the landscape scale than their Antarctic counterparts. Consequently, many Arctic taxa have Holarctic distributions (Ávila-Jiménez and Coulson, 2011; Hodkinson et al., 2013). However, the effects of warming on heavily-glaciated Arctic regions such as Greenland are likely to be similar to those experienced in Antarctica. Much of the Arctic is already unglaciated and hydrologically connected, particularly in tundra areas where water has infilled thermokarst (irregular pocked surface landscapes arising from frost heaving and melting permafrost) (Abnizova and Young, 2010). This increased connectivity has also disrupted soil structure and soil community stability stemming from permafrost degradation (Farquharson et al., 2019). Available genetic data indicate low levels of population differentiation (high connectivity) for many species of Arctic Collembola (Hogg and Hebert, 2004; Porco et al., 2014). For example, low levels of population differentiation were observed for 38 of the 45 morphologically identified species in the Canadian sub-Arctic (Churchill, Manitoba) (Porco et al., 2014), and 17 of the 19 species found in the Canadian mid to high Arctic (Hogg and Hebert, 2004). This high level of population connectivity could mitigate the impacts of permafrost degradation and thermokarst expansion by increasing recolonisation rates (Nielsen and Wall, 2013).
Intensive sampling and COI sequencing of Antarctic Collembola have revealed inherent levels of genetic variation, population connectivity and evolution. By contrast, although Arctic species and habitats are diverse, few Arctic Collembola have been sampled and sequenced. Improving baseline measures of Arctic genetic diversity and distribution will be critical for detecting future climate change impacts. Expanding knowledge of the genetic basis of key traits underlying the distributions of polar Collembola will help determine their ability to evolve and resist disturbance. In particular, mitogenome sequences currently exist for several Antarctic species (see Carapelli et al., 2019; Monsanto et al., 2019), and genome-wide studies are now tractable and affordable (Hohenlohe et al., 2021). These could be used to estimate the rates of evolution (and therefore adaptive potential) in polar populations (Faddeeva-Vakhrusheva et al., 2016; Wu et al., 2017) as well as to identify the resistance potential of Collembola, for example by allowing sequence-level analysis of stress response genes such as heat shock proteins (Michaud et al., 2008; Cucini et al., 2021). Ultimately, connectivity, selection, and distribution need to be monitored temporally to understand the changing genetic diversity and resilience of polar Collembola.
Physiological tolerances and behavioural plasticity enable survival in extreme and highly variable environments
Polar Collembola are exposed to extreme environmental conditions, including winter temperatures below -30 °C (Velasco-Castrillón et al., 2014), daily temperature fluctuations of 30 °C or more (Worland and Convey, 2001; Sinclair et al., 2003), and surprisingly high microclimate temperatures in summer (Sinclair et al., 2006b; Convey et al., 2015; Convey et al., 2018). Arid polar desert conditions and high solar radiation further expose Collembola to intense ultraviolet light (Hawes et al., 2012; Beresford et al., 2013) and high desiccation stress (Sinclair et al., 2006b; Elnitsky et al., 2008; Holmstrup, 2018b). Collembola are soft-bodied and respire through a permeable exterior cuticle, which makes them susceptible to desiccation, although susceptibility varies within and among species (Hertzberg and Leinaas, 1998; Worland and Block, 2003; Aupic-Samain et al., 2021). Determining how resistant polar Collembola will be to climate changes requires an integration of their exposure to extremes, the proximity of their physiological limits to current environmental extremes, and consideration of the effects of interacting environmental changes (Somero, 2010).
Behaviour and microhabitat occupation modifies exposure to environmental stressors
Environmental extremes only affect animals that are exposed to them. Thus, behavioural avoidance of environmental stressors is a key response to climate changes ( Sunday et al., 2014 ;Kovacevic 2019 ). For example, Collembola can avoid exposure to a range of soil pollutants ( Boitaud et al., 2006 ;Boiteau, Lynch, and MacKinley 2011 ;Zortéa et al., 2015 ), and the fine-scale distribution of Collembola in Antarctica is associated with soil moisture content ( Hayward et al., 2004 ;Sinclair et al., 2006a ). Unfortunately, not all environmental stressors can be easily avoided at the scale of an individual collembolan and the role of behaviour in mitigating natural environmental stressors is still poorly understood ( Boiteau and MacKinley, 2013 ;Krab et al., 2013 ).
Collembola can generally evade stress by microhabitat selection, for example migrating deeper into the soil, or moving among surface microhabitats (Hertzberg et al., 1994; Sinclair et al., 2006a; Krab et al., 2013). Thus, effective behavioural avoidance requires an ability to move, and the availability of suitable habitat. Mobile surface-dwelling species (which may have longer limbs and furculae) can walk faster and "spring" away from disturbances, but may be unable to burrow effectively in the soil pack (Hopkin, 1997; Krab et al., 2013). Deeper-dwelling species are less mobile, although they are inherently better buffered from environmental extremes and thus, initially, more resistant to climate changes (Hopkin, 1997; Detsis, 2000; Ponge, 2000). Collembola that reside deeper in the soil profile tend to be less heat tolerant (Kovacevic, 2019), and may be killed when stressors penetrate deeper into the soil profile (Thakur et al., 2017).
In summer, migrating into the soil profile or under large rocks buffers temperature extremes ( Huey et al., 1989 ;Huey et al., 2021 ), while simply moving into the shade is enough to mitigate UV, heat and desiccation stress ( Hawes, Marshall, and Wharton 2012 ;Dahl et al., 2017 ;Asmus et al., 2018 ). Collembola clearly take advantage of microhabitat variation: in a mesocosm experiment involving sub-Antarctic Collembola, abundance in deeper soil layers increased by 75 % during a heat wave event, and this response was exacerbated by drought ( Kovacevic, 2019 ). Experimental warming in a sub-Arctic peatland reduced collembolan densities at the soil surface, although downward migration of larger surface-dwelling species (e.g. see Fig. 2 A) was not detected, perhaps due to difficulties of moving through the smaller soil pore sizes (and often waterlogging) found with increasing depth in many of these habitats ( Krab et al., 2013 ).
Behavioural avoidance is easier for Collembola in the sub-Antarctic and Arctic where soils are more developed and vegetative communities are more abundant and diverse, providing greater variation in microhabitats ( Coulson et al., 2003 ;Wilhelm et al., 2011;Boike et al., 2018). For example, temperatures exceeded 0°C only 74 times across the summer at a polar desert site in Svalbard, whereas nearby vegetated sites had more than 120 days above 0 °C ( Convey et al., 2018 ). Of course, more heavily vegetated sites also carry additional risks of competition and predation, demonstrating a trade-off that possibly drives occupation of the more extreme sites ( Coulson et al., 2003 ;Convey et al., 2018 ). On the Antarctic continent and in High-Arctic polar deserts, vegetation is scarce and soil development is limited, with shallow top-soils (often only a few centimetres in depth) underlain by permafrost, which limits access to the soil column (Bockheim et al., 2007;Seppelt et al., 2010).
Capacity for behavioural avoidance is truncated in winter. At night (at sub-polar latitudes in the summer) and during the polar winter, there is no insolation to heat some microhabitats, and soil with shallow permafrost permits no downward escape from sub-freezing temperatures (Sinclair and Sjursen, 2001a). Over winter, snow cover provides considerable buffering from extreme air temperatures and the worst desiccation stress (Gooseff et al., 2003; Pauli et al., 2013). Snow also accumulates differentially by aspect or in patterned ground (e.g. Gooseff et al., 2003; Scott et al., 2008), which means that some microhabitat variation is still available. It is often assumed that polar Collembola are inactive and immobile over winter, where critical thermal minima can be below -10 °C (e.g. Sinclair et al., 2006b). However, Collembola are active under the snow in the (very cold) Canadian prairies (Aitchison, 1979), which suggests that there may be winter activity (and therefore capacity for microhabitat selection) in at least some polar environments.
Climate changes are introducing more extreme and variable fine-scale environmental stressors, and it is uncertain whether collembolan behaviour will be effective in avoiding these stressors or novel combinations thereof (Høye et al., 2021). Furthermore, there are complications in evaluating the relationship between microhabitat selection and physiological tolerances and plasticity (see Hawes et al., 2008). Surface-dwelling species are likely to have an advantage in behaviourally avoiding adverse environmental conditions associated with climate change due to an increase in vegetation/microhabitats, improved mobility, and capacity to move between optimal microhabitats (see Kutcherov et al., 2020 for an example of habitat change rapidly modifying collembolan responses to temperature in Iceland). By comparison, deeper-dwelling species may be less exposed, initially, to changing temperature. However, they may also be exposed to large changes in hydrology and permafrost. Eventually, the full soil profile will change, challenging even the deeper-dwelling soil taxa. Unfortunately, very little is known about behavioural responses to environmental conditions in Collembola, and almost nothing about the responses of polar soil-dwellers; filling these gaps is essential for predicting the exposure to (and avoidance of) changing environmental conditions, and building a framework for interpreting resilience.
Tolerance of environmental extremes requires plasticity
Physiological tolerances determine survival when environmental extremes cannot be avoided. These tolerances are often divided into basal tolerances (the steady-state tolerance) and plasticity (the extension of those tolerances in response to changing environmental conditions; Somero, 2010 ). However, in nature, even basal tolerances change throughout the year ( Cannon and Block, 1988 ) and extreme temperatures have been a significant evolutionary pressure on the evolution of Collembola ( Zizzari and Ellers, 2014 ;Carapelli et al., 2019 ). As thermal variability and the frequency of extreme temperatures increases with climate change ( Meredith et al., 2019 ), we expect that extreme thermal tolerances will remain key to resistance and resilience of polar springtails.
All Antarctic (and most Arctic) Collembola appear to be freeze-avoidant, keeping their body fluids liquid at sub-freezing temperatures via ice-binding (i.e. antifreeze) proteins (Sinclair and Sjursen, 2001; Graham et al., 2020) and by accumulating small molecules such as glycerol (Cannon and Block, 1988; Sømme, 1999). Some Arctic species use cryoprotective dehydration, relying on external ice to remove body water, thereby concentrating the remaining body fluids and preventing them from freezing (Holmstrup and Sømme, 1998; Worland et al., 1998; Sørensen and Holmstrup, 2011; Holmstrup, 2018a). Cold tolerance strategies are not well studied except for a few species, so alternative strategies may yet be discovered. For example, freeze-tolerant mites, also previously presumed to be exclusively freeze-avoidant, have been reported from temperate Canada (Anthony and Sinclair, 2019). The lethal temperatures of cold-hardy, freeze-avoidant Collembola coincide with the supercooling point (SCP, the temperature at which they freeze). The SCPs of Antarctic Collembola can be very low; for example, a minimum of -38 °C in early spring for Gomphiocephalus hodgsoni on Ross Island (Sinclair and Sjursen, 2001a). Thus, native Collembola appear well-equipped to survive the polar winter. Importantly, warm winters do not necessarily reduce cold-related mortality. Mid-winter snow melt exposes Collembola to extreme cold temperatures, which means that climatically warmer winters can increase cold stress (Coulson et al., 2000; Bokhorst et al., 2012; Williams et al., 2015).
In summer, polar collembolan SCPs are often bimodal, with a high group (putatively those with food in their guts; Sømme, 1986) whose SCPs can be as high as -2 °C, and a low group (those moulting or with empty guts) with SCPs 10 °C or more lower (e.g. Cannon and Block, 1988; Worland et al., 2006). Individual Collembola appear to be able to shift between these groups in a matter of hours (Worland and Convey, 2001; Sinclair et al., 2003; Worland, 2005). This plasticity is critical to allow feeding and also to ensure survival of low summer temperatures and freeze-thaw cycles (Sinclair et al., 2003). The frequency of such freeze-thaw cycles may increase with climate change (Nielsen and Wall, 2013). Collembola (particularly small deeper-dwelling individuals) are more sensitive to freeze-thaw cycles than mites, which could lead to community-level changes (Coulson et al., 2000; Bokhorst et al., 2012).
In summer, bare ground and (in some places) dark rocks in polar regions can capture a surprising amount of heat from the sun. Heat tolerances have been less commonly measured, but reported high-temperature thresholds for polar Collembola range from 34 to 40 °C (Hodkinson et al., 1996; Sinclair et al., 2006b; Everatt et al., 2013; Everatt et al., 2014). This suggests that many Collembola may have thermal tolerances similar to their non-polar counterparts: upper functional thermal limits of Australian and South African Collembola range from 30-45 °C (Janion-Scheepers et al., 2018; Liu et al., 2020). Regardless, microclimate temperatures in some microhabitats at Cape Hallett, Antarctica, regularly exceeded the critical thermal maximum of two of the three Collembola species at the locality, highlighting their potential vulnerability to continued warming (Sinclair et al., 2006b). Further studies have also suggested that there is little plasticity and acclimation capacity in heat tolerances (Slabber et al., 2007; Everatt et al., 2013; Janion-Scheepers et al., 2018; Phillips et al., 2020). High temperature tolerance can decline rapidly in dry conditions or during long exposures (Hertzberg and Leinaas, 1998), suggesting that these acute measures probably underestimate the risk of high temperature exposure. Climate change is expected to yield longer periods of more extreme temperatures in both the Arctic and Antarctic (Meredith et al., 2019), although at the highest latitudes, changes in cloud cover (increasing cloudiness, particularly over areas of sea ice retreat) will likely have the greatest impact on surface temperatures (Meredith et al., 2019). Even sub-lethal warming could increase the time spent above optimum temperatures for growth, with corresponding reductions in fecundity (Sweeney and Vannote, 1978).
Collembola can rapidly increase their heat tolerance through the heat shock response, largely mediated by heat shock proteins ( Escribano-Álvarez et al., 2022 ;Sørensen et al., 2003 ). For example, heat survival of Orchesella cincta increases by > 60 % after only an hour at 35°C ( Bahrndorff et al., 2009 ). This improved thermotolerance can persist for two days, thereby improving resistance to future thermal extremes and stochasticity ( Bahrndorff et al., 2009 ). However, the heat shock response is energetically expensive and can reduce subsequent activity, foraging, reproduction and development ( Zizzari and Ellers, 2011 ;Klepsatel et al., 2016 ). Thus, induction of heat shock at sub-lethal temperatures, and repeated and fluctuating temperatures ( Marshall and Sinclair, 2012 ;Colinet et al., 2015 ;Dillon et al., 2016 ), could have long term consequences on individual performance and population dynamics. Understanding the performance implications of real-world temperature regimes remains a challenge for any ectotherm ( Dillon et al., 2016 ), and is especially relevant for contextualising existing thermal tolerance data in a resilience framework.
Water balance is critical for polar Collembola
Desiccation susceptibility determines microhabitat selection and local distribution in both the Antarctic (Hayward et al., 2004; Sinclair et al., 2006a; 2006b) and Arctic (Hertzberg and Leinaas, 1998). Surface-dwelling Collembola are generally more resistant to desiccation and exhibit a lower water loss rate (Kaersgaard et al., 2004; Lindberg and Bengtsson, 2005; Makkonen et al., 2011). By comparison, soil-dwelling species usually have more permeable integuments and are less resistant to desiccating conditions (Aupic-Samain et al., 2021). Under experimental conditions exposing Collembola to dry air, smaller individuals tended to be more sensitive to desiccation stress due to their larger surface area to volume ratios (Hertzberg and Leinaas, 1998). However, dry air is not necessarily representative of drought conditions within soils. For example, Hilligsø and Holmstrup (2003) found that drought conditions within a simulated soil environment did not have a disproportionate effect on (temperate) Folsomia candida juveniles or smaller individuals. Experimentally reducing water availability considerably reduced collembolan density in both the sub-Arctic (Makkonen et al., 2011) and sub-Antarctic (McGeoch et al., 2006; but see, e.g., Aupic-Samain et al., 2021 and Holmstrup et al., 2013 for equivocal responses in temperate regions). Like most other traits, collembolan desiccation tolerance is plastic. For example, pre-exposure of Antarctic Cryptopygus antarcticus to mild desiccating conditions improved desiccation survival by 35 %, and the Arctic Megaphorura arctica (formerly Onychiurus arcticus) is inherently desiccation tolerant (Hodkinson et al., 1994; Worland, 1996; Holmstrup and Sømme, 1998). While desiccation survival can be high in natural conditions, collembolan reproduction can be highly sensitive to declines in soil water potential (Wang et al., 2022). Given its clear importance in determining distribution, influencing reproduction, and shaping responses to climate change, the mechanisms underlying variation in water balance and desiccation susceptibility must be a priority for future physiological investigations in polar Collembola.
Modelling the impacts of large-scale climate changes on local-scale soil moisture levels is critical to predicting the resilience of polar Collembola. Precipitation models are complex and highly variable on several spatial scales with predictions involving both increases and decreases in snowfall and precipitation depending on the region and landscape dynamics (see review by Box et al., 2019 ). In the Arctic, an increase in desiccation stress is likely to arise in areas with permafrost degradation (thaw increases soil drainage and drying) and where increased soil evaporation (from warmer temperatures) is not offset by rates of precipitation ( Box et al., 2019 ). Furthermore, winter warming (Arctic winter temperatures have risen by 3.1 °C since 1971; Box et al., 2019 ) is contributing to declines in snowpack accumulation which impacts not only summer soil moisture levels but the thermal buffering of overwintering Collembola ( Box et al., 2019 ;Høye et al., 2021 ). In the Antarctic, soil moisture will predominantly be influenced by winter snow accumulation and increased glacial melt ( Convey and Peck, 2019 ). Snow cover itself varies considerably with microtopography at a scale that directly influences soil moisture and Collembola distribution ( Sinclair and Sjursen, 2001b ;Sinclair et al., 2006a ). Thus, it is very challenging to translate global-or regional-scale changes in precipitation to population-level impacts. Polar areas with intermediate warming and sufficient soil moisture are likely to see dramatic increases in Collembola abundance ( Convey and Peck, 2019 )
Environmental stressors interact in nature and vary among populations
In nature, environmental stressors interact with one another to either exacerbate or mitigate the stress (Todgham and Stillman, 2013). Understanding these interactions requires extensive multifactorial experiments (e.g. Brennan and Collins, 2015). Although frameworks exist to predict the outcome of interactions among stressors a priori based on shared mechanisms in a comparative phylogenetic context (e.g. Kaunisto et al., 2016), they remain to be applied in a polar context. We can, however, identify clear interactions based on our existing knowledge, and at least take them into account when considering Collembola responses to climate change.
For example, desiccation co-occurs and interacts with responses to other stresses. At high temperatures, vapour pressure deficit and therefore water loss rates increase, and cuticular hydrocarbons can melt, further exacerbating cuticular water loss (Chown and Nicholson, 2004). Thus, high-temperature mortality in terrestrial arthropods is often a product of thermal stress per se and water loss (Chown et al., 2011). In Antarctic Collembola, more heat-tolerant species are often also more desiccation resistant (Sinclair et al., 2006b), although the mechanistic links have not been explored. At low temperatures, some of the mechanisms of desiccation stress and cold stress - particularly loss of ion homeostasis - appear to overlap in chill-susceptible insects (Sinclair et al., 2013), and ice in the environment can dehydrate permeable, unfrozen Collembola (Holmstrup et al., 2002). Some soil-dwelling Collembola in moist Arctic habitats exploit this in a strategy termed cryoprotective dehydration (Holmstrup et al., 2002; Sørensen and Holmstrup, 2011). The Antarctic Cryptopygus antarcticus shows responses to dehydration consistent with the capacity for cryoprotective dehydration. Thus, water availability could directly impact collembolan survival and also modify resistance to thermal stressors in ways that are currently poorly understood.
A less well-explored potential interaction among stressors traverses the boundary between the changes to the physical environment wrought by climate change and the all-pervasive pollution output of human activities. Based on work using Collembola as an ecotoxicological model, we know that they are susceptible to pollution (Hopkin, 1997; Mooney et al., 2019) and that at least some pollutants modify responses to other environmental stressors in temperate Collembola. For example, some detergents and polycyclic aromatic hydrocarbons reduce desiccation and high temperature tolerance (Sjursen et al., 2001; Sørensen and Holmstrup, 2005; Mikkelsen et al., 2019), mercury reduces cold tolerance (Holmstrup et al., 2008), and microplastics perturb the gut microbiota (Ju et al., 2019). Conversely, increased temperatures make Collembola more susceptible to copper toxicity (Callahan et al., 2019). Many of these experiments rely on high concentrations of pollutants that are not environmentally realistic. However, there is significant pollution associated with human activities at the local scale in both the Arctic and Antarctic (Errington et al., 2018; Ferguson et al., 2020; Rudnicka-Kępa and Zaborska, 2021). Pollutants generated elsewhere are also deposited into polar soils, including persistent organic pollutants (e.g. pesticides; Ma et al., 2011), heavy metals (Chu et al., 2019), black carbon (Schacht et al., 2019), nitrogen (Stewart et al., 2014), nanoparticles (Kumar et al., 2012), and microplastics (Obbard, 2018; González-Pleiter et al., 2021). In addition to the increased deposition of many of these pollutants, warming can exacerbate their impacts. For example, newly-active organic matter in melting permafrost mobilises methylmercury (Yang et al., 2016; Obrist et al., 2017), while warming coupled with atmospheric nitrogen deposition increases currently limited plant productivity (Stewart et al., 2014). Experiments using lower, more realistic levels of pollutants will be needed to properly evaluate their potential influence. The longer-term implications of these pollutants in determining the resistance or recovery potential of polar Collembola are currently unknown and require urgent attention.
Polar Collembola can display remarkable physiological tolerances to extreme conditions, with individuals in some cases tolerating temperatures below -30 °C and above +30 °C. Research to date has also highlighted high levels of physiological plasticity and acclimation capacity, particularly in cold tolerance (Sinclair and Sjursen, 2001a; Sinclair et al., 2003; Worland and Convey, 2008; Bahrndorff et al., 2007). The impacts of winter warming are largely unknown owing to the inherent logistical constraints of studying polar Collembola in winter. However, summer abundances of Collembola in Greenland have declined in response to warmer winters, with impacts more pronounced in drier habitats (Koltz et al., 2018a). This reinforces the need to better study the impacts of multiple interacting stressors, particularly in natural communities. Stressors can further prompt transgenerational and multigenerational impacts in Collembola (Hafer et al., 2011; Szabó et al., 2019), although this has not been investigated in polar species. Future physiological studies should explicitly account for size and developmental stages to assess levels of intraspecific variation in thermal tolerances. Disproportionate effects on juveniles are likely to result in high mortality (low resistance), and fewer individuals reaching maturity would also limit recovery (Widenfalk et al., 2018). A few Arctic studies have begun comparing physiological tolerances of species with widely distributed populations (e.g. Bahrndorff et al., 2007; Sørensen and Holmstrup, 2013; Sengupta et al., 2016; Sengupta et al., 2017). No such study has been conducted in the Antarctic despite the high levels of genetic differentiation documented among populations. Until recently, the links between physiology and genetics had not been explored for Collembola, and available studies include only a few genes and proteins (e.g. heat shock proteins, aquaporins; Faddeeva et al., 2015; Faddeeva-Vakhrusheva et al., 2016; Cucini et al., 2021). A broader investigation of the molecular responses to environmental stressors and interacting stressors would help determine whether polar Collembola have the genetic capacity and physiological adaptability to survive climate changes.
Biotic interactions and resistance to climate change
Polar Collembola often live in biologically simple systems with limited trophic structure (Hogg et al., 2006). In Antarctica, biotic interactions are particularly limited and abiotic conditions are largely thought to regulate populations (Caruso et al., 2019; Lee et al., 2019). By contrast, terrestrial Arctic food-webs are more complex, with higher levels of trophic structure, competition, predation (e.g. by spiders and mites) and a wider range of available trophic niches (Danks, 1990; Post et al., 2009; Koltz et al., 2018b). Climate change is likely to disrupt terrestrial food-webs as biota across trophic levels exhibit differential responses. For example, flowering periods may no longer coincide with peak availability/activity of pollinators (Urbanowicz et al., 2018; Tiusanen et al., 2019). Predicting the potential biotic responses of lower trophic level taxa is therefore critical for understanding ecosystem resilience.
Collembola mediate complex soil decomposition interactions
Collembola are omnivorous detritivores with diets of bacteria, fungi, and plant and animal material (Hopkin, 1997). Current understanding of polar collembolan diets is largely based on dissection and morphological identification of gut contents (Broady, 1979; Hodkinson et al., 1994; Davidson and Broady, 1996). Such studies indicate considerable flexibility in feeding habits and that the studied species can readily exploit a wide range of available food resources (Broady, 1979; Davidson and Broady, 1996; Bokhorst et al., 2007). In a changing environment, a generalist opportunistic diet should confer a high level of resilience for individual taxa.
Collembola tend to have the greatest influence on soil decomposition when feeding on microbial communities dominated by fungi (Wardle et al., 2004; A'Bear et al., 2014). In Alaska, Koltz et al. (2018c) found that 99.6% of carbon cycled by invertebrates originated from detrital matter and was primarily cycled by fungal consumers such as Collembola. Collembolan grazing pressures on bacteria and fungi also limit the ability of microbes to compete with plants for available nutrients (Chauvat and Forey, 2021). Accordingly, any changes to Collembola feeding habits are likely to strongly influence soil nutrient cycling (Koltz et al., 2018c), although species-specific and/or ontogenetic shifts in food preferences are currently unknown. Under warming conditions, Collembola abundances will increase, thus increasing community-level detritivory. Higher temperatures can also increase rates of metabolism and potentially rates of compensatory feeding, particularly if food quality declines (Sweeney and Vannote, 1978; Verberk et al., 2021). Together these processes will increase rates of decomposition and nutrient cycling in polar ecosystems, provided they are not limited by water availability or by increased predation (Thakur et al., 2017).
Increased habitat complexity will alter food-web structure
Warmer air temperatures are resulting in the greening of both polar regions, which increases habitat complexity (Parnikoza et al., 2009; Myers-Smith et al., 2020; Peng et al., 2020). Warming is also aiding the survival and spread of non-native plant species (Chown et al., 2012; Hughes et al., 2015; Newman et al., 2018). The establishment of non-native plants modifies local abiotic conditions, including increased shading, soil moisture, and organic matter, as well as increasing available trophic niches (Coulson et al., 2003; Convey and Peck, 2019). Collembola are often the dominant arthropods in simple soil ecosystems. However, as polar systems increase in biotic complexity, additional arthropod taxa, including non-native species, are likely to establish (Coulson et al., 2003; Convey and Peck, 2019). While increased niche diversity may foster an associated increase in Collembola diversity, the presence of other arthropod taxa will shift community composition and decrease the relative role of Collembola. In particular, the arrival of ecosystem engineers, such as earthworms, may pose significant challenges to current inhabitants by improving soil habitability and the likelihood of further non-native species establishing (Hughes et al., 2013; Hughes et al., 2020; Wackett et al., 2018; Blume-Werry et al., 2020). The arrival and spread of predators such as invasive carabid beetles on South Georgia and the Kerguelen Islands pose a direct threat to extant populations, although they are currently unlikely to survive in continental Antarctica.
Collembola are a major prey item of spiders and mites in the Arctic (Koltz et al., 2018b) and of predatory mites in the continental (Gless, 1967; Fitzsimons, 1971) and maritime Antarctic (Jumeau and Usher, 1987). In the maritime Antarctic, densities of the predatory mite Gamasellus racovitzai and predation rates are currently low enough that they are unlikely to have a significant impact on Collembola abundances (Lister et al., 1987). However, many mites are more heat and desiccation tolerant than Collembola, and predation by mites is likely to increase under warming conditions (Everatt et al., 2013). Under experimental conditions, increased densities of Arctic wolf spiders led to a decline in Collembola abundances with an accompanying decline in decomposition rates (Koltz et al., 2018b). Under warming conditions, collembolan abundance still declined even with low predator abundance (Koltz et al., 2018b). Long-term observational data from Greenland showed that Collembola declined from 1996 until 2011, during which time spider abundances increased (Koltz et al., 2018a). These trends were reversed in 2011 with cooler summer temperatures and a resulting decline in spider abundances and an increase in Collembola abundances across three different habitats (Høye et al., 2021). Thus, spiders benefitted while Collembola were negatively affected by higher temperatures (Høye et al., 2021).
Predation pressure will likely exacerbate other stressors, particularly for smaller species and juveniles. For example, Thakur et al. (2017) found that Proisotoma minuta were driven to extinction under a combination of warming and predation pressure while the larger Folsomia candida were less affected. In another experiment involving four species of Collembola, predation had the largest impact on the two smaller species, with the highest impact (75% decline in abundance) on the smaller, less mobile, species (Aupic-Samain et al., 2021). In this same study, a combination of low moisture, warming, and predation resulted in a >89% decrease in abundance of all four species relative to low moisture and warming alone (Aupic-Samain et al., 2021).
Invasive Collembola may have a competitive advantage
The arrival of non-native and/or invasive species will provide further challenges to inhabitants of polar ecosystems. New arrivals have the potential to disrupt existing residents through increased competition for resources, elevated predation pressure and the transformation of soil systems through the arrival of ecosystem engineers. The potential for successful establishment of invasive species varies across polar regions, with some locations exposed to considerably higher rates of propagule pressure (Chown et al., 2012; Newman et al., 2018; Vega et al., 2019). Unfortunately, some of the most rapidly warming polar regions are also exposed to the most propagule pressure. The maritime Antarctic and the western Antarctic Peninsula receive the highest number of visitors each year, both researchers and tourists, which increases the risk of species introductions (Chown et al., 2012; Duffy et al., 2017; Hughes et al., 2020). In the Arctic, increased shipping, human occupation, mineral exploration and tourism are also likely to increase the risk of introductions (Ruiz and Hewitt, 2009; Coulson et al., 2013; Hodkinson et al., 2013).
In Antarctica, high endemism and currently low levels of competition and predation suggest an increased vulnerability to non-native and invasive species (Hughes et al., 2015; Enríquez et al., 2018; Chown et al., 2022). The Antarctic Circumpolar Current has restricted all but sporadic natural dispersal events to the continent. Increasing habitat availability coupled with increased human activity is likely to reduce dispersal barriers for non-native species. Many sub-Antarctic and maritime Antarctic islands are already climatically suitable for non-native Collembola. Hypogastrura viatica was first identified on Deception Island in the 1940s and already appears to be displacing and outcompeting Cryptopygus antarcticus in South Georgia (Convey et al., 1999; Hughes et al., 2015; Enríquez et al., 2018). A total of 36 non-native Collembola species was reported by Baird et al. (2019) in the Antarctic (including the sub-Antarctic). Recent range expansions have also been reported for existing non-native species (Greenslade and Convey, 2012; Phillips et al., 2017; Enríquez et al., 2019). Areas of the western Antarctic Peninsula are predicted to become habitable for globally invasive species within the next decades (Duffy et al., 2017). Protaphorura fimata, a palearctic species (already found in the sub-Antarctic), is one of 13 species (including plants, freshwater and marine invertebrates) identified as posing a high risk of becoming invasive in the Antarctic Peninsula region. Even with increasing climate suitability, the establishment of non-native species in continental areas of the Antarctic (e.g. the McMurdo Dry Valleys) remains less likely (Duffy et al., 2017; Duffy and Lee, 2019). Unfortunately, warming may still trigger the loss of diversity if an existing endemic species can, under changing conditions, outcompete other native species. In Svalbard, six non-native Collembola have established accidentally from imported soils, although none appear to have spread (two are considered high risk) (Coulson, 2015). Since the last glacial maximum, the Arctic has been susceptible to natural dispersal from lower latitude species, some of which establish, while others are considered vagrant (Coulson et al., 2002b; Alsos et al., 2007). Under warming conditions, non-native and vagrant species could become invasive. With the exception of Svalbard, the identification and monitoring of non-native Collembola in the Arctic is currently limited by a lack of baseline data (Hogg and Hebert, 2004; Porco et al., 2014; Coulson, 2015).
The relative resilience of native and non-native Collembola will depend on local environmental conditions. Native species are particularly adapted to their local environment and many new arrivals may not survive. Accordingly, successful arrivals are likely to exhibit traits such as active dispersal (Enríquez et al., 2018), generalist feeding habits, and wider thermal tolerances. Non-native species (regardless of phylogeny or place of origin) consistently have higher upper thermal limits compared to native species (Slabber et al., 2007; Janion-Scheepers et al., 2018; Phillips et al., 2020). In a warmer and wetter climate, heightened thermal tolerances coupled with faster reproductive rates may provide non-native species with a competitive edge and pose a significant threat to the resilience of existing native taxa.
Resilience of polar Collembola in a changing world
The resilience of polar Collembola to climate change is predicated on appropriate resistance capacities and the ability to recover following disturbances. Potential insights can be gained from historical responses to glacial cycles, observational evidence in Antarctica and the Arctic, as well as recovery rates following disturbances in lower latitude environments. However, these lower latitude studies often monitor recovery following short-term, acute disturbances while longer-term climate changes are unlikely to return to pre-disturbance conditions. In this context, successful recovery would be when communities reach a new stable state which would occur through a combination of local adaptation and recolonization processes.
Polar Collembola have persisted through glacial cycles (McGaughran et al., 2019; Collins et al., 2020), with recolonization dependent on migration of individuals from refugial habitats following disturbance. Resilience will depend on whether individuals already inhabit (or are able to migrate to) future refuges from the most damaging environmental changes. Numerous Antarctic glacial refugia have been identified through phylogeographic analyses, with geothermal sites also representing possible oases (Fraser et al., 2014; Collins et al., 2020). Past Arctic refugia have included areas of Beringia and much of Siberia in addition to localised cryptic refugia and nearby lower latitude areas (Babenko, 2005; Ávila-Jiménez and Coulson, 2011). Whether similar refugia to escape climate change exist remains to be seen. The existence of Holarctic species indicates the potential for widespread dispersal via open sea-ways followed by subsequent localised diversification (Ávila-Jiménez and Coulson, 2011). Glacial retreat provides a useful analogue for likely recolonisation scenarios in Antarctica and areas of the Arctic (Hågvar, 2010; Hågvar and Pedersen, 2015). Glacial retreat at a High Arctic Svalbard site revealed that after initial colonisation by three to four species (within two years), additional species did not arrive until 100-150 years later. Two low-mobility, deeper-dwelling species only appeared towards the end of the chronosequence, 1900 years later. This suggests that recovery can be very slow, and if resistance capacities are limited, the overall resilience of polar Collembola is likely to be limited in the face of rapid environmental change.
Antarctic Collembola appear particularly vulnerable to disturbance and are exceedingly slow to recover. Anecdotal evidence from Ross Island in the Ross Sea region suggests that the relatively widespread Gomphiocephalus hodgsoni was once common in the vicinity of Hut Point near Scott's 1901 expedition hut (Wise, 1967). Construction activities for McMurdo Station and Scott Base, in the late 1950s, would have resulted in considerable disruption. Despite numerous searches by several different researchers, Collembola have not been recorded on this part of Ross Island for at least 60 years (Stevens and Hogg, 2002; Beet and Lee, 2021). The presence of extant populations of G. hodgsoni within ∼25 km of the Station suggests that individuals do not effectively disperse and/or recolonise disturbed areas. This is further supported by high levels of genetic differentiation observed within single Dry Valleys, indicating limited dispersal and isolation over evolutionary time scales. Accordingly, recovery within decadal timescales is unlikely for polar Collembola (Convey, 1996). Recovery in areas of human disturbance will be further complicated by the possibility of alien and invading species which may be better at colonising these disturbed sites (Duffy and Lee, 2019; Hughes et al., 2020).
In the Arctic, recovery of taxa following disturbance is likely to be more successful relative to the Antarctic. Higher densities, more widespread distributions, increased inter-population connectivity, and increased niche diversity all increase intrinsic capacities to recolonize disturbed areas. For example, increased vegetative abundances increase niche diversity (and potential refugia), which, when coupled with higher abundances, reduces the likelihood of complete extirpation of a population or species (Asmus et al., 2018; Myers-Smith et al., 2019). However, recovery would be dependent on the existing inhabitants of the disturbed communities and the proportionate rates of predation. If the surviving community is predominantly composed of smaller individuals, predators could have a more dramatic effect, thus limiting any potential recovery (Thakur et al., 2017; Koltz et al., 2018b). Alternatively, recovery could be facilitated if predators were also negatively affected by disturbance (Koltz et al., 2018a; Høye et al., 2021).
Studies on a diverse range of disturbances such as opencast mining (Dunger et al., 2002; Dunger et al., 2004), fire (Huebner et al., 2012; Malmström, 2012), deforestation (Čuchta et al., 2019) and drought (Lindberg and Bengtsson, 2005) have identified common patterns in Collembola recovery following acute disturbances. The first species to recolonise are generally those with a high dispersal capacity, surface-dwelling (epiedaphic) nature, and generalist/opportunistic feeding habits (Malmström, 2012). In Antarctica, most taxa live in the soil profile or beneath rocks and appear to have very limited dispersal, and hence low ability to recolonize habitats (Janetschek, 1967). In the Arctic, collembolan communities are more diverse and disturbances are likely to have variable effects on different taxa. Deeper-dwelling, asexually reproducing species are slower to recolonise and thus are likely to be less resilient to disturbances (Huebner et al., 2012; Malmström, 2012). Ultimately, while some species will recover within decadal timescales, whole communities will not (see Malmström, 2012). Lower latitude studies have demonstrated that whole-community recovery is slow: even 50 years after opencast mining, Collembola assemblages still failed to resemble neighbouring undisturbed communities (Dunger et al., 2004).
Conclusions
Polar taxa have adapted over millennia to habitats that are now changing faster than any others on Earth. Polar Collembola possess a suite of characteristics that enable their survival in extreme conditions and may help them adapt to changing conditions. These include high levels of genetic diversity, wide thermal tolerance ranges, physiological plasticity, generalist-opportunistic feeding habits and considerable capacity for behavioural avoidance. However, the biggest threats to polar Collembola are likely to be increasingly extreme and variable temperature regimes, drought, and changing biotic interactions. More diverse communities are likely to have some member taxa that are able to resist or recover from disturbances (Somero, 2010). Climate change will exacerbate the variance and extremes of environmental conditions, which is generally assumed to favour Collembola adapted to variability. Overall, deeper-dwelling species that fail to resist climate change may not recover on ecologically relevant timescales, especially given the current, rapid rates of change (Malmström, 2012). The Arctic, with higher levels of diversity, may have higher levels of taxonomic redundancy which could moderate ecosystem responses (Koltz et al., 2018a; Meredith et al., 2019). Unfortunately, areas such as the McMurdo Dry Valleys of Antarctica, with very low levels of taxonomic diversity, are potentially more vulnerable, and Collembola there will probably see the most profound changes.
Ongoing understanding of the issues covered in our review will facilitate an integrative approach to studying the effects of climate change on polar Collembola. For example, in Antarctica, established baseline genetic data present opportunities to investigate the interaction between genetic diversity and physiological tolerances at the finer population and individual scale, without the widespread influence of biotic interactions. Profitable areas of research that would benefit from immediate attention include: 1) improved baseline levels of species and genetic diversity for the Arctic fauna; 2) evaluating the behavioural avoidance capacity of polar Collembola to stressors in natural systems; 3) determining physiological tolerances (heat, cold, drought, pollution, and their interactions) for a wider range of Arctic and Antarctic taxa; 4) using genome and transcriptome sequencing to understand the genetic and physiological mechanisms of polar Collembola responses to stressors and their interactions; and 5) employing molecular tools to catalogue the diets of a broad array of species and life stages. Collectively, these avenues of research will help to further illuminate the resilience of polar Collembola as well as their role in mediating the resilience of wider polar terrestrial ecosystems to climate change.
Declaration of Competing Interest
The authors declare no financial conflicts associated with this research. Brent J. Sinclair is Editor-in-Chief of Current Research in Insect Science. Given his role, he had no involvement in the evaluation or peer review of this manuscript, and has no access to information regarding its peer review. | 2023-01-19T22:53:24.445Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "6804224cecb621dc79954266c2ce60b5910e6b57",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.cris.2022.100046",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6804224cecb621dc79954266c2ce60b5910e6b57",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
117767267 | pes2o/s2orc | v3-fos-license | Search for ZZ and ZW Production in ppbar Collisions at sqrt(s) = 1.96 TeV
We present a search for ZZ and ZW vector boson pair production in ppbar collisions at sqrt(s) = 1.96 TeV using the leptonic decay channels ZZ --> ll nu nu, ZZ --> l l l' l' and ZW --> l l l' nu. In a data sample corresponding to an integrated luminosity of 194 pb^-1 collected with the Collider Detector at Fermilab, 3 candidate events are found with an expected background of 1.0 +/- 0.2 events. We set a 95% confidence level upper limit of 15.2 pb on the cross section for ZZ plus ZW production, compared to the standard model prediction of 5.0 +/- 0.4 pb.
The measurements of ZZ and ZW production provide a direct test of the standard model (SM) prediction of triple-gauge-boson couplings [1]. The presence of unexpected neutral triple-gauge-boson couplings (ZZZ and ZZγ) can result in an enhanced rate of ZZ production, and an anomalous WWZ coupling can increase the ZW production rate above the SM prediction. The WWZ and ZZγ couplings have been studied by the CDF and DØ experiments through the study of WW, ZW, Wγ, and Zγ production [2][3][4][5][6][7][8]. The DØ experiment has measured an upper limit on the cross section for ZW production [4]. No limit has been set on the cross section for ZZ production from hadron collisions, but production properties have been studied at LEP II in e+e− collisions at √s = 183 − 209 GeV [9]. A comprehensive review of the limits on anomalous WWZ, ZZZ, and ZZγ couplings at LEP II can be found in Ref. [10]. The production of WZ and ZZ boson pairs is also of interest because the decays cause significant backgrounds in searches for the SM Higgs boson.
In this report we present a search for ZZ and ZW production using the three decay modes ZZ → llνν, ZZ → lll′l′, and ZW → lll′ν, where l and l′ are electrons or muons, predominantly from direct W or Z decays, but also with a small contribution from the leptonic decay of tau leptons. This study is based on (194 ± 12) pb^-1 [11] of data collected by the upgraded Collider Detector at Fermilab (CDF II) from March 2002 to September 2003 using ppbar collisions at √s = 1.96 TeV. CDF II is a general-purpose detector at the Tevatron accelerator at Fermilab. The main components used in this analysis are a silicon vertex detector, a central tracking drift chamber, central (|η| < 1.1 [12]) and forward (1.1 < |η| < 3.6) electromagnetic and hadronic calorimeters, and muon chambers. The silicon detector and central tracking chamber are located inside a 1.4 T superconducting solenoidal magnet. A more detailed description of the detector can be found in the CDF technical design report [13] and in a recent publication describing a measurement of WW pair production [8].
The data samples are collected by a trigger system that selects events having electron candidates in the central calorimeter with E_T > 18 GeV, or muon candidates with p_T > 18 GeV/c. Events for this analysis are then selected by requiring at least two leptons with E_T > 20 GeV and |η| < 2.5 for electrons, or p_T > 20 GeV/c and |η| < 1 for muons. An electron is identified as energy deposited in the central electromagnetic calorimeter which is matched to a well-measured track reconstructed in the central tracking chamber, or, for an electron with |η| > 1.2, as energy deposited in the forward electromagnetic calorimeter with an associated track utilizing a calorimeter-seeded silicon tracking algorithm [14]. In addition, electrons must have appropriate shower profiles in the electromagnetic calorimeters. A muon is identified as a track in the central tracking chamber, with energy deposition in the calorimeter consistent with a minimum ionizing particle, and with a track segment in the muon chambers. If a minimum ionizing track points towards a gap in the muon chamber coverage, it is still considered a muon candidate in events that have an additional electron in the central calorimeter or a muon with a muon chamber track segment. All charged leptons are required to be isolated from additional nearby calorimeter activity. The transverse energy deposited around an electron or muon in a cone of radius ΔR = √(Δφ^2 + Δη^2) = 0.4, excluding the calorimeter energy matched to the lepton candidate, is required to be less than 10% of the electron E_T or muon p_T.
The signature of neutrinos in the decays of ZZ → llνν and ZW → lll′ν is missing transverse energy (E_T^miss), measured from the imbalance of E_T in the calorimeter and the escaping muon p_T (when muon candidates are present).
The next-to-leading-order (NLO) ZZ and ZW cross sections in ppbar collisions at √s = 1.96 TeV are calculated with the MCFM [15] program using the CTEQ6 parton distribution functions [16]. They are σ(ppbar → ZZ) = (1.39 ± 0.10) pb and σ(ppbar → ZW) = (3.65 ± 0.26) pb. We study events in three categories designed to encompass the main leptonic branching ratios of the ZZ and ZW decays. The first includes events with four charged leptons, which is sensitive to ZZ → lll′l′ (l and l′ = e, µ, τ) with a branching ratio of 1.0%. Since we only select events with electrons and/or muons, we are only sensitive to final-state taus through their subsequent decay to leptons. The second category, which includes events with three charged leptons plus E_T^miss, consists predominantly of ZW → lll′ν (branching ratio of 3.3%). Events from ZZ → lll′l′, where one lepton is not identified, can also fall into this category. The third category includes events with two charged leptons plus E_T^miss, which is sensitive to ZZ → llνν (branching ratio of 4.0%) and ZW → lll′ν, where one lepton is not identified.
Our strategy is to first select events containing a Z boson and then require additional leptons and/or large E_T^miss in the event. The Z boson is identified by one pair of same-flavor, oppositely-charged leptons (e+e− or µ+µ−) with an invariant mass between 76 GeV/c^2 and 106 GeV/c^2. The four-lepton category selects a second lepton pair using the same criteria. The three-lepton plus E_T^miss category selects, in addition to the Z boson, a third charged lepton with E_T > 20 GeV (p_T > 20 GeV/c for muons) and E_T^miss > 20 GeV. For two-lepton events, in order to reduce the significant contribution from WW boson pairs, we select Z bosons using a narrower invariant mass range of 86 < M_ll < 96 GeV/c^2. Two additional requirements are designed to suppress the Drell-Yan background in the two-lepton category. The first requires the E_T^miss significance (E_T^sig) to be larger than 3 GeV^1/2, where E_T^sig is defined as E_T^miss / √(ΣE_T), with the sum running over all calorimeter towers above a given threshold. If muons are identified in the event, E_T is also corrected for the muon momenta. We find that E_T^sig is a better discriminant than E_T^miss in controlling the Drell-Yan background, and that the maximum expected signal significance is achieved when E_T^sig is at least 3 GeV^1/2. The second requirement is that Δφ between E_T^miss and the closest lepton or jet, Δφ(E_T^miss, lepton/jet), be larger than 20°, in order to reduce the likelihood of falsely-reconstructed large E_T^miss due to mismeasured jets or leptons. Finally, in the two-lepton category we only consider events with zero or one jet, to suppress ttbar background. In this analysis, jets are reconstructed using a cone of fixed radius ΔR = 0.4 [17], and are counted if E_T > 15 GeV and |η| < 2.5.
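As a concrete illustration of the two quantities defined above, the following Python sketch computes the E_T^miss significance and the Δφ separation for a toy event. This is not the CDF analysis code; the event layout (simple lists of tower E_T values in GeV and object φ angles in radians) is our own assumption:

```python
import math

def met_significance(met_gev, tower_et_gev):
    """E_T^sig = E_T^miss / sqrt(sum of tower E_T), in units of GeV^(1/2)."""
    sum_et = sum(tower_et_gev)
    return met_gev / math.sqrt(sum_et) if sum_et > 0 else float("inf")

def min_dphi_deg(met_phi, object_phis):
    """Smallest azimuthal separation (degrees) between E_T^miss and any lepton or jet."""
    def dphi(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)
    return math.degrees(min(dphi(met_phi, p) for p in object_phis))

# Toy event: 40 GeV of E_T^miss against 120 GeV of summed tower E_T
# gives E_T^sig ~ 3.65 GeV^(1/2), which would pass the > 3 GeV^(1/2) cut.
print(met_significance(40.0, [30.0, 25.0, 20.0, 45.0]))
# The closest lepton/jet is ~63 degrees away in phi, so the > 20 degree cut is also passed.
print(min_dphi_deg(1.2, [0.1, 2.8, -2.0]))
```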
The main background in the four-lepton and three-lepton categories is from "fake-lepton" events, in which jets have been misidentified as leptons in Z/W + jets events. The backgrounds in the two-lepton category include WW, ttbar, Drell-Yan, and fake-lepton events.
For each of the three categories of events, the total efficiency for accepting a ZZ or ZW event can be expressed as ε_total = ε_ID × ε_trigger × ε_geom-kin, where ε_ID is the efficiency for identifying the number of leptons appropriate for a given category, ε_trigger is the efficiency for the event to pass the trigger requirements, and ε_geom-kin is the efficiency for the leptons to fall within the geometric acceptance of the detector and for the events to pass all kinematic requirements for each signature. The total efficiencies times branching ratios are listed in Table I. The lepton identification efficiencies are measured using Z → ee/µµ data. The Z → ee events are selected with one identified electron and a second deposition of energy in the electromagnetic calorimeter and an associated track. The Z → µµ events are selected with one identified muon and a second track. The lepton pairs are required to have an invariant mass consistent with a Z and tracks of opposite charge. The unbiased lepton is used to measure the identification efficiency. The trigger efficiencies are measured using data from independent trigger paths. The geometric and kinematic efficiencies are determined using the pythia event generator [18] with a geant-based detector simulation [19]. Table I also shows the expected numbers of ZZ and ZW events, each calculated as σ × ε_total × ∫L dt, where σ is the aforementioned NLO theoretical cross section. In each of the three categories a relatively small, but non-negligible, fraction of the total efficiency is due to final-state tau leptons decaying to electrons and muons. Overall, we expect 2.31 ± 0.29 ZZ plus ZW events in (194 ± 12) pb^-1 of data.
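The expected-event formula above can be checked against the numbers quoted in the text. The short Python sketch below back-calculates the combined efficiency × branching ratio implied by the quoted cross sections, luminosity, and the 2.31 expected events; the per-category values of Table I are not reproduced here, so this is only an order-of-magnitude consistency check rather than the paper's own numbers:

```python
# Quantities quoted in the text
lumi_pb = 194.0         # integrated luminosity, pb^-1
sigma_zz_pb = 1.39      # NLO sigma(ppbar -> ZZ), pb
sigma_zw_pb = 3.65      # NLO sigma(ppbar -> ZW), pb
n_expected = 2.31       # quoted expected ZZ + ZW events after all selections

def expected_events(sigma_pb, eff_times_br, lumi_pb):
    """N = sigma x (total efficiency x branching ratio) x integrated luminosity."""
    return sigma_pb * eff_times_br * lumi_pb

# Average efficiency x BR (over both processes and all three categories) implied by the text:
implied_eff_br = n_expected / ((sigma_zz_pb + sigma_zw_pb) * lumi_pb)
print(f"implied <eff x BR> ~ {implied_eff_br:.4%}")   # roughly 0.24%

# Plugging it back reproduces the quoted expectation, as a consistency check.
print(expected_events(sigma_zz_pb + sigma_zw_pb, implied_eff_br, lumi_pb))   # ~2.31
```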
The systematic uncertainties associated with the signal acceptances are dominated by the Monte Carlo simulation of E_T^sig in the two-lepton category. This uncertainty is estimated by comparing distributions of E_T^sig between data and Monte Carlo in inclusive W events, where neutrinos from the W decays produce large E_T^sig. Relative to the signal acceptance, the uncertainty due to E_T^sig is 10% for ZZ and 6% for ZW. The other systematic uncertainties include those from lepton identification efficiencies (1%), trigger efficiencies (1%), the efficiency of the zero- or one-jet requirement (2%), dependence on different PDFs (2%), and the calorimeter energy scale and resolution (3%). The total uncertainty in the efficiency estimate is 11% for ZZ and 8% for ZW.
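The quoted totals are consistent with adding the individual sources in quadrature; the text does not state the combination procedure explicitly, so treating the sources as independent is an assumption on our part:

```python
import math

def combine_in_quadrature(fractional_uncertainties):
    """Total relative uncertainty assuming independent sources."""
    return math.sqrt(sum(u ** 2 for u in fractional_uncertainties))

common = [0.01, 0.01, 0.02, 0.02, 0.03]   # lepton ID, trigger, jet requirement, PDFs, energy scale
zz_total = combine_in_quadrature([0.10] + common)   # 10% E_T^sig term for ZZ
zw_total = combine_in_quadrature([0.06] + common)   #  6% E_T^sig term for ZW
print(f"ZZ: {zz_total:.1%}, ZW: {zw_total:.1%}")    # ~10.9% and ~7.4%, quoted as 11% and 8%
```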
Backgrounds to the ZZ and ZW events are determined using a combination of data and Monte Carlo simulations. The WW, Drell-Yan, and ttbar background estimates are obtained using pythia Monte Carlo samples, with the expected numbers of events normalized to the theoretical cross sections: 12.4 pb for WW from the MCFM program, 330 pb for Drell-Yan from pythia (M(γ*/Z^0) > 30 GeV/c^2 and including a K-factor of 1.4 [20]), and 7 pb for ttbar [21]. A systematic uncertainty of 14% on the WW background results from the same effects that lead to the uncertainties on the ZZ and ZW acceptances. The Drell-Yan background has a 50% uncertainty from two main sources: 35% from the modeling of E_T^sig, and another comparable amount from Δφ(E_T^miss, lepton/jet). The first is estimated from the comparison of E_T^sig distributions between Drell-Yan data and Monte Carlo, and the second from the observed change in efficiency of the Δφ(E_T^miss, lepton/jet) requirement after adjusting the jet energy scale. The ttbar background has a 15% uncertainty due primarily to the uncertainty in the jet energy scale.
The fake-lepton background is obtained entirely using data. First, the probability for a jet to be misidentified as an electron or muon (fake rate) is estimated from jet-triggered data samples after subtracting real leptons from W and Z decays. The lepton fake rates are averaged over four samples with increasingly harder jet E_T spectra. The observed differences between the jet samples are used to estimate the uncertainties in the lepton fake rates. The fake-lepton background is then determined by applying these fake rates to jets in lepton-triggered events which would have passed the event selection had one jet faked a lepton. This background has a 41% uncertainty, dominated by the uncertainty associated with the lepton fake rates. The backgrounds in the three event categories are summarized in Table I.
After all selection criteria we observe 3 events in the data*, all of them in the two-lepton plus E_T^miss category (2 ee and 1 µµ), compared to a SM signal plus background expectation of 3.33 ± 0.42 events. (*Another event passes all the requirements for ZZ → eeee in the four-lepton category, except for an isolation cut on one electron [22].) In Figure 1 we present distributions of lepton p_T and η in the two-lepton plus E_T^miss category, comparing data to SM expectations. The probability for the background of 1.02 ± 0.24 events to fluctuate to give three or more events is 9%. Therefore, we set a 95% confidence level upper limit on the ZZ and ZW combined cross section by applying a Bayesian method [23] with a flat prior cross section probability above zero. Using the Poisson statistics for the data and including the assumed Gaussian uncertainties in the expected signal and background, we find that the ZZ and ZW combined cross section is less than 15.2 pb at the 95% confidence level. Using this analysis we conclude that about 1 fb^-1 of data is needed for a 3σ measurement of the SM cross section for ZZ + ZW production. In summary, a search for ZZ and ZW production in ppbar collisions at √s = 1.96 TeV has been performed using the leptonic decays of the vector bosons. In a data sample corresponding to an integrated luminosity of 194 pb^-1, 3 candidates are found with an expected background of 1.02 ± 0.24 events. The predicted number of ZZ and ZW events is 2.31 ± 0.29. A 95% confidence level limit on the sum of the production cross sections for ppbar → ZZ and ppbar → ZW is measured to be 15.2 pb.
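The two statistical statements in this paragraph — the 9% background-fluctuation probability and the Bayesian upper limit — can be reproduced in outline with the short Python sketch below. It uses only the central values quoted in the text and ignores the Gaussian uncertainties on signal efficiency and background that the paper folds in, so the limit it returns is on the number of signal events, not directly the 15.2 pb cross-section limit:

```python
import math

def poisson_p_ge(n_obs, mu):
    """P(N >= n_obs) for a Poisson-distributed count with mean mu."""
    p_lt = sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n_obs))
    return 1.0 - p_lt

# Probability that a background of 1.02 events fluctuates up to 3 or more events:
print(f"background p-value = {poisson_p_ge(3, 1.02):.3f}")   # ~0.084 with central values; the paper quotes 9%

def bayes_upper_limit(n_obs, bkg, cl=0.95, s_max=30.0, steps=30000):
    """Bayesian upper limit on the mean number of signal events s with a flat prior (s >= 0);
    the posterior is proportional to Poisson(n_obs; s + bkg), uncertainties ignored."""
    ds = s_max / steps
    post = [((i * ds + bkg) ** n_obs) * math.exp(-(i * ds + bkg)) for i in range(steps)]
    total = sum(post)
    running = 0.0
    for i, p in enumerate(post):
        running += p
        if running >= cl * total:
            return (i + 1) * ds
    return s_max

# With 3 observed events and 1.02 expected background this gives roughly 6-7 signal events;
# converting that into the quoted 15.2 pb additionally requires the efficiencies, luminosity,
# and the Gaussian uncertainties treated in the paper.
print(f"95% CL limit on signal events ~ {bayes_upper_limit(3, 1.02):.1f}")
```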
FIG. 1. Distributions of (a) lepton p_T and (b) lepton η of the candidate data events, and the expected SM contributions in the two-lepton plus E_T^miss category.
TABLE I. The expected contributions from SM ZZ, ZW and background sources in 194 pb^-1, and the observed number of candidates in the data. The parentheses show the total efficiency times branching ratio for accepting ZZ or ZW events. Systematic and statistical uncertainties, and the uncertainties of the ZZ and ZW NLO cross sections, are included. | 2019-04-14T02:27:20.748Z | 2005-01-10T00:00:00.000 | {
"year": 2005,
"sha1": "22f71c096e007340e9a75209fc098f509ee68394",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/hep-ex/0501021",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3e854de155f53f8183394e73b85a89734dddebf9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255746356 | pes2o/s2orc | v3-fos-license | Phlegmonous Gastritis in a Patient With Nonalcoholic Steatohepatitis-Related Cirrhosis: A Case Report and Review of Literature
It is sometimes difficult to diagnose phlegmonous gastritis clinically. We herein present a rare autopsy report of a patient with phlegmonous gastritis associated with nonalcoholic steatohepatitis-related cirrhosis. The patient died of hepatic failure two weeks after exacerbation of anorexia and rapid progression of liver dysfunction. Autopsy revealed cholangitis lenta and sepsis-induced liver dysfunction, which was attributed to phlegmonous gastritis due to Moraxella (Branhamella) catarrhalis. Phlegmonous gastritis has seldom been reported in patients with liver cirrhosis. We believe the importance of keeping in mind that phlegmonous gastritis could be one of the complications of advanced liver cirrhosis.
Introduction
Phlegmonous gastritis is a nonspecific purulent inflammatory disease of the stomach that might involve only the submucosal layer or the full thickness of the stomach wall [1]. The risk factors for phlegmonous gastritis include alcohol abuse, diabetes mellitus, chronic gastritis, gastric ulcer, endoscopic treatments, and immunodeficient status [2]. The symptoms often include chills, fever, upper abdominal pain, nausea, and vomiting. However, the symptoms are sometimes mild, making diagnosis difficult.
Phlegmonous gastritis has been reported to be associated with various conditions, including gastric carcinoma/lymphoma, human immunodeficiency virus infection, Kaposi's sarcoma, myeloma, leukemia, Sjogren's syndrome, rheumatoid arthritis, hemodialysis, superior mesenteric artery syndrome, esophageal rupture, postsurgery, and patients treated with tumor necrosis factor (TNF) alpha receptor antagonists and chemotherapy [3][4][5][6]. Couveilhier provided the first autopsy report of phlegmonous gastritis in 1820 [7], and only one case of chronic hepatitis accompanied by phlegmonous gastritis has since been described [8]. Likewise, only one case report of liver cirrhosis (LC) with phlegmonous gastritis has been described [9]. We herein report a rare case of nonalcoholic steatohepatitis-related cirrhosis with sepsis-induced worsening of liver dysfunction secondary to phlegmonous gastritis.
Case Presentation
A 60-year-old male had been treated for nonalcoholic steatohepatitis-related cirrhosis at an outside hospital accredited by the Japan Society of Hepatology. He could not receive a hepatic transplant due to obesity (body weight: 111 kg; body mass index: 39 kg/m^2). One month before admission to our hospital, he developed Miller-Fisher syndrome as a complicating condition. He was transferred to our hospital for the treatment of cirrhosis and rehabilitation. He remained on the waiting list for the hepatic transplant, and we were in contact with the transplant department. He had been treated for portal vein thrombosis and hepatocellular carcinoma (using radiofrequency ablation) six years before starting this treatment. Physical examination at his current presentation showed grade 1 hepatic encephalopathy, jaundice, and splenomegaly. Laboratory findings on admission showed pancytopenia (white blood cell count {WBC}: 2.0×10^9/L; hemoglobin: 11.9 g/dL; platelets: 46×10^9/L), hypoproteinemia (total protein: 6.9 g/dL; albumin: 1.8 g/dL), liver dysfunction (total bilirubin {T-Bil}: 4.5 mg/dL; aspartate aminotransferase {AST}: 53 U/L; alanine aminotransferase: 17 U/L), hyperammonemia (212 μg/dL), and low prothrombin time (29.5%) (Table 1). This case was recognized as rapid progression of liver dysfunction in the last two weeks before death.
These findings did not change during this hospitalization. Esophagogastroduodenoscopy on hospital day 49 revealed chronic gastritis and grade A gastroesophageal reflux disease. The patient was treated with vonoprazan fumarate, ursodeoxycholic acid, levocarnitine chloride, branched-chain amino acid-enriched nutrients (Aminoleban EN; Otsuka Pharmaceutical Co., Tokyo, Japan), and spironolactone. No prominent changes in his clinical course were seen until clinical day 125. At this time, the patient experienced exacerbation of anorexia, but no fever, nausea, vomiting, epigastralgia, or diarrhea. Clouding of consciousness was present from the evening before death (Glasgow Coma Scale {GCS} 9 points). His body temperature on the day of his death was 39.0°C. The sequential organ failure assessment (SOFA) score on the day of death was 11 points, excluding one item (GCS = 9, score 3; mean arterial pressure {MAP} 53 mmHg, score 1; PaO2/FiO2 not measured; platelets 53×10^9/L, score 2; bilirubin 22.8 mg/dL, score 4; and creatinine 1.34 mg/dL, score 1). Thus, he was likely septic.
From day 125 to days 134 and 141, serum levels of T-Bil increased rapidly from 4.4 mg/dL to 8.4 and 22.8 mg/dL, aspartate aminotransferase (AST) increased from 60 U/L to 79 and 94 U/L, gamma-glutamyltransferase (γ-GT) increased from 9 U/L to 16 and 20 U/L, C-reactive protein (CRP) increased from 0.3 mg/dL to 0.19 and 3.48 mg/dL, and WBC increased from 1.50×10^9/L to 1.72 and 4.82×10^9/L. He died of progressive hepatic failure on day 141 (Table 1). Subsequently, an autopsy was performed. Two major pathologic lesions were observed, in the stomach and the liver.
In the stomach, the mucosa of the lesion was swollen and hemorrhagic, and histologic examination showed diffuse phlegmonous gastritis, characterized by infiltration of the entire thickness of the gastric wall with neutrophils and Gram-negative diplococci. The bacteria were considered to be endotoxin-producing Moraxella (Branhamella) catarrhalis (Figures 2a-2c). In the liver, micronodular cirrhosis corresponding to steatohepatitis-related cirrhosis was obvious, and small nodular liver cirrhosis and hepatic atrophy were observed. Macrovesicular steatosis and ballooned hepatocytes could be observed. Accordingly, this was indicative of nonalcoholic steatohepatitis-related cirrhosis; with the additional occurrence of sepsis-induced liver dysfunction, the biliary pathways were considered likely to have been damaged by endotoxin, resulting in the degeneration and subsequent proliferation of dilated canals of Hering near the portal vein. However, few pathological changes were evident in the lobular bile ducts. Further, numerous bile thrombi were seen congesting the proliferating canals of Hering, with scattered neutrophils and lymphocytes, corresponding to sepsis- or endotoxin-related cholangitis lenta (Figures 3a, 3b). This was the probable cause of death in this patient.
Discussion
Phlegmonous gastritis can be caused by the following bacteria: Streptococcus, Staphylococcus aureus, Escherichia coli, Haemophilus influenzae, and Proteus species [10]. Phlegmonous gastritis caused by B. catarrhalis has never been previously reported. The mortality rate of phlegmonous gastritis is reportedly about 42% [11]. As B. catarrhalis is widely known to cause respiratory tract infection, a possible mode of infection in this case was direct gastric mucosal invasion after swallowing the pathogenic bacteria. Another possible reason for infection was lowered resistance due to malnutrition, since our patient with cirrhosis suffered from loss of appetite, together with compromise of the sterilizing ability of the stomach by acid secretion inhibitors. Moreover, this patient was immunocompromised due to decompensated cirrhosis [12].
In this case, cholangitis lenta and sepsis-induced liver dysfunction were present. Cholangitis lenta (also known as ductular cholestasis) is usually caused by Gram-negative bacteria [13]. The mechanisms by which cholestasis contributes to sepsis-induced liver dysfunction might include the production of inflammatory mediators and dysfunctional bile excretion caused by endotoxin translocation [12,14]. Another mechanism might be lipopolysaccharide-impaired organic anion transportation inside the capillary bile ducts [12,15]. In addition, cholangitis lenta caused by inflammatory cells invading around capillary bile ducts [13] and sepsis-associated intrahepatic cholestasis with bile thrombi in capillary bile ducts have been reported [16].
On the other hand, possible translocation routes other than phlegmonous gastritis might exist. For example, this patient had cholangitis, which could be a bacterial translocation route. However, acute cholangitis caused by B. catarrhalis has never been reported previously. On histological examination, numerous bile thrombi, neutrophils, and lymphocytes were found within the proliferating canals of Hering, indicating sepsis-related cholangitis lenta. Moreover, B. catarrhalis was present with neutrophils in all layers of the stomach. Thus, it is most likely that phlegmonous gastritis developed via the oral route, causing sepsis and then cholangitis. Table 2 summarizes two cases in the literature of phlegmonous gastritis associated with chronic hepatic disease, in addition to the present case [8,9]. The underlying hepatic diseases were alcoholic liver disease and LC of unknown etiology. Both those patients were less than 60 years of age and had chronic liver disease that progressed rapidly and had a fatal course. However, each case was caused by a different bacterium. Hence, phlegmonous gastritis should be recognized as a severe complication of chronic liver disease.
Elevated T-Bil levels were previously reported as an indicator of poor prognosis in sepsis-induced liver dysfunction [17]. Additionally, γ-GT levels serve as an early marker of prognosis. Although our patient did not have fever or inflammatory findings, both T-Bil and γ-GT levels increased in the last two weeks before death. The reason why both bilirubin and γ-GT levels increased might be that cholestasis persisted in the parenchyma and bile infarction occurred.
The following diagnostic modalities are reportedly useful for diagnosing phlegmonous gastritis: ultrasound, esophagogastroduodenoscopy, and ultrasonic endoscopy [18]. In our case, if the patient had undergone esophagogastroduodenoscopy at an earlier stage of phlegmonous gastritis, the outcome might have been different. Thus, we believe that the accumulation and analysis of more cases are important for elucidating liver dysfunction secondary to phlegmonous gastritis.
Conclusions
We have reported a rare autopsy case of nonalcoholic steatohepatitis-related cirrhosis that caused sepsisinduced liver dysfunction via phlegmonous gastritis. Phlegmonous gastritis should be recognized as an important complication of chronic liver disease, given the possible lack of specific symptoms and capacity for rapid progression.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2023-01-12T16:43:34.027Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "4fea62a2afc3ae2274278f8e32c30d344475e511",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/129529-phlegmonous-gastritis-in-a-patient-with-nonalcoholic-steatohepatitis-related-cirrhosis-a-case-report-and-review-of-literature.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c41d52a706cdef3e5c045e9b80ae39ccf3a904a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242751445 | pes2o/s2orc | v3-fos-license | Spontaneous lumbo-sacral epidural hematoma presenting as Cauda Equina syndrome: a case report
Spinal epidural hematoma (SEH) is a rare condition and it accounts for less than 1% of all spinal canal space-occupying conditions. Spontaneous SEH most commonly occurs in the cervical and thoracic regions. It presents with neck or back pain with radiculopathy and/or myelopathy. Early surgical decompression is the recommended treatment in the presence of progressive neurological deficits. Spontaneous SEH (SSEH) presenting as Cauda Equina syndrome (CES) is rarely reported in the literature. We present a case of SSEH presenting late with CES. Due to the delay in presentation and multiple co-morbidities, the patient was treated conservatively.
INTRODUCTION
Spinal epidural hematoma is a rare condition and it accounts for less than 1% of all spinal canal space-occupying conditions. Spontaneous SEH (SSEH) commonly occurs in the cervical and thoracic regions, and early surgical decompression is usually recommended. We present a rare case of SSEH presenting as cauda equina syndrome, managed conservatively because of multiple co-morbidities and a high risk of bleeding.
CASE REPORT
A 54-year-old male foreign citizen presented with acute onset of severe low back pain radiating to both lower limbs with associated weakness of both legs, 4 weeks prior to his admission. His weakness resolved significantly over 3-4 hours of presentation. He also had urinary retention, constipation and hypoesthesia over the perianal region. He was diabetic and hypertensive. He had chronic kidney disease with renal failure, for which he had undergone renal transplant 25 years back. He had undergone aortic valve replacement surgery 3 months back and was on anticoagulants (Warfarin). He also had multiple other surgical procedures like cholecystectomy, hydrocele surgery, vocal cord surgery, and cataract surgery in the past. He was on tablet Warfarin 3.5 mg once daily and tablet Ecospirin 75 mg once daily and an immunosuppressant. His INR was 2.28. Neurological examination confirmed partial cauda equina syndrome with good motor power (Grade 4), intact sensory system and urinary retention requiring an indwelling catheter. MRI of the lumbosacral region showed an extradural hematoma at the L5-S1 level (Figures 1a, 1b; 2a, 2b). After detailed discussion with the patient and his physicians we planned for surgical decompression and evacuation of the hematoma. He had an episode of hematuria after starting bridge therapy with heparin and his INR was still high. Hence, the patient was counseled in favor of conservative treatment as there was considerable risk of anesthesia and bleeding from surgery.
DISCUSSION
Spinal epidural hematoma (SEH) is a rare condition (<1% of spinal canal space-occupying lesions). 1 Common causes of SEH are trauma, coagulopathy, post-surgical hematoma, and vascular malformations such as arteriovenous malformations and vertebral hemangiomas, as well as obstetrical birth injury, lumbar punctures, spinal manipulation (chiropraxy), epidural injections, missile injuries, hypertension and physical exertion. 2,3 After spinal surgery, SEH is more frequent in patients operated on for metastasis. 4 SSEH is defined as blood within the spinal epidural space without a history of trauma.
Cauda equina syndrome is defined as a constellation of symptoms and signs including back pain, radicular pain that can be uni- or bilateral, motor loss, sensory loss, and urinary dysfunction associated with a decrease in perianal sensation. 5 The sources of bleeding in SSEH are hypothesized to be rupture of epidural veins, arteries, or vascular abnormalities. 1 The classical presentation of SSEH is acute onset of severe, often radiating, back pain followed by features of nerve root and/or spinal cord compression, which develop minutes to hours later. There are scattered reports of presentation of massive SSEH in a high-level swimmer and as abdominal pain in a child. 6,7 Magnetic resonance imaging (MRI) is the investigation of choice. Surgical intervention in the form of laminectomy and clot evacuation is the recommended treatment. Preoperative neurological status and the timing of surgery are very important for the final outcome, and results are satisfactory if the patient is operated on within 12-24 hours of onset of symptoms. 8 There are scattered reports of successful conservative management of four cases of SSEH and of a recurrent cervical epidural hematoma spontaneously resolving in a 37-week primigravida. 9,10 Hematoma may resolve spontaneously within four months, and outcomes are reported to be poor without surgical intervention. 9 Non-operative management is thus reserved only for those who are symptomatic but not good surgical candidates. 11 Our patient had multiple co-morbidities. After stopping the oral anticoagulant and starting bridge therapy with heparin, the patient had frank hematuria. Hence, we offered him conservative treatment.
CONCLUSION
SSEH is a rare cause of back pain and permanent neurological deficit. Several causes have been described as possible risk factors for development of SSEH. There are scattered reports of conservative treatment in selective cases. The mainstay of the treatment is surgical. | 2021-09-09T20:48:31.900Z | 2021-07-28T00:00:00.000 | {
"year": 2021,
"sha1": "49e6122b6d480574efe553c33202b2da94d7ad8a",
"oa_license": null,
"oa_url": "https://www.msjonline.org/index.php/ijrms/article/download/9886/6660",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2d8f3fa3016827a8644f2b5e1159d515f8e0b704",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
243871547 | pes2o/s2orc | v3-fos-license | Delivery of User Intentionality between Computer and Wearable for Proximity-based Bilateral Authentication
Recent research has discovered that delivering user intentionality for authentication resolves the random authentication problem in proximity-based authentication. However, existing approaches still have limitations – energy consumption, inaccurate data consistency, and vulnerability to shoulder surfing. To resolve them, this paper proposes a new method for user intent delivery and a new proximity-based bilateral authentication system that adopts it. The proposed system designs a protocol for authentication to reduce energy consumption in a power-constrained wearable, applies a Needleman-Wunsch algorithm to the matching of time values as well, and introduces randomness into the behavior that a user must perform for authentication. We developed a prototype of our authentication system on which a series of experiments was conducted. Experimental results show that the proposed method results in more accurate data consistency than conventional methods for user authentication intent delivery. Eventually, our system reduces the authentication failure rate by 66.7% compared to conventional ones. Keywords—Security; authentication; internet of things; user intentionality; proximity-based authentication; bilateral
I. INTRODUCTION
ID-password authentication has been used in a variety of applications for a long time. While it is simple to use, it requires mental effort to remember IDs and passwords as well as physical effort to input them directly. It is recommended to use different passwords for different IDs. However, in real life people use the same password for multiple IDs because it is easy to remember one password. Once that password is exposed, however, all of the user's accounts can be exposed to security risks.
With the advancement of Internet of Things (IoT) technologies and pervasive computing, recent proximity-based authentication works without requiring either mental or physical effort from users. Proximity-based authentication initiates authentication when a wearable user approaches within a certain distance of the authentication device (say, a computer). We note that once the user is within that distance, authentication automatically proceeds regardless of the user's intention to authenticate. That is, the user undergoes authentication without being aware of it, which is known as the random authentication problem. Moreover, the lack of a way to confirm authentication intent leads to the problem of continuing authentication; an authentication process starts whenever the user passes within a certain distance of the computer, whether or not she wants to authenticate.
A user authentication intent delivery solves these problems. It proceeds with authentication via a user's specific behavior, enabling proximity-based authentication to work with the sensor values in the wearable and the data collected from the computer for this behavior. Conventional methods for user authentication intent delivery use the computer's mouse, calculating acceleration values from the mouse position and distance-traveled values and collecting them together with time values, or use the keyboard, recording which keys are pressed and the times at which they are pressed. The wearable collects acceleration sensor values and the corresponding time values and transmits them to the computer. The computer checks the consistency of these data to determine whether to authenticate.
However, the conventional methods have limitations as follows. First, they require the wearable device to keep running built-in sensors and recording data, which consumes energy faster in the small, power-constrained device. Next, they predefine the type of behavior that a user must perform for authentication and the number of times that the user repeats the behavior, which could be vulnerable to external attackers. Last, the conventional methods do not make use of time values when checking data consistency, which may result in less accurate matching. This paper proposes a new method that delivers user intentionality for authentication and resolves these limitations, and it further proposes a new proximity-based bilateral authentication system that adopts the new method. To address the energy concern, the proposed system designs a new protocol for authentication in which an authentication process is initially detected by the wearable. The system resolves the second limitation by applying randomness to the number of actions; that is, it changes the number each time a user proceeds with authentication. Last, our system enhances the accuracy of data consistency by applying a Needleman-Wunsch algorithm to the matching of time values as well as acceleration sensor data.
A prototype is developed in which we use a Galaxy Watch as the wearable, and experiments are conducted to evaluate the performance of the proposed system. Experimental results show that the proposed system reduces the error in data consistency by 46.6% on average (from 0.3593 to 0.1918). The improved accuracy affects the performance of authentication; our system reduces the authentication failure rate by 66.7% compared to the conventional method. The rest of the paper is organized as follows. Section II reviews two popular authentication methods and their limitations. Section III describes a conventional method for user authentication intent delivery in detail. Section IV proposes a new proximity-based authentication system that delivers the user's intentionality for authentication in an accurate manner. Experiments and performance evaluation of the proposed system are discussed in Section V. The last section concludes the paper.
II. RESEARCH BACKGROUND
This section reviews a widely used authentication method, an ID-Password authentication, and discusses limitations. It also describes a proximity-based authentication method that can resolve the limitations.
A. ID-Password Authentication
The most popular user authentication method has been ID-password authentication [1]. The user enters their own ID and password, and the system determines whether these data match the values stored in the database. Recently, authentication security has been strengthened through secondary authentication. The security of authentication is increasing with the addition of various secondary authentication methods on top of the existing ID-password method [2], such as sending a message to a smartphone and entering the additional code from the message [3], or user verification methods using specific applications.
Limitations: The ID-Password authentication method has limitations; it requires mental and physical effort from users. The mental effort could be reduced if the passwords and IDs of all accounts were unified, but if that password and ID are exposed, all accounts may be at risk. Conversely, if all IDs and passwords are set differently, too much mental effort is needed because one has to remember them all. As for physical effort, the process of entering an ID and password requires more effort than expected because it uses a mouse and keyboard to enter characters. Recently, additional authentication methods using secondary passwords and QR codes [5] have been adopted, using smartphones [4] to increase security. Such methods are certainly highly reliable in terms of security, but there is the hassle of unlocking a smartphone, using an application, or checking a message and entering it back into the computer.
B. Proximity-based Authentication
Proximity-based authentication is a technology that logs users in or out of applications, devices, websites, etc. using the distance between users and authentication devices as a key value [6]. To succeed in authentication, it is necessary to have an auxiliary device such as a smartwatch or wearable near the device that the user wants to authenticate with. Proximity-based authentication automatically initiates authentication when a user approaches within a certain distance of the authentication device. At this time, authentication devices and users use wireless communications such as Bluetooth and Wi-Fi. The proximity-based authentication system is shown in Fig. 1. A computer and a user proceed with authentication by exchanging authentication tokens with each other [7] when the user is within a certain distance of the server. Limitations: Proximity-based authentication automatically begins a process when a user approaches within a certain distance of the authentication device. The authentication that occurs at this time proceeds regardless of the user's intention to authenticate. In other words, authentication proceeds even if a user approaches a device within a certain distance without any intention of authenticating, so the user proceeds with authentication that he or she is unaware of [8]. Authentication fails if the user moves outside the effective distance for authentication while the authentication is in progress. This is an important aspect of the authentication process that is unrelated to the user's intentions described earlier. Authentication is attempted whenever the user passes within the effective distance of the authentication device, and cycles of authentication attempts and authentication failures [9] are repeated because authentication fails whenever the user moves outside the effective distance. If this process is repeated, certain devices lock down authentication and cause problems that prevent users from authenticating when they actually want to. The following section describes in detail how users' authentication intent can be communicated in proximity-based authentication.
III. DELIVERY OF USER INTENTIONALITY FOR AUTHENTICATION: CONVENTIONAL APPROACH
In proximity-based authentication, a user authentication intent delivery allows accurate authentication to proceed by delivering authentication intention from assistive devices (e.g., wearables and smartwatches) or authentication devices (e.g., computers). Two typical technologies for user authentication intent delivery include wristband-based authentication for desktop computers (SAW) [10] and proximity-based user authentication on voice-powered Internet-of-Things devices (PIANO) [11]. This section describes a conventional approach that accurately conveys users' authentication intent in proximity-based authentication. Since our scenario sees authentication between a wearable and a computer, a review in this section is mainly based on the former.
A. User Authentication Intent Delivery
A user authentication intent delivery is generally based on near-field based authentication, where wearable users and computers are paired over wireless communication within a certain distance; the user then double-clicks a specific button on the computer keyboard to confirm the user's intention to authenticate. Afterwards, values for a user's specific behavior are collected from the wearable and the computer, and authentication is carried out by matching these data [12].
1) System architecture: After wearing a wearable device, the user double-taps the computer's keyboard to convey the authentication intent to the computer. When the computer confirms the authentication intent, it requests data from the acceleration sensor of the wearable. The user takes certain actions using a mouse or keyboard. On the wearable device, the acceleration sensor value and the gravitational sensor value are calculated, while a specific program on the computer calculates values for the mouse and keyboard movement; the wearable's values are sent to the server. The transmission is carried out over wireless connections such as Bluetooth and Wi-Fi. The sensor values of the wearable device sent to the server are checked for a match against the values measured with the mouse or keyboard, and user authentication is completed on the computer with an authentication completion message. Measurement is requested again if authentication fails.
Unlike traditional proximity-based authentication, the user authentication intent delivery conveys the user's authentication intent, which allows the user to approach within a certain distance, verify the authentication intent, and authenticate. Differences from proximity-based authentication methods are shown in Table Ⅰ [13].
2) Operation:
In a conventional method for user authentication intent delivery, a computer and a wearable are paired via Bluetooth communication [14]. After pairing, authentication starts by pressing a specific key on the computer's keyboard twice, and the computer requests acceleration sensor values and time data from the wearable. The wearable transmits the requested acceleration sensor data [15] to the computer. When the data is transferred, the computer checks data consistency; it successfully authenticates upon successful data matching or fails to authenticate upon data matching failure. In the event of a data matching failure, the sensor data transfer is requested again. If the data is not sent within 5 seconds of pairing, the pairing is canceled and then requested again.
3) Detection of user intent on computer: The computer uses data values for the wrist movements of the wearable user to check consistency with the acceleration sensor data of the wearable [16]. To collect data on the user's wrist movement from a computer, conventional methods use a mouse-wiggle [17] and/or a TAP [18]. Fig. 2 illustrates these two ways.
TABLE I. COMPARISON BETWEEN PROXIMITY-BASED AUTHENTICATION AND USER AUTHENTICATION INTENT DELIVERY (summarized): proximity-based authentication operates the authentication system whenever the user approaches within a certain distance, but its weakness is that it keeps operating within that distance without identifying the user's authentication intent; user authentication intent delivery complements the absence of an intent verification step in traditional proximity-based authentication by identifying the intent through specific actions.
When authentication begins, a user in the mouse-wiggle method moves the mouse from side to side on the screen, which is recorded. The computer measures the mouse's position values and time values, with which it computes acceleration values. Finally, the acceleration value is calculated from the distance traveled and the elapsed time obtained previously. The TAP method makes use of the time value when the keyboard was pressed a certain number of times and the difference between the current time value and the time value when the keyboard was first pressed. This value indicates the time during which the user's hand presses and releases the keyboard. The computer uses this time value to verify that the highest point of the acceleration sensor values in the wearable matches the recorded time.
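As a concrete illustration of the mouse-wiggle computation, the following Python sketch derives acceleration values from recorded mouse positions and timestamps. It is not the authors' implementation; the function name, the data layout, and the finite-difference approach are assumptions made for illustration only.

```python
import math

def mouse_acceleration(samples):
    """Estimate acceleration values from mouse (x, y, timestamp) samples.

    samples: list of (x_pixels, y_pixels, t_seconds) tuples in time order.
    Returns a list of (timestamp, acceleration) pairs.
    """
    # Speed between consecutive samples: distance traveled divided by elapsed time.
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append((t1, math.hypot(x1 - x0, y1 - y0) / dt))

    # Acceleration: change of speed divided by elapsed time.
    accelerations = []
    for (t0, v0), (t1, v1) in zip(speeds, speeds[1:]):
        dt = t1 - t0
        if dt > 0:
            accelerations.append((t1, (v1 - v0) / dt))
    return accelerations
```

The resulting (timestamp, acceleration) series is what would later be compared against the wearable's accelerometer readings.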
4) Detection of user intent on wearable:
After pairing with a computer, a wearable is asked to send sensor data. When a user performs a mouse-wiggle and keyboard-TAP method, the wearable transmits the value of the acceleration sensor of the device itself to the computer [19].
B. Limitations of User Authentication Intent Delivery
A delivery of explicit user intentionality for authentication solves the random authentication problem [20] that can occur in proximity-based authentication methods, thus enabling more accurate authentication. However, conventional methods for user authentication intent delivery have limitations that hinder optimal authentication operations. First, the methods make the computer detect the user's intent for authentication initially. Once the intent is detected, the computer tries to communicate with the wearable device on the user side to obtain sensor records. This implies that the device remains in a ready state all the time, which makes it hard to save energy on the device. Next, conventional methods use fixed forms of user behaviors as intent. For instance, the TAP-5X method pushes a computer's keyboard five times to collect data for matching between computers and wearables. Because TAP-5X requires the user to perform exactly five actions, it is possible for an external attacker to observe and analyze the user's behavior and authenticate by taking the same action [24]. Wearable devices may not be problematic because they are usually worn by the user, but if the user is away for a while or if the wearable is stolen, the computer cannot verify whether the wearer is the legitimate user or an attacker, making TAP-5X vulnerable to external attackers. Last, conventional methods prioritize the matching of acceleration sensor values when matching data between computers and wearables for authentication. The data obtained from computers and wearables include time values and acceleration sensor values. A Needleman-Wunsch algorithm [21] has been used to match the acceleration sensor values, increasing the accuracy of the acceleration sensor data values. However, no matching algorithm was used for the time values, even though time values have been one of the most significant metrics in previous research on user authentication intent delivery [22,23]. This can cause problems in the process of checking data matching.
IV. PROPOSED SYSTEM: USER-INTENDED PROXIMITY AUTHENTICATION
This section proposes a new proximity-based authentication system that explicitly delivers user intentionality for authentication. To resolve the first limitation, an authentication process in the proposed system is initially detected on the user side instead of on the computer. To this end, the user is required to touch her wearable twice first. The next limitation is resolved by applying a one-time-password concept that sets a new criterion for the number of actions that a user must perform each time he or she proceeds with authentication. To resolve the last limitation, our system improves accuracy in the data matching process by applying the Needleman-Wunsch algorithm to time values as well.
A. System Architecture
The proposed system consists of two entities, a user and a computer, as shown in Fig. 3. The user wears a wearable that is paired with the computer via Bluetooth communication. The user performs an intent action, and her behavior is detected and processed both on the wearable and on the computer. Fig. 4 helps us describe how accurately the system measures data on the behavior and determines whether the measured values on both sides match.
The computer in Fig. 4 opens a Bluetooth server and connects it to Intelligence Unit, which calculates data matching, and Action Detector, which is responsible for detecting the movement of the keyboard and mouse on the computer and for measuring the corresponding data. The wearable uses the Bluetooth server on the computer and a Bluetooth socket to make a UUID connection. Sensor Manager keeps monitoring values from built-in sensors. Once an authentication process is triggered, Sensor Manager transmits the sensor records to the computer via the Bluetooth communication. At the same time, Action Detector measures the movement of the computer's mouse and keyboard and records the related data. Upon collecting data from both Sensor Manager and Action Detector, Intelligence Unit checks the match between the two data sets. Authentication is completed if the data match, or retransmission is requested if authentication fails due to a data mismatch.
B. Protocol
The proposed system designs a protocol for authentication between the computer and the wearable, and Fig. 5 depicts flows and processes of the protocol.
The computer makes a pairing request to the wearable by transmitting its UUID. The wearable checks the validity of the UUID received and transmits its own UUID to the computer to proceed further with pairing. Once they are paired, a user is allowed to request authentication to the computer.
An authentication process in the proposed system is initially detected by the wearable unlike conventional methods for user authentication intent delivery. The user double-touches the wearable to communicate her authentication intent to the computer. Then, authentication begins when the computer confirms the intent. The computer requests the wearable to transmit sensor records. At the same time, it generates a random number (a nonce) and piggybacks the nonce on the request message.
Upon receiving the message, the user takes an action; she moves a mouse or presses a key on a keyboard connected to the computer. The nonce received guides her behavior; that is, she repeats the movement or the key press a number of times corresponding to the nonce value. While the user performs the action, Sensor Manager in the wearable records acceleration values and corresponding time values and transmits them to the computer. After sending the request message to the wearable, Action Detector in the computer starts collecting data on mouse movement (changes of the pointer's positions and corresponding timestamps) and/or keyboard data (numbers of key presses and timestamps of pressing and releasing). Intelligence Unit in the computer collects data from both Sensor Manager and Action Detector. These data are supposed to represent the user's behavior for authentication intent in common but are recorded by two different devices. It then checks the consistency of the data using the Needleman-Wunsch algorithm. If the two data sets match, the user is authenticated. If they do not match or the collected data is not received, the computer retransmits the request message. If there is no response within the timeout period, authentication fails.
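For illustration, the computer-side message flow described above can be sketched in Python as follows. This is a minimal outline under stated assumptions: the socket-like primitives (`wait_for_intent`, `send`, `receive`), the `record` method of Action Detector, the `consistent` matcher, and the 5-second timeout are hypothetical placeholders rather than the paper's actual implementation; the nonce ranges are taken from Section IV.D.

```python
import random

TIMEOUT_S = 5  # assumed timeout value for illustration

def authenticate(bt_socket, action_detector, matcher):
    """Computer-side sketch: confirm intent, issue nonces, check data consistency."""
    # 1. Wait for the wearable to signal intent (the user double-touches the watch).
    if not bt_socket.wait_for_intent(timeout=TIMEOUT_S):
        return False

    # 2. Generate per-session nonces that set how many times the user must repeat
    #    the mouse and keyboard actions, and request sensor records from the wearable.
    mouse_nonce = random.randint(3, 5)
    key_nonce = random.randint(3, 7)
    bt_socket.send({"type": "request_sensors",
                    "mouse_nonce": mouse_nonce,
                    "key_nonce": key_nonce})

    # 3. Collect both views of the same behavior.
    computer_data = action_detector.record(mouse_nonce, key_nonce)
    wearable_data = bt_socket.receive(timeout=TIMEOUT_S)
    if wearable_data is None:
        return False  # no response within the timeout: authentication fails

    # 4. Check consistency of acceleration values and time values.
    if matcher.consistent(computer_data, wearable_data):
        bt_socket.send({"type": "auth_ok"})
        return True
    bt_socket.send({"type": "retransmit"})  # mismatch: ask the wearable to resend
    return False
```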
C. Accuracy of Data Consistency
Conventional methods for user authentication intent delivery, like our system, apply the Needleman-Wunsch algorithm to the data consistency check; it matches data from the user-worn wearable with data measured on the computer. It is observed that those methods use the matching algorithm only to determine the matching of sensor values and do not consider the corresponding time values. However, time values in authentication in general have played an important role, especially when computing data consistency and data matching [25]. Matching of sensor data may still result in failure of authentication unless the corresponding time values are matched. To resolve this limitation, this paper proposes applying the Needleman-Wunsch algorithm to the matching of time values as well as acceleration sensor data, enhancing the accuracy of data consistency.
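The Needleman-Wunsch algorithm is a global sequence-alignment method; the sketch below shows one way it could be applied to two numeric sequences (for example, the acceleration or time values recorded by the computer and by the wearable). The scoring scheme, gap penalty, and match tolerance are illustrative assumptions, not the parameters actually used in the paper.

```python
def needleman_wunsch(seq_a, seq_b, gap=-1.0, tol=0.05):
    """Globally align two numeric sequences and return the alignment score.

    Two values count as a match when they differ by less than `tol`;
    a higher final score indicates more consistent sequences.
    """
    n, m = len(seq_a), len(seq_b)
    # score[i][j] = best score aligning seq_a[:i] with seq_b[:j]
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            similarity = 1.0 if abs(seq_a[i - 1] - seq_b[j - 1]) < tol else -1.0
            score[i][j] = max(score[i - 1][j - 1] + similarity,  # match / mismatch
                              score[i - 1][j] + gap,             # gap in seq_b
                              score[i][j - 1] + gap)             # gap in seq_a
    return score[n][m]
```

In the proposed system this alignment would be run twice per authentication attempt: once over the acceleration values and once over the corresponding time values.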
D. Randomness of user behavior
In a behavior-based authentication, data consistency is verified using data obtained while a user takes an action. Conventional methods for user authentication intent delivery define fixed criteria for these specific behaviors [26]. Such a criterion is an essential part of the user's behavior but can be abused for attacks. An external attacker can observe the behavior that a user performs when authenticating to a computer and then mimic the same action with a wearable device to carry out an authentication attack [27]. The criteria for users' behavior play the same role as fixed ID-password values in conventional authentication. Thus, if the data is leaked, the constant pattern can be analyzed and exploited for an authentication attack using fake data.
To tackle this challenge, the proposed system adopts the concept of OTP from the existing ID-Password authentication method [29]. The computer in our system sends a random number together with its request for authentication data to the wearable [28]; this changes the required number of repetitions of the user's specific behavior for each authentication. More technically, the computer sends two random values that are applied separately to the mouse and keyboard actions. To reduce the time required for authentication as much as possible, the random number for the mouse is between 3 and 5 and that for the keyboard is between 3 and 7, which is based on the numbers used in existing authentication methods. This allows users to defend against authentication attacks because each authentication requires a different number of repetitions of the behaviors. Even if external attackers observe and analyze users' behavior, the observed behaviors are inconsistent across authentications. We note that an optimal range of random values could be an interesting topic for further research.
V. DEVELOPMENT AND PERFORMANCE EVALUATION
This section develops a prototype of the proposed system, runs experiments, and evaluates its performance. To assess accuracy of data consistency, we compare error values both in a traditional (conventional) method reviewed in Section III and in the proposed system that applies the matching algorithm to the time values additionally. Regarding randomness of user behavior, we measure data on how much user behaviors are matched when using randomized criteria. The initial result is then used to see whether an external attacker is authenticated when he imitates a particular behavior in both systems.
A. Development
The test environment was modeled on the way a user performs authentication at a personal computer, and the wearable was worn on the user's right wrist. We use a computer running Linux Ubuntu 16.04.04 LTS on a system with an Intel® Core™ i5-9600KF CPU @ 3.70GHz and 16GB of memory. When recording mouse movements, the computer calculates the acceleration value using the position change value and the time change value [30]. When recording keyboard input, the time when a key is pressed and the time when it is released are recorded. We use a Galaxy Watch (smartwatch) as the wearable, and Table II shows its technical specification. The smartwatch samples the accelerometer sensor at 200 Hz and transmits the data to the Bluetooth server in real time.
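On the wearable side, the 200 Hz accelerometer stream has to be reduced to one acceleration value per sample before transmission and matching. One plausible way to do this, shown below, is to take the magnitude of the three-axis reading and subtract standard gravity; this formula and the data layout are assumptions for illustration, since the paper does not spell out the exact computation.

```python
import math

STANDARD_GRAVITY = 9.81  # m/s^2

def wrist_acceleration(samples_200hz):
    """Convert raw (ax, ay, az, t) accelerometer samples into (t, value) pairs.

    The magnitude of the three-axis vector is taken and gravity is subtracted,
    so a resting wrist yields values near zero.
    """
    out = []
    for ax, ay, az, t in samples_200hz:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        out.append((t, abs(magnitude - STANDARD_GRAVITY)))
    return out
```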
B. Experiments on Traditional Method
For the traditional methods, we use a mouse and keyboard to collect data about a user's specific behavior. Participants wear a smartwatch on their right wrist to conduct the experiments. In the smartwatch, the acceleration sensor is used to collect acceleration values for wrist movements, and on the computer, the mouse and keyboard are selected sequentially by Python-based programs. When the mouse is selected, the user moves the mouse from side to side on the screen to collect the mouse's location data and time data and calculate the acceleration. Once the keyboard is selected, we collect the time when a keyboard button is pressed and the time when it is released [31]. The computer uses the matching algorithm to determine the data match for the acceleration values of the collected data.
For accurate verification of the consistency of the data collected via the mouse-keyboard method from the participants, the data values are represented graphically to confirm the consistency of the values directly. This section compares the two graphs with the largest error in the experimental results as representatives, and the overall experimental results are tabulated. Fig. 6 shows the mouse experiment results of participant 1. The x-axis of the graph is the time in seconds, and the y-axis is the acceleration value. The graphs of the acceleration values on the computer and on the smartwatch look similar at the peaks, but at the last peak the computer shows an acceleration value of 0.47264 while the smartwatch shows 0.168671, with an error value of 0.303969. In the SAW paper, the criterion for the error value of the data is 0.3. The existing system therefore determines that authentication failed for this participant. Fig. 7 shows the results of the keyboard experiment for the same participant. In the figure, the part marked with red squares is the interval between the key being pressed and released. The data consistency check for the keyboard first determines whether the acceleration value on the smartwatch is at a peak point at the time the keyboard is pressed on the computer. It then checks the pressed and released times and determines a match if there are no other peak points within this time interval. However, the graph shows that although the value is at a peak at the time the keyboard was pressed on the computer for the second time, exact data matching was not achieved because another peak point appeared in the red square section, which represents the interval during which the keyboard was pressed and released. For the first experimental participant, neither the mouse nor the keyboard data matched. Fig. 8 shows a graph of the mouse results of participant 6. The graph of the mouse results of the sixth experimental participant shows a considerably more similar shape and better-matching values between the computer and the smartwatch than the graph of the first participant. However, the maximum value on the computer at 2.5 seconds in the graph is 1.62745, and the corresponding value for the smartwatch is 1.10937. In this experiment, the error value at the peak point has a maximum of 0.51808. This value is also determined to be an authentication failure because it exceeds the error tolerance value of 0.3 in [10]. Fig. 9 shows the results of the same participant's keyboard experiment. By checking the graph, the sixth experimental participant achieved data matching in the keyboard experiment because each time the keyboard was pressed on the computer corresponded to a peak of the smartwatch's acceleration value and no other peak value was included in the red square sections. Table III summarizes the mouse experiment results of a total of eight experimental participants in the traditional method. Five of the eight participants in the experiment are close to the tolerance value of 0.3, of which three are within the tolerance value, with one slightly exceeding it. Table IV shows whether the eight participants were authenticated based on the results of the keyboard experiment. There were five successful participants in the experiment and three unsuccessful participants. Authentication via the keyboard rather than the mouse is the main method.
All participants matched the time the keyboard was pressed on the computer with a peak point of the acceleration value on the smartwatch, but three failed to authenticate because another peak value was included in the interval between the key press and release. These results confirm the problems that arise when the matching algorithm is not applied to time values during the authentication process in the user authentication intent delivery method.
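Put into code form, the two acceptance criteria used in these experiments might look like the sketch below. The 0.3 tolerance comes from the text; the function names, the use of the global peak for the mouse check, and the exactly-one-peak rule for the keyboard check are simplifying assumptions for illustration.

```python
def mouse_consistent(computer_acc, watch_acc, tolerance=0.3):
    """Accept if the peak acceleration values from the two devices differ by less than tolerance."""
    return abs(max(computer_acc) - max(watch_acc)) < tolerance

def keyboard_consistent(press_intervals, watch_peak_times):
    """Accept if each press-release interval contains exactly one accelerometer peak.

    press_intervals: list of (t_press, t_release) pairs recorded on the computer.
    watch_peak_times: timestamps of peaks in the smartwatch acceleration signal.
    """
    for t_press, t_release in press_intervals:
        peaks_inside = [t for t in watch_peak_times if t_press <= t <= t_release]
        if len(peaks_inside) != 1:
            return False  # missing peak, or an extra peak inside the interval
    return True
```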
C. Experiments on Proposed System
Unlike the traditional methods, the smartwatch in the proposed system detects the user's authentication intent and sends it to the computer, which verifies this intent before proceeding with authentication. In this case, random values are transferred to the smartwatch as well to provide the criterion for the specific behavior the user must perform for authentication, and the experimental participants perform the specific behavior with the mouse and keyboard according to this number.
With these data values, the computer applies the Needleman-Wunsch algorithm to verify the consistency of the data, and additionally applies the algorithm to the time values to perform authentication with more accurate data matching. The experiments conducted by the participants are the same as the previous experiments; the only new part is that authentication is initiated by touching the smartwatch twice. As in the previous experiments, graphs are shown for the participant with the largest error value and the participant with the smallest error value, and the results for all experiments are tabulated. Fig. 10 shows a graph of the acceleration sensor values of participant 5. The random number generated for the mouse experiment was 4, and the user moved the mouse left and right for 4 seconds while acceleration sensor data and time data were measured on the smartwatch and acceleration and time values were measured from the computer's mouse movement. In the graph, the curves for the acceleration values of the computer and the smartwatch are not completely identical, but we can confirm that the two data sets are much more consistent than those of the conventional methods. A peak value of 0.33762 was recorded on the computer at 1.2 seconds, while the smartwatch recorded a value of 0.25983. The error between the two values is 0.07779, significantly lower than the 0.3 error tolerance used in the existing user authentication scheme. Fig. 11 shows the results of the keyboard experiment of the same participant. Here, the displayed random number was 5, the same as in the existing experiments, and the user pressed the keyboard five times.
In the graph above, the time when the keyboard was pressed on the computer coincides with the peak point of the acceleration value of the smartwatch, and the red section marking the interval between the key press and release does not include any other peak points. In the case of the keyboard experiments, there was no problem with data matching for any participant, unlike in the experiments on the conventional methods. In other words, authentication was carried out by applying the Needleman-Wunsch algorithm to the time values, matching the interval in which the keyboard was pressed and released to the time at which the peak point of the smartwatch was recorded. The results for all participants in the proposed scheme are presented in Table V. There were seven participants whose error values were within the 0.3 threshold, and only one failed to authenticate by exceeding the error value. This shows higher accuracy than the data matching experiments on the previous existing method, in which three people succeeded and five failed. This is the result of applying the Needleman-Wunsch algorithm to the time values, which makes the comparison between the computer and smartwatch data more accurate. The proposed method shows a lower error value than that seen in the experiments on the existing method and is reliably successful in data matching, enabling authentication.
D. Performance of Data Consistency
We have confirmed the experimental results of the existing and proposed methods in the previous subsections. In mouse authentication, the existing method succeeds for 3 out of 8 participants and fails for 5, while the proposed method fails for only 1 out of 8 participants. Furthermore, the error values of the proposed scheme are also significantly lower, confirming that it is a more suitable method for data matching. This can be seen in the graph in Fig. 13. In the graph, the maximum error value in the existing scheme is 0.55425 and the minimum value is 0.21989, while the maximum error value in the proposed scheme is 0.4517, with a minimum value of 0.04491.
The mean error value is 0.359304 for the existing method and a significantly lower 0.191796 for the proposed method; since the data error value for successful authentication must be less than 0.3, it is appropriate to use the proposed method, which focuses on matching time values, over the existing method.
The keyboard experiments also showed five successes and three failures using the conventional methods, whereas the proposed method showed seven successes and one failure. However, because the proposed method can only authenticate when both the mouse and keyboard data match, one participant failed to authenticate owing to a mismatch in the mouse values. Comparing only the keyboard values, all eight showed accurate data matching.
E. Comparison of Behavioral Matching with Randomness
Traditional methods operate by setting criteria for specific behaviors of users. For instance, SAW in [10] used the TAP-5X, a five-press keyboard method. However, as previously stated, such fixed-criterion behaviors can allow an external attacker to observe the user's movements and take the same action to make an authentication attack [32][33][34]. In this paper, random numbers are transferred from the computer to the smartwatch so that the mouse-keyboard behavior is performed a different number of times per authentication, rather than using a criterion set to a particular number of actions. To confirm this, eight participants observed other people's experiments and examined whether they could perform the same behavior.
In the experiments with the existing methods, all eight participants answered that any user could do the same because the same authentication method of mouse movement and five-tap keyboard input was used. However, the experimental participants failed to take the same action with the proposed authentication method because different random numbers were applied to the mouse-keyboard behavior. This shows that authentication methods with randomness have an advantage against attacks based on observation by external attackers, compared to methods with a fixed standard for the number of actions required for authentication.
VI. CONCLUSION
This paper proposed a new proximity-based authentication system that delivers a user's intentionality for authentication in an accurate manner. Conventional methods for user authentication intent delivery solve the random authentication problem that can occur in proximity-based authentication. However, they still have limitations: (i) a wearable device may consume energy much faster, (ii) conventional methods proceed based on a number of actions fixed for a specific authentication behavior, which could be vulnerable to external attackers, and (iii) the methods do not match time values, which results in a less accurate data consistency process.
To overcome these limitations, the proposed system designs a new protocol for authentication in which an authentication process is initially detected on the user side instead of on the computer. The system adopts randomness that changes the number of actions a user should perform each time she proceeds with authentication. It increases the accuracy of data matching by applying a Needleman-Wunsch algorithm to time values when verifying data consistency. Experimental results showed that authentication succeeded 5 times and failed 3 times with conventional methods, whereas the proposed system showed 7 successes and 1 failure. Results of the mouse experiments showed that the maximum error value in the conventional methods was 0.55425 and the minimum value was 0.21989, while the proposed system showed a maximum of 0.4517 and a minimum of 0.04491, which is much lower.
A. Discussion
Verifying user intentionality is one of the most important goals in the authentication process. In traditional patterns of authentication interaction (human-machine, human-human, and machine-human authentication), human beings have been involved directly in authentication and have delivered authentication intent explicitly [35]. Examples include password-based methods and biometric-based methods. By touching a finger scanner, a user presents her intent for authentication in a fingerprint authentication. With the increasing development of IoT technologies and pervasive computing, however, a new pattern of machine-machine authentication is becoming popular [22,[36][37]. For instance, a user carrying a wireless authentication token approaches a target computer that authenticates the user whenever the token is within a certain distance. In such a new pattern of proximity-based authentication, the user intentionality is often omitted or not verified explicitly.
Delivering the intent and verifying it on both sides of the authentication entities may delay processing and degrade the convenience of machine-machine authentication [38]. That is, a new authentication method sits in a trade-off between accurate verification and user convenience. The proposed authentication system is oriented toward high accuracy and a high protection level. It diminishes risks from external attackers by randomizing user behaviors in authentication, increases the accuracy of the data consistency process by handling time values, and takes care of the energy consumption of a power-constrained IoT device by designing a new authentication protocol.
The proposed system may not provide an excellent benefit in terms of user convenience. Our authentication may be perceived as an interruptive step in a user's normal workflow. That is, a user should explicitly start authentication after stopping what she is doing. Once authentication is done, she returns to the normal work she was doing before authentication. A future work may include the development of an advanced authentication that blends seamlessly into users' workflow. One possible approach is to make use of the workflow itself for authentication [26]. It would be optimal if she were being authenticated while doing her work; that is, if the seam between authentication and workflow were blurred.
Delivery of users' authentication intent is expected to enable faster and safer authentication through user behavior analysis if machine learning, which has recently been utilized in various fields, is applied. Furthermore, as the demand for wearable devices such as smartwatches is increasing, further research is required to analyze user behavior patterns in more detail and to quickly authenticate based on them. | 2021-11-10T16:21:50.257Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "bfa176f83f14f4476fda8ff3dafeb1af6a3654a4",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume12No10/Paper_85-Delivery_of_User_Intentionality_between_Computer_and_Wearable.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2573835d974921da82b0e4cdc726b6e5fd832ce4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
257291108 | pes2o/s2orc | v3-fos-license | Art, Affect, and Social Media in the ‘No Dakota Access Pipeline’ Movement
Indigenous-led activism against proposed oil pipelines has relied heavily on social media, as in the #NoDAPL campaign against the Dakota Access Pipeline. This paper explores affective engagement in online activism, including the Standing Rock ‘check-in’ campaign on Facebook. Moving beyond dichotomous understandings of embodied vs digital activism, Cannupa Hanska Luger’s Mirror Shields Project employs digital media in order to support direct action at Standing Rock. Patricia Clough draws a direct link between affect and technoscientific understandings of the body in her concept of biomediated bodies. This helps explain how physical and digital activism are linked: the digital and the physical cannot be understood as independent of each other, since online engagement always has an embodied aspect as well. Luger’s Mirror Shields Project functions as a form of alterlife, in resistance to biopower, recognizing historical and ongoing harms while also creating new possibilities for resistance.
Introduction
In November 2016, over a million people 'checked in' to the Standing Rock Sioux reservation on Facebook in order to help protect activists protesting the construction of the Dakota Access pipeline. A viral post of unknown origin claimed this would shield protestors from police surveillance. However, the Morton County sheriff's department denied that it was monitoring Facebook check-ins and, while protestors had shared a variety of possible support actions, checking in on Facebook was not among them. Despite not being of practical use, according to protestors it was a welcome show of solidarity, though it would have been better to see people show up and provide direct support. Largely ignored by the media until this point, the Facebook check-ins prompted a dramatic increase in coverage of the protests, while also demonstrating that protestors and supporters could communicate directly with each other. This virtual action demonstrates some of the potential and the pitfalls associated with affect-driven digital activism.
The $3.8 billion, 1,172-mile (1,886 kilometer) Dakota Access pipeline crosses beneath the Missouri River, just north of the Standing Rock Sioux Reservation that straddles the North Dakota-South Dakota border. Running from North Dakota to Illinois with a capacity of 570,000 barrels of oil a day, it is instrumental to the growing oil industry in North Dakota and neighboring states. It was subject to months of protests in 2016 and 2017 since the pipeline's route threatened the water supply and important Indigenous cultural sites and ancient burial grounds. Physical protests halted when the pipeline began carrying oil in June 2017. The protests at Standing Rock Sioux Reservation have included members of more than 100 tribes and are considered to be one of the largest Native American protests in recent times. Despite successive legal challenges by the Standing Rock Sioux Tribe claiming that the environmental review conducted by the US Army Corps of Engineers was insufficient, the Dakota Access pipeline was built and is currently operating. However, a federal appeals court has upheld the order of a full environmental impact review of the Dakota Access pipeline. This will not require the pipeline to stop operating.
Indigenous-led activism against proposed oil pipelines has relied heavily on social media, particularly Twitter, as in the #NoDAPL campaign. Resistance to the building of the Dakota Access Pipeline also relied on affective response: a sense of shared nationhood, forged through histories of anticolonial activism, as well as a more-than-human sense of community and responsibility. Oceti Sakowin (all the Lakota and Dakota nations allying together) arose through shared histories of colonization dating back to the 19th century and the dispossession of land through environmental devastation couched in terms of economic development, including the building of a dam and flooding of Indigenous lands. This resistance was both centered in place - Mni Sose (the Missouri River) had long been the center of an 'Indian war that never ends' - and also deeply global and technologized, since through social media and technology 'the world had come to #NoDAPL' (Estes, 2019: 7, 10). The prolonged protest garnered widespread and consistent attention on social media, amplifying possibilities for similar protests elsewhere (Steinman, 2019). #NoDAPL tactics - resistance camps, prominent use of social media, and online fundraising - have been taken up against other pipeline projects, such as the blockades against the Coastal GasLink in northern British Columbia led by the Wet'suwet'en hereditary chiefs (#WetsuwetenStrong), which sparked cross-Canada blockades and demonstrations.
This paper explores the relationship between the virtual realm and the offline world, considering how affective engagement in activism is shaped by social media. Communications technologies do not merely transmit pre-existing information and affect; they play an active role in forming what they ostensibly merely express (Blackman, 2019; Derrida, 1998). In the age of COVID-19 and Black Lives Matter protests that involved both extensive online activism as well as mass protests in the streets, it is even more urgent to understand the relationships between embodied standing-with and digital activism. The challenges for activism are clear: when proximity with other humans is a direct threat due to a global pandemic, with a Canadian politician openly asserting that this is a great time to build pipelines because of restrictions on public protests (Bracken, 2020), digital activism takes on a renewed importance. How does commitment and empathy arise, circulate, and develop in and through physical and digital protests?
This paper draws on a variety of non-Indigenous as well as Indigenous theorists. As a white settler reading Indigenous activism through different theoretical lenses, I begin from Million's (2014) assertion of the worthiness of Indigenous lives as the 'stuff of theory' and the value of affective experiences or 'felt knowledge' of Indigenous peoples independent of legitimation by Western academia, which is inextricably linked to settler colonialism (p. 32). Rejecting binary oppositions of Indigenous and modern/technological (Nelson, 1999), I draw from Native Studies scholars who argue for conversations between Native Studies and other fields, and the 'intellectual sovereignty' (Warrior, 1992) of Indigenous scholars and activists: both being informed by other disciplines and shaping scholarly discourse as a whole (Morgensen, 2011; Smith, 2010).
Social media activism such as checking in on Facebook is limited in its impact, since it is inseparable from biopolitical modes of surveillance and limitations on Indigenous access to cyberspace (Duarte, 2017). Technology has been used to surveil and limit Indigenous-led activism: law enforcement agencies have access to a wide range of tools to monitor geo-tagged posts on social media, such as Geofeedia (Levin and Woolf, 2016; Waddell, 2016). Patricia Clough's concept of biomediated bodies is helpful in understanding how the repressive effects of biopolitical racism are deployed through technologies of the body, which also simultaneously give rise to potential resistance. Clough (2008) argues that affect is at work in biopower.
Practices of 'standing-with' (TallBear, 2014) are hampered if activism is understood as purely digital: Indigenous people gathering in the Oceti Sakowin camp at Standing Rock were essential to the protest actions. The action of 'standing' is absolutely embodied, involving a powerful connection to the land, in its specificities. There have, however, been important critiques of 'standing' as ableist, including Peers and Eales (2017). As well, the collective 'standing with' at Standing Rock is enabled and promoted through forms of technology - social media in particular. The virtual and the physical cannot be understood as independent of each other, since online engagement always has an embodied aspect as well (Karatzogianni, 2012b: 57). In order to avoid false dichotomies of 'real' (embodied) vs virtual activism, I follow Judith Butler's argument that 'What bodies are doing on the street when they are demonstrating is linked fundamentally to what communication devices and technologies are doing when they "report" on what is happening in the street. These are different actions, but they both require the body' (Butler, 2015: 93-4).
Moving beyond dichotomous understandings of bodily vs digital activism, I turn to a discussion of the art/activism of Luger's (2016) Mirror Shields Project. Luger is a Mandan/Hidatsa/Arikara/Lakota multidisciplinary artist who was born on the Standing Rock reservation. He employs digital media in this project in order to support direct action, challenging conventional divisions between the virtual and the embodied. The Mirror Shields Project is a form of what Michelle Murphy (2018) calls alterlives, in resistance to biopower, both recognizing historical and ongoing harms while also envisioning new futurities and possibilities for resistance and transformation.
Biomediated Bodies and 'Standing-with': Biopower, Media, Affect
The 'affective turn' (Clough, 2007, 2008) moves away from dichotomies of mind/body and passion/reason, away from subjective states of emotion contained within discrete subjects, exploring instead how emotions and affect move between bodies, connecting them in 'new configurations of bodies, technology and matter' (Clough and Halley, 2007: 2). Clough (2008) analyzes affect in relation to what she describes as 'the biomediated body' (p. 207), drawing a direct link between affect and technoscientific understandings of the body. Clough views affect as encompassing areas of information, surveillance, and capital, extending Foucaultian biopower to a Deleuzian-inspired thinking about bodies in terms of openness, complexity, and becoming. She moves beyond thinking of bodies as merely biological organisms, to analyze how they are increasingly technologized. New media and digital technologies expand the capacities of bodily matter invested with capital, mapped through DNA sequencing and bioinformatics, and reduced to informational substrates under advanced capitalism. Clough's biomediated body recognizes the biopolitical deployment of racism - locating the Dakota Access Pipeline through the Standing Rock reserve, moved from its original route due to concerns about possible contamination of drinking water in the mostly white municipality of Bismarck, North Dakota - along with the ways in which bodies, work and reproduction are being reconfigured.
Clough builds upon Eugene Thacker's (2003) formulation of biomedia. Thacker defined biomedia as a relationship between the technological and the biological, through which the biological remains biological while always taking on new capacities. This is not a hybrid of body and machine, nor is it a virtual body. Instead, the body is mediated through a relationship with technology, increasing the capacities of what the body can do but in such a way that technology seems to disappear. As Thacker (2003) puts it, 'the body you get back is not the body with which you began, but you can still touch it' (p. 53). Clough expands Thacker's concept of biomedia by reading this technological expansion of bodily capacity through the lens of affect: building intensities through historically specific modes that give rise to new potentialities of becoming.
Clough draws on Deleuze's conception of the virtual, in which the virtual can be real without being actual. In the virtual, objects and states exist but are not tangible or 'concrete'; the virtual is known only indirectly, through its effects (Shields, 2006). The virtual is real insofar as it can be interacted with and is generative. For Deleuze, both the actual and the virtual are fully real - the former has concrete existence while the latter does not, but this does not make it any less real. Affect operates at the level of the virtual, the potential and the emergent; it links across senses, events and temporalities. The reality of the virtual is the reality of change: it operates in terms of openness and indeterminacy (Massumi, 1998).
The militarized response to the protests and the intensive use of technologies of surveillance (to which the Facebook check-ins were responding) represent what Massumi (2015) calls ontopower, a pre-emptive power beyond the biopolitical control of life.
Ontopower is the governance of affect, 'or its modulation when assembled with new technologies of time/memory, new media technologies, bio-and neuro-technologies as well' (Clough, 2012: 25). Ontopower, Massumi claims, is a new type of power encompassing both soft power (surveillance) and hard power (military interventions). It subsumes and transcends biopower. Ontopower refocuses on what may emerge, as that potential presents itself to feeling: it is based on an affective mode of pre-emption. As such, ontopower can help explain criminalization of anti-pipeline protests: the feeling that protests may pose a violent threat even in the absence of any evidence.
Affect challenges conceptualizations of matter as inert in contrast to active subjects; instead affect moves through and between organic and inorganic bodies, without inhering in them.The Standing Rock slogan, 'Mni Wiconi' -water is life, or more accurately, water is alive -can be understood in these affective terms.This understanding of water may be described as what Bennett (2010) calls 'vital matter': things having the capacity to not only 'impede or block the will and designs of humans but also to act as quasi agents or forces with trajectories, propensities, or tendencies of their own' (p.viii).There are risks, however, in reading Indigenous worldviews through a posthuman theoretical lens.Todd (2016) notes that posthuman theories often fail to recognize extremely long histories of Indigenous worldviews with their complex relationalities of humans and nonhumans.As Estes (2019) notes, 'Concepts such as Mni Wiconi (water is life) may be new to some, but like the nation of people the concept belongs to, Mni Wiconi predates and continues to exist in spite of white supremacist empires like the United States' (p.15).
Social Media, Bodies in the Street, and Digital Assembly
Emotions can motivate people to become involved in social movements, keep them involved, or lead them to stop participating; activists also appeal to emotions strategically in order to incite people to take action (Jasper, 1998;Gould, 2004).Affective transmission occurs both within and beyond digital realms and may be intensified or muffled in the process of digital circulation (Kuntsman, 2012: 1).Technologies do not faithfully transmit ideas, data, and feelings, which are instead mediated and constituted through their transmission.Consequently, the effects of technological transmission are not predictable.As Massumi (2002) puts it, 'what the mass media transmit is not fundamentally image-content but event-potential' (p.269).Networked affect (or networked virtuality) involves trends, feelings, and processes spreading across social media in ways that 'appear to defy rational logic and understanding' (Blackman, 2019: 11).Affect is not reducible to rationality and thus affective engagement in digital activism cannot be fully predicted or controlled; nevertheless, its power and potential are increasingly apparent.
#NoDAPL is certainly not the only recent protest to rely heavily on digital activism.From the Arab Uprising, to Occupy, to the Worldwide Women's March, to Black Lives Matter, to #MeToo, to anti-lockdown protests (Schradie, 2020), to rent strikes (Massarenti, 2020), to the GameStop Redditors' complicated challenge to capitalism, increasingly digital platforms are integral to social activism.#NoDAPL, building on the Idle No More movement for Indigenous sovereignty, drew heavily on social media use, particularly Twitter.The widespread use of digital platforms in Idle No More meant that 'an aspect of Indigeneity, as a paradigm of social and political protest, had become digitized, infrastructurally through broadband internet, personally through consumer mobile devices, socially through social media adoption, and discursively through flash mobs, hashtags, and memes' (Duarte, 2017: 5).#NoDAPL and Idle No More attained widespread visibility as mass movements through their use of digital platforms, yet they both emerged from hundreds of years of ongoing Indigenous resistance (Estes, 2019;Simpson et al., 2018).
The profile of the #NoDAPL protests increased dramatically at the national and international level as a result of checking in on Facebook.During the protests, a rumor started that the police were using Facebook to track and arrest activists involved in the protests (Kennedy, 2016).Individuals were asked to check-in to the Standing Rock Sioux Reservation on Facebook in order to confuse local authorities and provide anonymity for protestors.This rumor was denied by local police and the action did not actually protect protestors, but the show of solidarity that resulted from the campaign increased attention to the protests generally and to the fact that protestors were being attacked and arrested (Kennedy, 2016).The role of the 'check-in' function on Facebook in this campaign made the use of social media itself newsworthy (Hunt and Gruszczynski, 2019: 5).Both NPR and The Guardian reported that the number of check-ins to Standing Rock went from 140,000 to over 1.5 million almost overnight (Levin and Woolf, 2016;Hunt and Gruszczynski, 2019;Kennedy, 2016).According to Hunt and Gruszczynski (2019), 'the attention garnered through this interaction of social media and traditional forms of media could have broader impacts, such as priming attention to pipeline protests and concerns in the future' (p.12).Social media played an important role in increasing affective engagement with the protest.Since 'the largest spikes in attention' occurred following violent clashes at protests, it may be that regardless of the media form, vividness is a currency necessary for social movements to acquire attention, lacking sustained, large-scale protests that tend to capture attention over the long term (Hunt and Gruszczynski, 2019: 13;Jennings and Saunders, 2019).Alexandra Deem argues that regardless of their effectiveness, the Facebook check-ins reconfigured the structure of solidarity and presence, redefining place and reconstituting the virtual and the analog (Deem, 2018: 12).According to Deem, while 'embodied presence provided the model for the disembodied presence enacted by the check-in, it was this disembodied presence that lent a new sort of visibility to the bodies actually present at Standing Rock' (Deem, 2018: 12).Karatzogianni (2012b) calls for further theorizing the revolutionary contribution of social media in light of the affective structures and politics of emotion (p.68).She argues that the affective structures of social media and digital cultures more broadly allow the transformation of the 'digital virtual' into the 'revolutionary virtual', which can materialize revolution in the offline world (Karatzogianni, 2012b).However, digital activism does not necessarily promote progressive social change.Conservative digital activists often gain an advantage online.A widening digital activism gap is 'reproducing, and in some cases intensifying, preexisting power imbalances' due to the tremendous amount of labor required (Schradie, 2019: 7).Organizations rich in resources are better positioned to make use of digital technologies, while grassroots organizations with less time, money, personnel, and structure risk falling behind (Schradie, 2019: 7).Digital activism also tends to reinforce existing inequalities by relying on class privilege, horizontal organization, and simplified ideology (Schradie, 2019).
The 2016 US presidential election and accompanying Cambridge Analytica scandal have exposed the risks of digital advertising in political campaigns.Facebook and Google's dominance in the market for online advertising has significant repercussions for the private regulation of paid political speech, since these decisions are being made in the absence of transparency and accountability (Kreiss and Mcgregor, 2019).Facebook's algorithms are far from politically neutral or progressive in their effects, creating echo chambers by showing users similar content and thereby increasing political polarization (Thorson et al., 2021).This leads to political extremism, toxic political rhetoric, misinformation, and ideologically motivated violence (Uscinski et al., 2021).All digital platforms do not create echo chambers to the same degree, however: there are significant differences between platforms that allow users to tweak their feed algorithm (such as Reddit) and platforms that do not allow this (Facebook and Twitter) (Cinelli et al., 2021).
There are also barriers to online activism that reflect histories of colonization.A deep 'digital divide' exists between Native Nations of the United States and the rest of the country; according to the Federal Communications Commission, only 53% of people living on tribal reservations have access to broadband internet, with access particularly low in rural tribal areas (Mack et al., 2022;Wang, 2018).Although Simpson et al. (2018) recognize the incredible utility of the internet for Indigenous activism, they are concerned about the lack of reciprocal relationships between bodies and land on the internet, which may constitute a 'digital dispossession' (p.79).They also remain wary of the asymmetry of large corporations such as Google, Facebook, and Twitter, where Indigenous peoples can be content providers but are unable to structurally intervene in digital technologies, reinforcing settler colonialism (Simpson et al., 2018: 79).
Technologies of Resistance: Biomedia and Art
In contrast with the largely passive Facebook check-in campaign, Cannupa Hanska Luger's Mirror Shields Project enhanced creative capacities: online instruction for individual makers of the shields and collective, embodied resistance deploying the shields at Standing Rock.The project spans the fields of art and activism, physical and digital resistance; through affective resistance to biopower, it produced new forms of intensities that create possibilities for change.The mirror shields (16 x 48 inch reflective boards) were designed, developed, and deployed for use at Standing Rock, where they transformed notions of place and space while remaking the boundaries of bodies.Luger (2019) described the shields as 'poetic armour' (p.262), providing physical protection for Standing Rock protestors while reflecting back the violence of police and security forces.
Luger took inspiration from mirror shields used by the EuroMaidan movement in Ukraine.Protests broke out in late 2013 after then-Ukrainian president Viktor Yanukovich scrapped a pending trade agreement with the European Union in favor of closer ties to Russia (Scherker, 2014).After a brutal crackdown by Ukrainian riot police, mirror shields were used by protestors to remind police that underneath all their riot gear they were still human beings.The EuroMaidan mirror shields also inspired use of similar mirror shields in 2017 during protests against President Nicolás Maduro in Venezuela (Rawlins, 2017).
Luger's mirror shields were modified from those used in Ukraine, constructed from Masonite boards and reflective adhesive foil rather than glass to be more durable and less likely to cause injury.Luger worked with students at the Institute of American Indian Arts in Santa Fe, New Mexico, where he is an artist-in-residence, to design the shields and posted a video online providing instructions on how to build them.The shields linked the makers, across the United States, with water protectors in North Dakota.The shields were used on the frontline of the protests, protecting hundreds of people and behind them a camp of thousands.Luger (2016) hoped that the mirror shields would inspire the demonstrators to 'hold ground and not panic', promote solidarity among protestors and disrupt the violence of uniformed security and police through reflecting their shared humanity.
Creative tactics, imagery and theatricality are increasingly features of political movements and struggles (Serafini, 2018: 1), with affect a key aspect of art activist practices (Serafini, 2018: 174). As Clough (2012) describes, 'The measure of affect is an aesthetic measure, understanding aesthetic measure to be singular, non-generalizable, particular to each event, or each capture of the not-yet' (p.29). The combination of art and digital technologies in Indigenous resistance has an extensive history. De La Garza (2016) describes how digital technology 'resembles and even parallels traditional Indigenous means of producing and sharing knowledge and of experiencing time and space' (p.49). Nelson (1999) coins the term 'Maya-hacker' to describe the vital importance of information to political strategies of Indigenous resistance in Guatemala. The Zapatista movement also relied on virtual protests, drawing together theatre and activism in the digital realm (Lane and Dominguez, 2003).
Performativity provides a way of understanding the complex, iterative relationships between digital and physical protest and between art and activism (Vlavo, 2017).In her analysis of assembly, Butler (2015) describes how collective actions of various kinds performatively produce 'the people'.Bodies protesting are essentially making a performative claim to belong there, to have a right to exist and assemble, both prior to and in addition to any of their specific political demands.These modes of assembly are not limited to physical gatherings for Butler (2015), since 'not everyone can appear in a bodily form, and many of those who cannot appear, who are constrained from appearing or who operate through virtual or digital networks, are also part of "the people"' (p.8).
The mirror shields represent an opening to an affirmative biopolitics (Esposito, 2008), which involves recognizing that harming one life harms all lives. The shields protected protestors from the rubber bullets of DAPL security guards while reflecting back the securitized, uniformed forces not just as instruments of biopower but as affected and affecting humans. The mirror shields produce what Massumi (2002) terms intensity, which he equates with affect (p.27). Intensity is associated with nonlinear processes: resonance and feedback that momentarily suspend the linear progress of the narrative present from past to future.

Intensity is qualifiable as an emotional state, and that state is static - temporal and narrative noise. It is a state of suspense, potentially of disruption . . . It is not exactly passivity, because it is filled with motion, vibratory motion, resonation. And it is not yet activity, because the motion is not of the kind that can be directed (if only symbolically) toward practical ends in a world of constituted objects and aims. (Massumi, 2002: 26)

Luger's Mirror Shield Project represents perceiving the present in the past, observing current protests in a historical mirror. Unlike linear notions of time dominant in settler worldviews, Indigenous conceptions of time understand the present as structured by the past and ancestors, with alternative futures made possible through relationship to the past (Estes, 2019). Resistance to the Dakota Access Pipeline was inspired by the Lakota prophecy of Zuzeca Sapa, the Black Snake, that would stretch over the land and threaten all life, beginning with water (Estes, 2019: 14). Estes (2019) describes such prophecies as a 'revolutionary theory, a way to help us think about our relationship to the land, to other humans and other than humans, and to history and time' (p. 14).
The use of mirrors in Indigenous resistance has a long history.Carcelén-Estrada (2017) traces a genealogy in a Zapatista story which explains how to kill a lion not with a gun but with a mirror, which is used to deflect the lion's power and direct it against itself (p.104).The 2016 Water Serpent Action built on this tradition, as well as reflecting the prophecy of the Black Snake.Conceived and enacted by Luger and Rory Wakemup (Ojibwe), it involved more than 150 protestors holding the mirror shields above their heads as they marched along the snow-covered Oceti Sakowin camp near Standing Rock, creating a moving river or serpent-like formation, documented from above by drone camera.The audience for this action were police surveillance planes constantly flying overhead; it was intended to provide evidence of the water protectors' resilience and reflect back the harms of police surveillance.In walking the shape of the river, light reflected off the shields like water, the lines between human body and nonhuman nature were blurred, challenging extractive views of the natural world and asserting Indigenous sovereignty over the land (Davis, 2019: 149;Morris, 2019).
The Mirror Shield Project produced new affective and bodily capacities, transforming the relationships between bodies and landscapes and challenging biopower.Many other actions involving the mirror shields were organized by anonymous communities online, including one of over 1000 US veterans who came from across the country to guard water protectors from the police with mirror shields and other handmade protective shields.The mirror shields continue to be made anonymously and used in frontline actions around the world, including Black Lives Matter protests across the United States in 2020.
Conclusion: #NoDAPL's Alterlives
Affect can be a powerful driver of action. Technology provides both opportunities and obstacles, promoting empathy by giving us a glimpse into the lives of others but also potentially misrepresenting our capacity to share the embodied experiences of others. To what extent do social media strategies such as checking in at Standing Rock on Facebook constitute a form of solidarity? How much of this is a technological buffer between a passive audience and committed activists, an example of 'cruel optimism' (Berlant, 2011): a fantasy of effortless allyship possible through the click of a 'like' button disguising underlying apathy and perpetuation of privilege? Is there a need to recognize the limits of empathy along with the limits of a decentered, posthumanist subjectivity? Clough (2008) describes biomediated bodies as operating at the postbiological threshold. This threshold is indeterminate, however: it is 'the limit point beyond which there will have been change irreducible to causes' (p.19). Clough notes that the affective turn has already been mined by capitalism: capital accumulates in the realm of affect and racism helps realize this accumulation economically. Nevertheless, Clough (2008) sounds a note of cautious optimism: 'it is important to remember the virtual at the threshold. Beyond it, always a chance for something else, unexpected, new' (p.19). In the title of his history of Indigenous resistance, Estes (2019) claims that 'our history is the future'. Through relationship to the past it is possible to imagine new possibilities for the future. This is a time of what Murphy (2018) calls alterlife: 'the struggle to exist again, but differently when already in conflicted, damaging, and deadly conditions, a state of already having been altered, of already being in the aftermath, and yet persisting' (p. 113). Protests against the Dakota Access Pipeline can be thought of in terms of alterlives - a state of already having been altered by environmental violence that is nonetheless a capacity to persist and to become something else (Murphy, 2017). Alterlives resist colonialist biopower through imagining futurity and ways of being otherwise that are rooted in local communities and intimate relations, that build on past and current political projects of resistance (Murphy, 2018: 122). Luger's mirror shields show us both what is - the racist history of settler colonialism, dispossession of Indigenous lands, environmental destruction - while also creating new possibilities, new forms of relationality, rerouting of affect, and reconfigured boundaries between us and them, protestor and police. Time and space are transformed through relation to the long history of Indigenous resistance to dispossession from their land and damage to Mni Sosa (the Missouri River). The biomediated body is rooted in biopolitical racism, and yet it is transformed through art, community, technology, and embodied resistance in Standing Rock. New potentialities and forms of emergence have become possible.
US President Biden's cancellation of the Keystone XL project raised hopes that DAPL will meet a similar fate.The environmental review by the Army Corps is ongoing and, whatever the outcome, the losing side is likely to continue the fight in the Supreme Court.Even if #NoDAPL is not able to shut down the pipeline in the near future, in periods of abeyance social media can help retain 'the people (resources) needed to accumulate and maintain the power to carry on the movement' (Leong et al., 2019: 191).Social media use in #NoDAPL is described by Clark and Hinzo (2019) as 'digital survivance', combining aspects of survival and resistance (Vizenor, 1999).This digital survivance involves 'the digital and visual practices of Indigenous peoples and their allies as they have drawn upon and advanced Indigenous epistemologies and storytelling traditions within the contexts and constraints of social media' (Clark and Hinzo, 2019: 94).#NoDAPL digital survivance includes networks of online and embodied solidarities enabling future Indigenous-led resistance against pipeline development.The biomediated body has changed through protest, standing-with, and checking-in.New potentialities and forms of emergence have become possible.Through interconnections of fluids and circuits, water protectors and Twitter-users, affective expansions of bodily capacities give rise to solidarity and new forms of 'standing-with' between biomediated bodies. | 2023-03-03T16:11:06.129Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "15817ca20b4e292187ea548925097143c7a50e37",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/02632764221146715",
"oa_status": "CLOSED",
"pdf_src": "Sage",
"pdf_hash": "e1c2a63d4f9edaaa382e7feaf8d499e1436b727b",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |
213202073 | pes2o/s2orc | v3-fos-license | Selenium-Containing Preparation Use for Athletes’ Adaptive Abilities to Physical Loads Improvement
The aim of this research was to study how selenium-containing and other (neotone and panangin) preparations improve athletes' adaptive abilities to physical loads. 90 boys aged 17-19 took part in the research. They practised athletics and held the 1st or 2nd sports category. The athletes in the experimental group, who took selmevit, neotone and panangin for 30 days, showed considerably greater improvements in their 3000 m competitive results, hemoglobin levels and erythrocyte counts in blood. We conclude that long-term (up to 30 days) use of selmevit, neotone and panangin supports the adaptation of the athlete's organism to physical and psycho-emotional loads, normalizes the balance between the prooxidant and antioxidant parameters of the antioxidative defence system, and promotes the transition from short-term to long-term adaptation. Keywords—metabolic preparations; adaptive abilities of an organism; athletes; antioxidative system; selenium.
I. INTRODUCTION
Many spheres of human working activity involve intensive psycho-emotional and physical loads, which negatively affect health. This is especially true of sport. The constant drive to improve sports results entails heavy physical loads during training and high psycho-emotional tension during competitions. The politicization and commercialization of professional sport, among many other factors, place high demands on an athlete's organism. Athletes are often exposed to loads that exceed their adaptive abilities, which leads to disruption of adaptation under extreme physical and psycho-emotional stress and, ultimately, to the development of pathological states.
Given the tendency for athletes to begin competing at ever younger ages, the problems of managing the training and competitive process have become more urgent. Early sports specialization and intensive physical loads at a young age, together with accumulating knowledge about how genetic determinants and external factors affect athletes' health, make it necessary to monitor the state of athletes' cardiovascular system as one of the main factors providing adaptation to physical stress.
The Chuvash Republic is a region with a high risk of selenium deficiency, which negatively affects the health of its population. Where particular micro- and macroelements are deficient, it is reasonable to correct the morpho-physiological status of all age groups by supplementing the deficient components of the diet [5], especially among people subject to increased physical and psycho-emotional loads.
An analysis of the specialist literature suggested that the use of preparations permitted in sports medicine can be one of the most effective ways to restore and increase athletes' working capacity.
The aim of our research was therefore to study how taking a complex selenium-containing preparation together with other preparations (neotone and panangin) improves athletes' adaptive abilities to physical loads.
II. RESEARCH METHODOLOGY
90 boys aged 17-19 took part in the research. They practised athletics and held the 1st or 2nd sports category ("racewalking" specialization). According to their individual medical records, there were no pathological deviations in the cardiovascular, digestive or nervous systems. The respondents had no congenital diseases [2].
The respondents were divided into two groups. The control group (45 people) consisted of athletes who did not take the selenium-containing preparation or the other preparations (neotone and panangin).
The experimental group (45 people) consisted of athletes who took the selenium-containing preparation and the other preparations (neotone and panangin) every day.
Both groups trained for 30 days according to an increased, regulated loading scheme, which included the following: 20 km of running; 60 min of walking at a pace of 5 min 20 s to 5 min 50 s per km; 10 min of walking with 71 m speed-ups within 142 m segments; general physical training (GPT) exercises (shoulder dips, leg lifts at a bar, chin-ups, 60-70 degree presses); and 25 minutes of gymnastics (joint mobility development) [1,3].
In the experimental group, additional health-improving measures were carried out alongside the taking of the selenium-containing preparation, taking into account the composition of the bioadditive, which combined selenopiranum (SP), vitamin C and vitamin E and was intended to optimize adaptation by balancing the intensity of free radical oxidation (FRO) processes against the activity of the antioxidant system (AOS).
It was taken into account that the components of the bioadditive act at different stages of organic peroxide formation: tocopherols prevent lipid peroxidation by decreasing the intensity of oxidation of the SH groups of membrane proteins, and glutathione peroxidase decomposes existing lipid and hydrogen peroxides. Together with ascorbate, α-tocopherol provides for the inclusion of selenium into the active centre of glutathione peroxidase [6,7,8,9,10,11].
III. RESULTS
It is evident (Table I) that the athletes in the experimental group, who took selmevit, neotone and panangin for 30 days, improved their 3000 m time significantly more (by 18.1 s, from 905.4 to 887.3 s) than the control group (by 5.3 s, from 902.8 to 897.5 s). Table II presents the hemoglobin levels in the athletes' blood before and after the experiment.
At the beginning of the experiment these indices were as follows: control group, 146.6 g/l; experimental group, 145.7 g/l.
Over the course of the experiment these indices remained practically unchanged in the control group (147.7 g/l), whereas in the experimental group hemoglobin was higher (154.4 g/l). The differences were statistically significant at P < 0.05. Table III presents the erythrocyte concentrations in the athletes' blood before and after the experiment. A comparative analysis of these indices before the experiment did not reveal considerable differences: in the control group they were 4.96 × 10^12/l, and in the experimental group 4.87 × 10^12/l. After the experiment the athletes from the experimental group had a significant advantage in these indices: 5.26 × 10^12/l vs 4.92 × 10^12/l. As our research showed, prophylaxis of selenium deficiency reduces the strain on regulatory mechanisms and provides biologically more effective adaptation to high physical and psycho-emotional loads [5].
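The paper does not name the statistical test behind the reported P < 0.05. As a minimal sketch only, the group comparison of post-experiment hemoglobin could be reproduced with an independent-samples t-test; the per-athlete values below are placeholders, and only the group means (147.7 vs 154.4 g/l) and group sizes (45 each) come from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical individual values; only the means and group sizes follow the paper.
control = rng.normal(loc=147.7, scale=8.0, size=45)
experimental = rng.normal(loc=154.4, scale=8.0, size=45)

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 would match the reported result
```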
Several attempts have been made to create algorithms for prescribing metabolic preparations for particular variants of disadaptive cardiac changes in athletes.
We present our own variant of the choice of metabolic agents for the prophylaxis and treatment of stress cardiomyopathy (Table IV).
The list of preparations presented here could be extended. However, the preparations presented are already widely used in the treatment and prophylaxis of cardiac disorders in athletes and are an important reserve for increasing young athletes' tolerance of intensive physical loads. Our experimental work showed that complex selenium-based correction of the organism's adaptation normalizes the balance between the prooxidant parameters (malondialdehyde, oxidized glutathione, decreased lipid peroxidation activity) and the antioxidant parameters (increased concentrations of vitamins E and A, reduced glutathione, catalase, glutathione peroxidase, selenium, and AOS activity) of the antioxidative defence system, and promotes the transition from short-term to long-term adaptation. As a result, in the age-related aspect there are metabolic (increase in total protein concentration; stabilization of iodine, glucose and calcium levels), immune (activation of hemopoiesis; increase in the levels of A-, M- and G-immunoglobulins) and somatometric (increase in body weight) effects.
"year": 2019,
"sha1": "27e57e7e0281fb9d8ff5fcb94ea0526655e7fbdb",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125932383.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fd79986f50d42548380317dcd886b425e944e20b",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54499221 | pes2o/s2orc | v3-fos-license | Has the Post-communist Transformation Led to an Increase in Educational Homogamy in the Czech Republic after 1989?
This article analyses trends in educational homogamy in Czech society from 1988 to 2000. Two hypotheses are tested: (1) that educational homogamy strengthened as a result of growing economic uncertainty during the 1990s, and (2) that educational homogamy is higher among younger newlyweds than among people who get married after 30 years of age. The authors analyse vital statistics data on all new marriages for the years 1988, 1991, 1993, 1997 and 2000. Using a log-linear analysis the first hypothesis was refuted, as no change in the tendency towards homogamous marriages was observed during the 1990s in the Czech Republic. Stronger support was found for the second hypothesis, as educational homogamy is indeed much higher among younger than older couples. Finally, the article includes a discussion of some possible explanations for the absence of a trend towards homogamy that was detected. Sociologický časopis/Czech Sociological Review, 2004, Vol. 40, No. 3: 297–318 297 * The first author received primary support from the School of Social Studies, Masaryk University, Brno, (Grant CEZ: J07/98:142300002 – Children, Youth and the Family). The second author received primary support from the Grant Agency of the Academy of Sciences of the Czech Republic (Grant No. B7028202). Previous drafts of this paper were presented at the conference ‘Children, Youth and the Family’ organised by the School of Social Studies, Masaryk University, Brno, in Medlov, 16–17 October 2003 and at the seminar of the Institute of Sociology, Academy of Sciences of the Czech Republic, Prague, Czech Republic, 4 March 2004. ** Direct all correspondence to: Tomáš Katrňák, Masaryk University, Faculty of Social Studies, Department of Sociology, Gorkého 7, Brno, Czech Republic, e-mail: kartnak@fss.muni. cz; Martin Kreidl, University of California al Los Angeles, Department of Sociology, 264 Haines Hall, 375 Portola Plaza, Los Angeles, CA 90095-1551, USA, e-mail: kreidl@ucla.edu; Laura Fónadová, Masaryk University, Faculty of Economics and Administration, Lipová 41a, Brno, Czech Republic, e-mail: laura@econ.muni.cz. © Institute of Sociology, Academy of Sciences of the Czech Republic, Prague 2004
Introduction
Most people think of their choice of spouse as a decision dictated by romantic love and mutual attraction. Partner preferences, however, follow predictable empirical patterns at the societal level. Sociologists have discovered that people tend to marry partners who are similar to them with respect to family background, age, education, social class, race and religion. The preference for a partner with similar characteristics is far more common than would be expected in purely random pairing [see e.g. Ultee and Luijkx 1990; Kalmijn 1991a, 1991b; Mare 1991; Smits, Lammers and Ultee 1998a, 1998b; Raymo and Xie 2000; Schwartz and Mare 2003].
While in the first part of the 20th century sociology discovered the rule of marital homogamy and identified the factors that structure it [see e.g. Hunt 1940;Burgess and Wallin 1943;Winch 1958;Girard 1964], towards the end of the 20th century social scientists were concentrating on measuring homogamy and studying its inter-generational and spatial variability. In research studies on stratification, comparative analyses of homogamy inform comparative analyses of social mobility, and vice versa, because, like social mobility, homogamy reflects the social barriers among social strata and social groups [Ultee and Luijkx 1994;Ultee 1998a, 1998b].
Homogamy has recently become one of the most frequently researched stratification topics [Hout and DiPrete 2003]. Many authors have been inspired by the study of intergenerational occupational mobility conducted by Erikson and Goldthorpe [1992], and strive to identify trends in the development of homogamy. Some authors have reached the conclusion that patterns of assortative mating, and especially the extent of homogamy, change over time [compare Smits, Lammers and Ultee 1998a, 1998bRaymo and Xie 2000]. However, a number of studies have arrived at the opposite conclusion, and the research community has not yet produced any satisfactory generalised conclusion [Hout and DiPrete 2003].
The topic of homogamy has rarely been researched in the Czech Republic. The few existing comparative research studies of homogamy are the exception rather than the rule. Ultee and Luijkx [1994] included data from former Czechoslovakia in a comparative analysis of educational homogamy and mobility and tested the hypothesis that socialism had a positive influence on the openness of the social structure. Their paper revealed the unexpected effect of socialism on educational homogamy, showing that socialism contributed to the growth of educational homogamy, and, as a result, to the closing of the social structure. Boguszak [1990] also concluded that homogamy in the Czechoslovak and the Hungarian societies was higher than in the Netherlands. He suggested that the increase in homogamy was a behavioural response by people to the egalitarian measures adopted by the socialist regime. According to Boguszak, in these conditions of extreme economic equality, better-educated people tried to preserve status and cultural privileges, and therefore large numbers of them entered into marriages with equally (highly) educated people. Finally, after controlling for many economic, political and religious factors, Smits, Lammers and Ultee [1998a] showed that Czechoslovakia had a relatively low degree of educational homogamy in comparison to other European countries.
In this article we will build upon previous analyses and study educational homogamy in Czech society, extending it here to cover the 1990s, a period of rapid economic, political and social change. Two basic research questions are posed: (1) Did educational homogamy in Czech society increase or decrease between 1988 and 2000? (2) Is the tendency of fiancés to enter into educationally homogamous marriages contingent upon their age?
The expected development of educational homogamy in Czech society after 1989 -hypotheses
Answers to the question of how educational homogamy developed in Czech society in the 1990s can be found at the macro-structural and micro-structural levels. The macro-structural answer is offered by the modernisation theory [Treiman 1970;Smits, Lammers and Ultee 1998a], which stresses the gradual elimination of social barriers and the gradual increase in social mobility. The micro-structural answer is offered by the theory of marital exchange [Elder 1969;Becker 1981;Smits 2003], which conceptualises marriage in terms of utility and the economic security of partners, and the theory of the cultural similarity of spouses [Kerckhoff and Davis 1962;DiMaggio and Mohr 1985;Bukodi 2002], which concentrates on the importance of education in defining the cultural status of spouses and highlights its importance for the consensus of partners in a marriage.
The theory of modernisation argues that increasing geographical mobility, urbanisation, and the introduction and expansion of a unified educational system [Treiman 1970] have expanded the range of potential partners from which young people can choose their spouses. Mass communication and the homogenisation of society contribute to more widespread similarities in value orientations, lifestyles, leisure time activities, language and taste; in short, to an expanding group of people with whom one shares a "common universe of discourse" [DiMaggio and Mohr 1985] and among whom one is more likely to find a mate. Therefore, we could, ceteris paribus, expect declining educational homogamy over time.
However, modern educational expansion also increasingly structures the marriage markets. Because people in modern societies spend, on average, more time at school than ever before, the relative importance of educationally structured marriage markets is likely to grow over time. This is likely to lead to an increased tendency toward entering into educationally homogamous marriages [Blossfeld and Timm 2003; Kalmijn and Flap 2001]. Smits, Lammers and Ultee [1998a] show that educational homogamy increases only in the early phases of economic development; it then peaks, and begins falling at the start of the 'post-materialistic era', arguably in relation to the proliferation of the ideal of romantic love. They empirically demonstrate that the relationship between educational homogamy and economic development takes the form of an inverted 'U' shape. They suggest that this pattern emerges because the forces leading to reduced educational homogamy prevail at higher levels of economic development, while the early stages of development augment educational homogamy.
The theory of marital exchange is based on the economic theory of marriage and the traditional model of calculating the benefits and costs of marriage. Gary Becker [1981], a proponent of this approach, claims that the relative advantages of marriage are the result of men specialising in paid work and women specialising in unpaid work. In traditional societies, where there is a high degree of division of labour in a marriage, a heterogamous spousal status is more advantageous than homogamous. The growing economic potential of women, however, leads to both partners assessing before marriage the potential economic contribution of the spouse to the family budget, and both fiancés logically prefer a partner with higher status potential [Blossfeld and Huinink 1991; Mare 1991; Sweeney 2002], a behaviour referred to as status attainment or status seeking [Smits, Lammers and Ultee 1998a, 1998b]. Under the pattern of the individual preference of both sexes, when equilibrium is achieved on the marriage market, it tends to be educationally homogamous rather than educationally heterogamous [Kalmijn 1991a, 1998].
Similarly, the theory of cultural similarity of spouses predicts that educational homogamy will increase over time. First, it points out that partner choice may also be driven by non-economic individual preferences. People may, for instance, have a preference for a partner with similar values and attitudes; they may, consciously or unconsciously, prefer someone with a similar language, taste or lifestyle [Kalmijn 1994, 1998; DiMaggio and Mohr 1985, 1994]. Because the idea of who is an attractive partner, as well as lifestyles, behaviours, values, attitudes and taste are all stratified in society [Birkelund and Heldal 2003; DeGraaf 1991; DiMaggio and Mohr 1985; Lamb 1989; Mohr and DiMaggio 1995; Tomlinson 2003], a preference for cultural similarity between partners will also lead to educational and status homogamy. As the relationship between education and attitudes, values, lifestyles and cultural preferences tends to increase in times of rapid social change, it is also likely to have grown during the post-socialist transition and therefore contributed to an increase in educational homogamy.
Based on the theory of cultural similarity of spouses and the theory of marriage exchange we believe that the economic, social and cultural transformation of Czech society in the 1990s led to an increase in educational homogamy, because the social transformation also resulted in an increase in social and economic insecurity and greater social and economic stratification. Since 1989, the general unemployment rate [Frýdmanová et al. 1999] and the number of long-term unemployed [Mareš, Sirovátka and Vyhlídal 2003] have increased, and the correlation between unemployment and the level of education has become more pronounced [Frýdmanová et al. 1999]. Moreover, economic returns to education have dramatically increased [Večerník 1999] and the relationship between education, economic income and employment status has become more pronounced [Matějů and Kreidl 2001]. The perceived importance of education for achieving success in life has also risen [Kreidl 2000]. Based on these facts, we expect that educational homogamy has increased in Czech society since 1989 (Hypothesis 1). This trend should be most pronounced in first marriages, where the criterion of education is a more important indicator of the future socio-economic position of a spouse than is economic income [Kalmijn 1994].
At the same time, we expect that not all segments of the Czech population will record the same degree of educational homogamy. For example, a number of authors - Mare [1991] using the American population, Bukodi [2002] using the Hungarian population, Bernardi [2003] using the Italian population, and Chan and Halpin [2000] using the British population -have shown repeatedly that the likelihood of an educationally homogamous marriage decreases as the interval between graduation and entry into marriage grows. After graduation, the likelihood increases that pairs formed at school will gradually come apart. People then start looking for new partners in other environments (for example, at work) that are far more educationally heterogamous than school [see also Kalmijn 1994]. Based on these findings we do not expect the same degree of educational homogamy among marriages formed before spouses have reached the age of thirty and among marriages formed after spouses have reached the age of thirty. We believe that in Czech society in the 1990s, educational homogamy was greater among younger couples than among couples entering marriage at later ages (Hypothesis 2).
Data, absolute measures of homogamy and analytical methods
The data we used to test our hypotheses were drawn from official statistics collected and published by the Czech Statistical Office [Pohyb obyvatelstva 1989, 1992, 1995, 1998, 2001]. The data are on all new marriages concluded between men (M) and women (W) according to education (elementary, vocational training, secondary and tertiary) and age (A) of entering into marriage (up to 29 years of age, and 30 and more years of age) in selected years (Y) (1988, 1991, 1994, 1997 and 2000). As an aggregate the data take the form of a four-way table (M x W x A x Y), which we clustered according to marriage age and years into ten (A x Y = 10) two-way tables showing marriages between men and women (M x W) by education level (see Table 1). The main diagonals in these tables show educationally homogamous marriages; the figures above the main diagonals represent marriages where the woman has attained a higher educational level than the man; and the figures below the main diagonals show marriages where the man has attained a higher educational level than the woman. The greater the distance of each number in each table from the main diagonal, the greater the educational disparity between the spouses.
Source: Pohyb obyvatelstva 1988, 1991, 1995, 1997. Prague: Czech Statistical Office 1989, 1992, 1996, 1998.
The sum of all total frequencies on, above and below the main diagonals by marriage age and year is given in Table 2. It is possible to see that the percentage of educationally homogamous marriages did not change much in Czech society during the 1990s. In 1988 and in 2000, the education of the man corresponded to the education of the woman in more than one-half of all new marriages. With respect to marriage age, young people (up to the age of 29) were more educationally homogamous in the 1990s than older people (30 and older). The percentage of marriages in which the woman attained a higher educational level than the man and the percentage of marriages in which the woman attained a lower educational level than the man were reversed by age in individual years. Female hypogamy and male hypergamy occurred more frequently among younger married couples; female hypergamy and male hypogamy occurred more frequently among older married couples.
Note: Homogamous marriage means that the man's education level is the same as the woman's education level; female hypogamy and male hypergamy mean that the woman's educational level is higher than the man's educational level; female hypergamy and male hypogamy mean that the woman's educational level is lower than the man's educational level.
These figures show the absolute educational homogamy, hypergamy and hypogamy for men and women by age during the 1990s. However, they do not take into account the structural circumstances that lead men and women to enter into a certain type of marriage. For example, marriages between women with secondary education and men with vocational training are contingent upon the absolute numbers of men and women in these educational categories: there are more female secondary school graduates than male secondary school graduates, and there are more men with vocational training than women with vocational training [Pohyb obyvatelstva 1989, 1992, 1995, 1998, 2001]. Thus, female hypogamy and male hypergamy are to some degree forced. The situation is similar in other educational categories. Therefore, the absolute figures are not considered here to be relevant indicators of the relationships between the selected variables. They can only be taken as illustrative in testing our hypotheses. Although they show the percentage of individual types of marriages and their variation by time and age, they do not indicate the extent to which this variation is a result of people's intentions or the extent to which it is forced by structural circumstances.
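One way to see why the raw percentages are confounded by the marginal distributions is to compare them with the share of homogamous marriages that would arise under purely random mating. This benchmark is not computed by the authors; it is given here only as an illustrative formula:
\[
P(\text{homogamy} \mid \text{random mating}) = \sum_{e} p^{M}_{e}\, p^{W}_{e},
\]
where \(p^{M}_{e}\) and \(p^{W}_{e}\) are the shares of grooms and brides, respectively, with education level \(e\) in a given year and age group. Any excess of the observed homogamy rate over this sum reflects association beyond what the marginal distributions alone would force.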
We shall test the hypotheses using log-linear and log-multiplicative analyses, which make is possible for us to describe the relationships between variables while controlling for the different numbers of men and women at individual educational levels, i.e. any association is not influenced by marginal frequencies. The goal of this analysis is to estimate a parsimonious model as well as an accurate model, which will satisfactorily explain the structure of the data [for more on this type of analysis, see Hout 1983;Xie 1992;Clogg and Shihadeh 1994;Powers and Xie 2000;Agresti 2002].
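To make the logic of these models concrete, the following is a minimal sketch (not the authors' code, and with made-up counts) of how an independence model and a simplified quasi-independence model with a single diagonal term can be fitted to one 4 x 4 husband-by-wife education table as Poisson regressions; the residual deviance of each fit corresponds to the L² statistic reported in Tables 3 and 4.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

levels = ["elementary", "vocational", "secondary", "tertiary"]
# Illustrative counts only; the real tables come from the vital statistics data.
counts = np.array([[120,  80,  30,   5],
                   [ 90, 400, 150,  20],
                   [ 25, 160, 380,  90],
                   [  5,  30, 110, 140]])

rows = [(m, w, counts[i, j], int(i == j))
        for i, m in enumerate(levels) for j, w in enumerate(levels)]
d = pd.DataFrame(rows, columns=["man", "woman", "n", "diag"])

# Independence model: only the marginal effects of husband's and wife's education.
indep = smf.glm("n ~ C(man) + C(woman)", data=d,
                family=sm.families.Poisson()).fit()

# Quasi-independence model: one extra parameter for the homogamous (diagonal)
# cells, a simplified version of the blocked-diagonal constraint B.
quasi = smf.glm("n ~ C(man) + C(woman) + diag", data=d,
                family=sm.families.Poisson()).fit()

# The residual deviance of a Poisson GLM is the L2 statistic for that model.
print("L2, independence      :", round(indep.deviance, 1))
print("L2, quasi-independence:", round(quasi.deviance, 1))
```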
We analysed the data in two steps. In the first step, we divided Table 1 by marriage age into two three-way tables and estimated separate models for young married couples (up to 29 years of age) and for older married couples (30 and older) (Analysis I -see below). In this case, because we wanted to test Hypothesis 1 about the trends in educational homogamy of concluded marriages among young people separately from marriages among older people, marriage age became the differential criterion. There were two aims behind this step: one was to eliminate re-marriage from the testing of Hypothesis 1 (up to the age of 29, repeated marital choice occurs rarely; the frequency increases after the age of 30) because we thought it might distort the test (in the case of remarriage, a person is influenced by their first choice of spouse, and will probably be partial to different partnership criteria, especially if that person has a dependent(s) from the first marriage -a child or children). The other and more important aim was to test whether the pattern of association among young fiancés and older fiancés is the same. In the second step (Analysis II), we estimated models for all the data (all of Table 1) and again tested Hypothesis 1 about the trends in educational marital choice during the 1990s, and Hypothesis 2 about the influence that the age of fiancés at the time of their marriage had on educational homogamy in Czech society in the 1990s.
Results of the analyses
At the beginning of each of the two analyses we estimated a saturated model that accurately simulated the structure of the data. Then, for both types of analyses, we estimated a model of conditional independence, which presupposes no relationship between the variables. When it was discovered in either of the analyses that the independence model fits the data very poorly, we tested the data against models that differed from the saturated model with regard to constraints in two-way interactions -within tables -and with regard to constraints in multi-way interactions -across tables. 2 Table 3 presents the results of the models estimated for Analysis I, which tested the relationship between three variables (M -men, W -women, Y -years) for young married couples and older married couples separately. 3 Model A1 is a conditional independence model. This model was designed to eliminate the relationship between M and W when the third variable Y is controlled. The differences between the expected frequencies of the model and the measured frequencies in the tables are high, both among young married couples and among older married couples. This model does not fit the data. Model A2 differs from Model A1 heterogeneously (subscript l), i.e. for each table separately by blocked main diagonals B (see Image 1, constraint B). 4 Even when the difference between the model frequencies and the measured frequencies in both age categories decreased, the difference between them remained so large that this model could not be accepted.
Analysis I
The other four models (B1, B2, C1 and C2) are constant social fluidity models [Erikson and Goldthorpe 1992]. These models differ by heterogeneously blocked main diagonals and constraints placed on three-way interactions. Model B1 and Model B2 do not presuppose any occurrence of a three-way interaction; the relationship between M and W by Y is constant. The difference between Model B1 and Model B2 is that B1 was estimated without blocked main diagonals and B2 was estimated with heterogeneously blocked main diagonals. The two remaining models, Model C1 and C2, were also estimated both without blocked and with heterogeneously blocked main diagonals; what differentiates them from the two previous models is the uniform effect (subscript u) between two-way interactions by Y [Yamaguchi 1987]. This means that the two-way interaction between the education of M and W was the same in all the tables, and the three-way interaction between the tables was estimated as a sum of this two-way interaction and an estimated parameter β, which indicates the change in the strength of the two-way interaction by Y. With respect to both the younger and the older couples, Models B1 and C1 do not, according to conventional statistics, satisfactorily reproduce the data (L² is too high, and ∆ is too low given the d.f.). The two remaining models, Models B2 and C2, fit the data satisfactorily. These models, however, are not very parsimonious because the two-way interaction is a full interaction (without a constraint). Because Models B2 and C2 fit the data better than Models B1 and C1, we estimated other models restricting two-way interactions, with heterogeneously blocked main diagonals.
Note: Y - years; M - men; W - women; B - blocked main diagonals; D - distance; P - effect of sex on heterogamy; S - educational status effect on entering into marriage; subscript l - heterogeneous effect among tables; subscript o - homogeneous effect among tables; subscript u - uniform effect among tables; subscript x - log-multiplicative effect among tables; L² is the log-likelihood ratio chi-square statistic; d.f. refers to the degrees of freedom; BIC is the Bayesian Information Criterion (BIC = L² - (d.f.) log (N)), in which N is the total number of cases (for the age group up to 29 years N is 225 956; for the age group 30 and over N is 99 040); ∆ is the index of dissimilarity, which indicates the proportion of cases misclassified by the model.
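As a hedged illustration of the fit statistics defined in the note above (assuming the standard formulas; the observed and fitted counts would come from the estimated log-linear models), L², BIC and the dissimilarity index ∆ can be computed as follows:

```python
import numpy as np

def fit_stats(observed, fitted, df):
    """Return L2, BIC and the index of dissimilarity for a fitted model."""
    observed = np.asarray(observed, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    n = observed.sum()
    mask = observed > 0                      # 0 * log(0/fitted) is treated as 0
    l2 = 2.0 * np.sum(observed[mask] * np.log(observed[mask] / fitted[mask]))
    bic = l2 - df * np.log(n)                # BIC = L2 - d.f. * log(N)
    delta = np.abs(observed - fitted).sum() / (2.0 * n)   # share misclassified
    return l2, bic, delta
```

Passing the observed table and the expected counts from, say, Model A1 together with its degrees of freedom would reproduce the corresponding row of Table 3, up to rounding.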
We began with the model of conditional independence, Model A2, and modelled each constraint of the two-way interaction as homogenous (subscript o) and as log-multiplicative (subscript x) between the tables. The homogenous effect means that a three-way interaction does not exist. The log-multiplicative effect was constructed on the basis of a principle similar to the uniform effect. A two-way interaction between M and W was estimated as the same for all the tables, and the threeway interaction among the tables was modelled as a factor of this two-way interaction and the estimated parameter ϕ/φ (which shows changes in the strength of the two-way interaction by Y). Unlike the uniform effect, however, the log-multiplicative effect does not presuppose an arrangement of rows and columns in the table and is Image 1. Design of factors for two-way association by constraints. Man's education also more appropriate for modelling individual constraints in two-way interactions (for more on this, see Xie [1992]). Models D1 and D2 are distance models [Goodman 1984]. In this two-way interaction there are heterogeneously blocked main diagonals (B), and three parameters (D) were estimated for each set of equally distant fields above and below the diagonal (see Image 1, constraints H, D). Studies on assortative mating in the Czech population [c.f. Možný 1983, Vlachová 1996Katrňák 2001] show that there is a greater tendency towards hypergamy among women than men. Based on this conclusion we estimated parameter P (the effect of sex on heterogamy) in Models E1 and E2; the parameter for those fields just below the main diagonals was different from the parameter for those just above the main diagonals. The equation of the two-way interaction otherwise remained the same as in the preceding models (see Image 1, constraints H, D, P). In the last two models (F1 and F2) the number of parameters in two-way interactions does not change. We have only exchanged the parameter for a woman with vocational training and a man with secondary education for the parameter for a woman with secondary education and a man with vocational training, and defined this exchange as the educational-status effect (S) on entering into marriage (see Image 1, constraints H, D, P, S). In light of the fact that in the Czech population men with vocational training outnumber women with vocational training, and women with secondary education outnumber men with secondary education, the probability that men with vocational training and women with secondary education will enter into a heterogamous marriage differs from that for men with secondary education and women with vocational training (the situation is similar in the case of elementary and tertiary education). This heterogamy by sex and education is accompanied by the complementarity of income between male and female partners with different educational levels (the same income level for men with a lower educational level as for women with a higher educational level) and by a tendency among women with elementary education towards hypergamous marriages with men with vocational training and a tendency among women with secondary education towards hypergamous marriages with men with college education [Katrňák 2001]. 
Therefore, we used the same parameter for hypergamy and hypogamy among women with secondary education (in the first fields, below and above the main diagonal in each table) as for hypergamy and hypogamy among men with vocational training (in the first fields, above and below the main diagonal). Then, in the first fields above and below the main diagonal in each table we used the same parameter for women with vocational training (and their heterogamy) and for men with secondary education (and their heterogamy). Each of these six models (D1, D2, E1, E2, F1 and F2) fits the data satisfactorily; among younger married couples the difference between the modelled and measured frequencies is slightly less than among older married couples.
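The cell-level design described in the last few paragraphs (blocked diagonal cells, distance bands D, and the asymmetry term P for cells immediately below the diagonal) can be illustrated with a small sketch. This is a hypothetical reconstruction of the factors in Image 1, not the authors' code, and it omits the S term:

```python
import numpy as np

levels = 4                                     # 1 = elementary ... 4 = tertiary
diag = np.zeros((levels, levels), dtype=int)   # B/H: one code per homogamous cell
dist = np.zeros((levels, levels), dtype=int)   # D: distance bands off the diagonal
below = np.zeros((levels, levels), dtype=int)  # P: cells just below the diagonal

for i in range(levels):          # rows: man's education
    for j in range(levels):      # columns: woman's education
        if i == j:
            diag[i, j] = i + 1             # separate parameter for each diagonal cell
        else:
            dist[i, j] = abs(i - j)        # 1, 2 or 3 steps from the diagonal
            if i - j == 1:
                below[i, j] = 1            # man one level above woman (female hypergamy)

print(diag, dist, below, sep="\n\n")
```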
With the exception of the first two models (A1, A2), which tested the independence between the variables, and Models B1 and C1, which tested the constant occurrence of an interaction between M and W over time without restricting it, all of the remaining models reproduce the data satisfactorily. Figure 1 shows the estimated homogamy parameters for the individual educational levels among younger and older fiancés. We can see that educational homogamy is generally higher among younger fiancés than among older fiancés. In both age categories, homogamy is strongest among fiancés with college education and fiancés with elementary education. Homogamy falls significantly as one approaches the center of the table along the main diagonal. The full two-way interaction models (Models B1, B2, C1 and C2) do not restrict the structure of the data in the table. Therefore, they are not appropriate for answering the question of whether the pattern of association between men and women according to education is different between young fiancés and older fiancés. Among the remaining models, according to the BIC criterion, Model E1 reproduces the structure of young fiancés slightly better, and Model F1 reproduces the structure of older fiancés slightly better. Nevertheless, differences in the fit of these models relating to younger and older fiancés are not substantively significant, and therefore we consider the pattern of association for marital educational pairing among younger and older fiancés in the Czech population to be identical.
Model E1 for young married couples and Model F1 for older married couples presuppose constant strength in the relationship between the education of M and W by Y. A comparison of the size of the association parameters ϕ in Models D2, E2 and F2 leads us to the same conclusion: an association between the education of the man and the woman in choosing a husband or a wife among younger or older fiancés does not change over time. The social, political and cultural changes that have been taking place in Czech society since 1989 have not had a significant impact on the educational choice of a husband or a wife. The analysis disproved Hypothesis 1, that educational homogamy increased between 1988 and 2000 in the age categories of up to age 29 years and 30 and over.
Analysis II
In Analysis II we worked with four variables (M - men, W - women, Y - years, A - age) and again tested Hypothesis 1 (now on the total sample) and Hypothesis 2 with regard to the influence of the age of the fiancés on educational homogamy. The models estimated in this analysis are identical to the models estimated in Analysis I. First, we estimated the conditional independence model (the relationship between M and W upon entering into marriage disappears when we control for Y and A). Then we estimated the same model with heterogeneously blocked main diagonals (constraint H - see Image 1). As shown in Table 4 (Models A1 and A2), none of these models satisfactorily reproduce the data from the table. The other models (Models B1, B2, C1 and C2) presuppose a constant two-way interaction between M and W by Y and A. As in Analysis I, they differ by the heterogeneously blocked main diagonals and by the constraints of the four-way interactions (Models B1 and B2 do not presuppose an occurrence of a four-way interaction; Models C1 and C2 model it as uniform). The fit of these models (with the exception of the models with blocked main diagonals, B2 and C2) is not satisfactory. Therefore, as in Analysis I, we estimated other models with heterogeneously blocked main diagonals. Model D1 is a homogeneous distance model that presupposes an unchanging structure for the two-way interaction by Y and A (constraints H, D - see Image 1). Model D2 differs from D1 by the three-way log-multiplicative effect by Y, and Model D3 in the four-way log-multiplicative effect by Y and A. Model E1 is based on Model D1, with the addition of the effect of sex (P) on heterogamy (constraints H, D, P - see Image 1). Models E2 and E3 differ from E1 by the three-way (by Y) and four-way (by Y and A) log-multiplicative effect. The last three models (Models F1, F2 and F3) are based on Model E1; they add the educational-status effect (S) on entering into marriage to the two-way interaction (constraints H, D, P, S - see Image 1). As in the previous models, they differ by constraints in multi-way interactions (F1 does not presuppose any occurrence of a multi-way interaction, F2 models the interaction as log-multiplicative by Y, and F3 as log-multiplicative by Y and A).
Note to Table 4: Y - years; A - age; M - men; W - women; B - blocked main diagonals; D - distance; P - effect of sex on heterogamy; S - educational status effect on concluding marriage; subscript l - heterogeneous effect among tables; subscript o - homogeneous effect among tables; subscript u - uniform effect among tables; subscript x - log-multiplicative effect among tables; L² is the log-likelihood ratio chi-square statistic; d.f. refers to the degrees of freedom; BIC is the Bayesian Information Criterion (BIC = L² - (d.f.) log(N)), in which N is the total number of cases (326 996); ∆ is the index of dissimilarity, which indicates the proportion of cases misclassified by the model.
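To make the BIC comparison used below concrete, a worked example with invented numbers (not taken from Table 4) may help. Taking the logarithm in the note as the natural logarithm, as is usual for this statistic,

\mathrm{BIC} \;=\; L^{2} \;-\; \mathrm{d.f.} \times \ln N, \qquad \ln(326\,996) \approx 12.7,

so a hypothetical model with L² = 250 on 30 degrees of freedom would have BIC ≈ 250 - 30 × 12.7 ≈ -131; the more negative the BIC, the more strongly the model is preferred over the saturated model.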
According to the BIC criterion and the conventional statistics (L² and ∆ given the d.f.), and with respect to parsimony and accuracy, Model F3 fits the data the most satisfactorily. The estimated parameters of association ϕ between M and W by Y and A in this model are presented in Table 5 (to help illustrate, also provided are the parameters of association ϕ between M and W only by Y in Model F2). Although the values of parameter ϕ are more volatile in specific years in the age category 30-and-over than in the younger age group, we cannot speak of a tendency or even a trend. The measure of educational homogamy among men and women remained constant among young and older Czech fiancés in the course of the 1990s. Analysis of the data contained in Table 1 does not demonstrate the validity of Hypothesis 1 relating to an increase in educational homogamy. Nevertheless, the association parameters ϕ are different in specific years between young and older married couples (on average by 18%). In Czech society, people over 30 who enter into marriage are 18% less likely than people under 30 to marry a partner with the same educational level. We were not able to disprove Hypothesis 2 relating to the occurrence of less educational homogamy among people entering into marriage at a later age.
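The "18%" statements above can be read through the standard log-multiplicative layer-effect specification [Xie 1992]; the normalisation shown here is an illustrative assumption, and k is simplified to index only the age group, whereas in the reported models ϕ also varies by year:

\log F_{ijk} \;=\; \lambda + \lambda^{M}_{i} + \lambda^{W}_{j} + \lambda^{A}_{k} + \ldots + \varphi_{k}\,\psi_{ij}

If ϕ is fixed at 1 for fiancés under 30, an estimate of roughly ϕ ≈ 0.82 for fiancés aged 30 and over scales every log-odds ratio of the association between the man's and the woman's education to about 82% of its size in the younger group; this is the sense in which the association, and with it educational homogamy, is about 18% weaker among those marrying after 30.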
Conclusion and discussion
Our goal in this paper was to describe the development of educational homogamy in Czech society in the 1990s. We analysed all marriages between young (up to the marriage age of 29) and older (marriage age 30 and over) people in selected years (1988, 1991, 1994, 1997, and 2000) and tested two hypotheses. According to the first hypothesis, educational homogamy (especially among young fiancés) should have increased after 1989. According to the second hypothesis, the educational homogamy of marriages where the age of the fiancés is under 30 should be higher than those marriages where fiancés were older than 30 years of age. The analysis did not confirm our first hypothesis. The relative educational homogamy remained constant between 1988 and 2000, both among young and older fiancés. As for the second hypothesis, our analysis did not disprove it. Among people who entered into marriage before the age of 30, the relative educational homogamy is 18% higher than among people who entered marriage after the age of 30. This difference remained practically the same between 1988 and 2000.
The finding that educational homogamy in Czech society in the 1990s did not change is very surprising. It means that during the transformation from socialism to capitalism, social barriers between people did not increase. This conclusion is at odds with the findings of the most recent mobility research studies [Gerber and Hout 2002;Pollak and Müller 2002], which have dealt with the transformation of the social structure in the 1990s in the post-communist countries. Gerber and Hout [2002] examined inter-generational social mobility in Russian society between 1988 and 2000 and showed that the social structure of Russian society was closing. Pollak and Müller [2002] reached the same conclusion when they compared inter-generational mobility in West and East Germany. The social structure in East Germany was indeed more open than the social structure of West Germany, but in both cases the social structure is gradually closing and social fluidity decreasing.
We believe that there are three possible explanations for the disparity between our findings about the constant measure of educational homogamy in the Czech Republic and the examples of the closing social structure in Russia and in East Germany.
The most likely explanation seems to be the decline in the marriage rate in Czech society after 1989. When we take another look at the total number of marriages in the selected years (Table 1), we see that while in 1988, 59 792 people in the 29-and-under age group entered into marriage, in 2000 only 33 914 people did. In the 1990s there was an increase in the number of people in the 29-and-under age group who remained single. With respect to education, these young singles are to be found especially among people with tertiary education [see Katrňák 2004] or, to put it differently, among people, who only rarely choose a spouse with a different educational level; homogamy among their group far exceeds rates of occurrence in the other educational groups (see Figure 1). We believe that our findings about the constant trends in educational homogamy after 1989 in Czech society may be due to this decrease of young female and male college graduates in the marriage market. In such a case, the social structure in the Czech Republic during the 1990s would experience changes similar to those which Gerber and Hout [2002] identified in Russian society, and Pollak and Müller [2002] in the former East Germany. The measurement of educational homogamy in Czech society does not reflect these changes because young people with college education (who are otherwise very educationally homogamous) disappeared from the marriage market after 1989. However, this explanation for the different conclusions about the development of the social structure in post-communist countries is complicated by the constant educational homogamy in the older age group of fiancés (marriage age 30 and over). In this case, the number of marriages did not fall between 1988 and 2000 (see Table 1) and therefore homogamy should have risen; nevertheless, this did not occur.
Another possible explanation is that developments in Russian and German societies and developments in Czech society are diverging. However, we do not consider this conclusion to be acceptable. First, when compared to the countries of Western Europe, the social structures of the former socialist countries are still considered to be more similar than different [see Domański 2000]. Second, the economic, political, and cultural changes that all the post-socialist countries experienced during the 1990s are the same in content, intensity, and direction, and therefore their consequences are also similar.
The last possible explanation could lie in the different focal points of the research studies: our study examines educational homogamy, while the other two research studies in Russia and Germany analysed social mobility. The question is whether the relationship between an increase in educational homogamy and a decrease in social mobility [Ultee and Luijkx 1994;Ultee 1998a, 1998b] also applies to the transforming countries of the former Soviet bloc, and whether the social mobility research study and the educational homogamy research study indicate one and the same thing -the opening (or closing) of the social structure.
Unlike the falsification of the first hypothesis, the verification of the second hypothesis (about the lower educational homogamy of marriages between partners at a later age) is not surprising. Mare's conclusion [1991] that the fall in educational homogamy is directly proportional to the increase in the interval between graduation and the age at marriage cannot however be accepted in this case. The average age upon entering into the first marriage in Czech society increased between 1990 and 2000 (among men from 24 to 28.8 years of age, and among women from 21.4 to 26.9 years of age [Populační vývoj 2001]); the interval between graduation (and entry into the labour market) and entry into marriage continued to increase during the 1990s. Nevertheless, because the difference between the educational homogamy among young and older fiancés remains practically the same over time (with the exception of minor annual fluctuations), we do not believe that this difference is a result of the increasing age of partners upon entering into marriage. The difference in the educational homogamy by age tends to be influenced rather by the different expectations concerning a partner's qualities according to age [Kalmijn 1994]. Young people tend to define these qualities in terms of cultural capital (expectations about the future economic capital of a partner). Moreover, they choose from a wider circle of potential brides and grooms, and therefore it is possible to find a greater occurrence of educational homogamy among them. Conversely, older people can rely on economic capital as the main criterion in spouse selection, as the foundation of the employment career of a partner already exists; they also choose from a narrower circle of potential spouses, and thus the occurrence of educational homogamy is lower. These differences in the conditions that young and older people face over the choice of their spouse are not likely to have changed much during the transformation from socialism to capitalism. In the Czech Republic, the difference between the educational homogamy of partners who married earlier and partners who married at a later age remained practically the same throughout the 1990s.
LAURA FÓNADOVÁ is a research fellow at the Faculty of Economics and Administration at
Masaryk University in Brno. Her main research interest is social and ethnic inequalities, the Roma population in Czech society and social demography. | 2018-12-03T21:34:06.903Z | 2004-06-01T00:00:00.000 | {
"year": 2004,
"sha1": "046c9dfc3280569c09f15e7bce8a0a08eedb88cf",
"oa_license": "CCBYNC",
"oa_url": "http://sreview.soc.cas.cz/doi/10.13060/00380288.2004.40.3.04.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "07f522065484e4173224ef941a3c5bac48dfa17f",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
255976547 | pes2o/s2orc | v3-fos-license | First evidence of vertical Hepatozoon canis transmission in dogs in Europe
Hepatozoon canis is a protozoal agent that is known to be transmitted by oral uptake of H. canis-infected Rhipicephalus sanguineus sensu lato ticks in dogs. Vertical transmission of H. canis has only been described once in a study evaluating dogs from Japan. The aim of this study was to investigate the parasitological status of puppies from a bitch that had tested positive for Hepatozoon spp. prior to giving birth. A 4-year-old, female, pregnant dog imported from Italy (Sardinia) to Germany showed clinical signs of lethargy and tachypnoea and tested positive for H. canis by PCR. The dog gave birth to eight puppies, one of which was stillborn and another that had to be reanimated. Haematology, buffy coat analysis and a biochemistry profile were performed for each dog. EDTA-blood of the surviving seven puppies and bone marrow, liver, spleen, amniotic fluid, and umbilical cord of the stillborn puppy was tested for the presence of Hepatozoon spp. by PCR. The mother and the seven surviving puppies tested positive for H. canis by PCR at day 62 post-partum. Gamonts were detected in all dogs by buffy coat evaluation. Haematological and biochemistry results revealed mild abnormalities. In the stillborn puppy, spleen, umbilical cord, and amniotic fluid were positive for H. canis. The results confirm that vertical transmission is a possible route of H. canis infection in dogs, demonstrated by molecular detection of the pathogen in the stillborn puppy. In the seven surviving puppies, vertical transmission was the most likely transmission route. A potential impact of the level of parasitaemia on the health of puppies, as well as its pathogenesis, should be investigated further.
was experimentally shown to be a suitable vector for H. canis in South America [12], but none of these tick species is endemic in Germany. Ixodes ricinus is widespread in Europe and H. canis DNA was detected in one I. ricinus tick collected from the environment in Italy [13], but additional studies suggested that this tick species does not act as a vector for H. canis [14]. Transstadial transmission of H. canis from R. sanguineus s.l. larvae to nymphs has been described [15].
The life-cycle for all Hepatozoon spp. includes gamogony and sporogony in haematophagous invertebrate definitive hosts and merogony and gametogony in vertebrate hosts [1]. After ingestion of a tick harbouring oocysts that contain sporozoites, the infective sporozoites are released in the gastrointestinal tract of the vertebrate host and reach blood and lymph circulation by penetrating the gut wall. Merogony starts in lymphoid tissues such as the bone marrow and, from 13 days post-infection onwards, merozoites penetrate neutrophilic granulocytes and monocytes to develop into gamonts [16]. During the tick's blood meal on an infected host, the gamonts are ingested and gametogenesis takes place in the gut of the tick, followed by sporogony in the haemocoel [1]. Besides vectorial transmission, additional transmission pathways of Hepatozoon spp. have been described and include predation of infected animals, although this has not been described for H. canis [1,17]. Intrauterine transmission of H. canis was previously demonstrated in a study from Japan, in which H. canis gamonts were observed in peripheral blood smears in 23 out of 29 puppies (79%) from a total of six deliveries at 16 to 60 days after birth [18].
The diagnosis of hepatozoonosis is most frequently based on PCR results [20,21], as PCRs have higher sensitivity and specificity compared to other diagnostic tools such as microscopic evaluation [22]. Gamonts often are incidental findings when analysing blood smears.
Additionally, histopathology may reveal meronts and/or monozoic cysts in different tissues [23]. Serological tests, such as the immunofluorescence antibody test (IFAT), detect antibodies against H. canis with high sensitivity mainly in dogs with chronic infections [24,25], but are not used routinely.
To the best of the authors' knowledge, vertical transmission of H. canis has not been reported from dogs in Europe until now. We therefore performed a follow-up on the history of a female pregnant bitch imported from Italy to Germany which previously tested positive for H. canis.
A 4-year-old, female mixed breed dog was presented to a veterinary practice in Ganderkesee, Germany, with lethargy and tachypnoea in the absence of fever (Fig. 1). The first haematological examination (day 0; Vet abc Plus+, scil VET) of the mother was performed at the veterinary practice (Table 1) and showed mild anaemia and leucocytosis. All further analyses, including haematology (ADVIA 2120i, Siemens Healthineers) and a biochemistry profile (cobas 2 c 701, Roche Deutschland Holding GmbH) with kidney parameters, liver enzymes and electrolytes, as well as a buffy-coat analysis with quantification of H. canis gamonts, were carried out at Laboklin (Table 1). The Rickettsia conorii IFAT of the mother was repeated at day 62 and was still positive with a titre of 1:256. The dog also tested positive for H. canis by PCR on day 62 (Ct 30.8) and day 112 (Ct 31.8). On day 62, the anaemia had gone, but a mild leucocytosis with mild neutrophilia, lymphocytosis and eosinophilia was present (Table 1). Gamonts were detected in neutrophilic granulocytes on day 0 (4%), day 62 (2%) and day 112 (8%), indicating a moderate H. canis concentration in the peripheral blood (Table 1). On day 112, mild lymphocytosis and mild eosinophilia were seen. Biochemistry results were unremarkable apart from a mild initial decrease in iron on day 0 and day 112 as well as a mild hyperkalaemia on day 112 (Additional file 1). The dog was treated twice with imidocarb dipropionate (Carbesia® ad us. vet., 0.5 ml/10 kg of body weight subcutaneously) on day 85 and day 99. On day 112, the H. canis PCR was still positive (Ct 31.8).
Fig. 1 Study design and timeline of a mother dog imported from Sardinia (Italy) to Germany and giving birth to eight puppies, all infected with Hepatozoon canis
The mother gave birth to eight puppies on day 15 after her first presentation in the veterinary clinic. One of the puppies was stillborn (Fig. 1). A post-mortem examination of the stillborn puppy including histopathology showed that the animal was in a state of advanced autolysis and putrefaction, but the umbilical cord was normal, and no gross malformations were observed. The lungs were not ventilated, and the placenta was not available. As far as recognizable by routine histopathology, the lungs and kidneys showed immature morphology. There was no indication of inflammation in any of the examined organs (lungs, heart, liver, spleen, kidneys, brain). Due to the poor state of preservation, no pathogen-specific abnormalities were recognizable. DNA isolated from the umbilical cord (Ct 31.3), spleen (Ct 35.7) and amniotic fluid (Ct 32.3) was positive for H. canis by PCR, whereas the PCRs on the bone marrow and liver were negative.
Six out of the seven surviving puppies were alive at birth, but one dog stopped breathing immediately after birth and had to be reanimated by the owner. This animal (puppy VII) later became ill with fever (40.0-41.0 °C rectal temperature), inappetence and lethargy from day 95 onwards. The puppy presented in a lateral position and was unable to stand or walk on day 97 and received intensive care treatment with application of imidocarb dipropionate (Carbesia® ad us. vet., 0.5 ml/10 kg of body weight subcutaneously). The treatment was successful, and the clinical signs disappeared within 3 days after the first injection. The other four puppies still in the owner's care were also treated with imidocarb dipropionate as a precaution, twice with a 12-14-day interval (Fig. 1), and have not developed clinical signs since. One puppy was adopted by new owners and was therefore lost to further analysis.
In all seven puppies that were alive at the time of writing, haematology was unremarkable, aside from a mild monocytosis in one puppy (Additional file 1). Gamonts of H. canis were detected in neutrophilic granulocytes of all puppies, with a range from 1 to 7% (median 1.5%), indicating a moderate concentration of H. canis in the peripheral blood. The puppy that had to be reanimated had the highest concentration of gamonts with 7% at day 62 and one of the lowest Ct in PCR testing (31.8; median 31.8, range 31.1-34.8). Biochemistry results revealed mild hyperproteinaemia, a mild increase in albumin and mild hyponatraemia in all puppies. In three out of seven puppies, a mild elevation of urea was seen as well as mild azotaemia in one puppy (Additional file 1: Table S3). At day 125, biochemistry results were available from five out of seven puppies. All five puppies showed a mild increase in creatine kinase and four out of five mild hyperkalaemia. In one puppy, mild elevation in C-reactive protein was recognized as well as decreased urea in another puppy (Additional file 1). The owner monitored the weight of all seven puppies from the day of birth to day 36 (Fig. 2).
A ~664 base pair fragment of the 18S rRNA gene from H. canis was amplified by PCR from samples collected from the bitch, all puppies, and the umbilical cord and amniotic fluid of the stillborn puppy [26]. The PCR products were subsequently sequenced (LGC Genomics, Berlin) and found to be identical to each other. A BLAST analysis of the sequence (GenBank Accession Number ON740944) showed 100% identity to previous H. canis entries from Europe, the Americas, and Asia, such as KX712129, MN393911 and MT107098.
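For readers who want to repeat the sequence comparison described above, a minimal Biopython sketch is given below; it assumes Biopython is installed and internet access to NCBI is available, and the e-mail address is a placeholder required by Entrez rather than a value from this study.

# Fetch the deposited H. canis 18S rRNA fragment (GenBank ON740944) and BLAST
# it against the NCBI nucleotide collection. Network access is required and
# the remote BLAST step can take several minutes.
from Bio import Entrez, SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

Entrez.email = "your.name@example.org"  # placeholder address required by NCBI

handle = Entrez.efetch(db="nucleotide", id="ON740944", rettype="fasta", retmode="text")
record = SeqIO.read(handle, "fasta")
handle.close()

result_handle = NCBIWWW.qblast("blastn", "nt", record.seq)
blast_record = NCBIXML.read(result_handle)

# Print percent identity for the five best-scoring hits.
for alignment in blast_record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.accession}\t{identity:.1f}% identity\t{alignment.title[:60]}")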
To the best of the authors' knowledge, this is the first study demonstrating the vertical transmission of H. canis in a dog in Europe. The diagnosis was made by PCR and detection of gamonts in peripheral blood smears. As the mother dog was imported only shortly before initial screening took place and R. sanguineus s.l. ticks are not considered to be endemic in Germany, an infection with H. canis of the mother in the country of origin, in this case Sardinia, seems most likely. Due to the absence of the vector and the early detection of the pathogen by PCR after 8 weeks of age (Fig. 1), the infection of the puppies born in Germany most likely occurred vertically. This is supported by the fact that the pathogen was also detected by PCR in the umbilical cord and the spleen of the stillborn puppy, where vector contact would not have been possible. However, an experimental study previously demonstrated that gamonts could be detected in canine blood as early as 28 days post-infection [16]. As the seven surviving puppies in our study were tested for Hepatozoon spp. at an age of 62 days for the first time, it cannot be fully excluded that they were infected with H. canis after birth. This does seem unlikely, as all puppies were predominantly kept indoors and tick attachment was not observed by the owner.
Screening for co-infections is highly recommended in dogs infected with H. canis as clinical signs are mainly observed in animals with co-infections. In our study, we performed a so-called canine travel profile in the bitch. Positive titres in IFAT for Rickettsia spp. of 1:512 initially and 1:256 on day 61 were found. As there was no fourfold change in this titre during this period, the results were interpreted as being caused by a past pathogen contact, but a chronic or persistent infection could not be ruled out completely.
Although H. canis infections are usually subclinical, some case reports suggest that H. canis infections may cause systemic disease in canine puppies, with lethargy, fever, anorexia, weight loss and gastrointestinal signs being reported as the most prominent clinical signs [1,27,28]. In our study, immunosuppression due to pregnancy might have been responsible for the reported lethargy and tachypnoea in the mother dog.
The presence and severity of clinical signs are known to correlate with the degree of parasitaemia [22,29,30]. The puppy with the highest concentration of H. canis gamonts (7%) had to be reanimated, and another puppy with an unknown concentration of gamonts in the peripheral blood was stillborn. Additionally, there might be a correlation between percentage of weight gain and Ct values of the H. canis PCR, with lowered percentages in dogs with higher concentrations of the pathogen (Fig. 2). However, Ct values are not necessarily proportionate to the level of parasitaemia. A quantitative PCR could not be performed, but H. canis gamonts were quantified in the buffy coat. To the best of the authors' knowledge, it is so far unknown whether the parasitaemia level in puppies may be linked to stillbirth. Further experimental studies may be of interest to clarify this hypothesis.
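As a rough orientation on why Ct values track parasite load only approximately, the idealised relation between Ct and starting template amount can be written as follows (assuming a perfectly efficient PCR that doubles the target every cycle, which real assays only approximate):

N_{1}/N_{2} \;\approx\; 2^{\,Ct_{2} - Ct_{1}}

Under that idealised assumption, the extremes reported for the puppies (Ct 31.1 and 34.8) would correspond to roughly a 2^3.7 ≈ 13-fold difference in starting target DNA; departures from perfect efficiency, inhibitors and sampling variation are among the reasons why Ct values are not strictly proportional to the level of parasitaemia.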
Mild normochromic anaemia is thought to be the most common clinical abnormality in dogs with H. canis infections [1,31,32], which was present in the mother dog on day 0 ( Table 1). The mild hyperkalaemia on day 112 is most likely linked to mild haemolysis. Lymphocytosis, monocytosis and eosinophilia were recognized in most of the dogs in our study (Tables 1, 2). These haematological findings are in accordance with another study evaluating haematology results in dogs younger than 18 months being infected with H. canis [19]. In this study, haematological abnormalities were present in 26 out of 28 dogs (93%), mainly eosinophilia (77%), leucocytosis (46%), lymphocytosis (31%), neutrophilia (23%), monocytosis (19%), thrombocytopenia (19%) and anaemia (4%) [19]. Because 13 out of the 26 dogs (50%) with available information regarding clinical signs and haematological results tested positive for other vector-borne pathogens, only a limited comparison of the mentioned study and our findings is possible. Higher leucocytic count was linked to a higher level of parasitaemia [1], but this was not seen in our study.
The mild elevation of creatine kinase observed in the five puppies with complete follow-up on day 125 must be interpreted with caution. To the best of the authors' knowledge, there are no published age-related reference intervals fitting with the age of the puppies. Therefore, an interpretation as age-related changes must be taken into consideration. Additionally, the formation of cysts in the muscular tissue of the puppies due to the H. canis infection [33], with subsequent elevation of enzyme activity in the blood, can be discussed, although no lameness or muscular pain was reported. Interestingly, elevated C-reactive protein was reported on day 125, 2 weeks after the second shot of imidocarb dipropionate, in the puppy with severe clinical signs, though not in any of the other puppies.
The therapeutic approach for canine H. canis infections is challenging, as no drug is officially labelled for treatment of this infection for dogs in Europe. Therefore, treatment options were discussed with the owner and the local veterinary authorities were asked for permission to apply imidocarb dipropionate. Previous reports indicated that treatment with imidocarb dipropionate did not sterilize H. canis infections at the standard recommended dose [34]. This was also demonstrated in our study, in which positive PCR results were still observed in the mother and four of the five treated puppies. However, in most of the dogs in our study, the Ct values of PCR tests revealed a lower parasitaemia after treatment. Clinical signs improved quickly in the diseased puppy and the mother was without clinical signs after treatment too. This is concordant with literature as the prognosis of dogs infected with H. canis is reportedly good in cases of low parasitaemia, although the decrease in the parasitaemia may be slow and may require several repeated treatments with imidocarb dipropionate [1].
Besides vector-based transmission, vertical transmission of H. canis from mother dogs to their puppies may present an important route of transmission. Dog breeders and veterinarians should be aware of this potential risk. As dogs usually do not show clinical signs or clinicopathological abnormalities upon H. canis infections, routine screening of dogs imported from endemic countries is important to identify infected animals. It is recommended to perform PCR-testing of both peripheral whole blood and buffy coat to increase the sensitivity. A possible link between stillbirth and H. canis infections has to be investigated further, as well as the routes of transmission from bitches to puppies. Our data suggest an impact of the umbilical cord (transmission via blood) while the impact of the amniotic fluid is in doubt.
Additionally, further studies are required for evaluation of alternative treatment options in dogs infected with H. canis. | 2023-01-19T22:26:01.624Z | 2022-08-23T00:00:00.000 | {
"year": 2022,
"sha1": "bd58e05c4a9ce88d49d32c69bf846d130c1b9b39",
"oa_license": "CCBY",
"oa_url": "https://parasitesandvectors.biomedcentral.com/counter/pdf/10.1186/s13071-022-05392-7",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "bd58e05c4a9ce88d49d32c69bf846d130c1b9b39",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
210700945 | pes2o/s2orc | v3-fos-license | Identification of Korean cancer survivors’ unmet needs and desired psychosocial assistance: A focus group study
Purpose This qualitative study identifies difficulties and unmet needs in psychosocial aspects that Korean cancer survivors reported. Materials and Methods We enrolled 18 cancer survivors who agreed to participate in the focus groups. Each focus group consisted of four to six cancer survivors, considering homogeneity of sex and age. Participants were asked to freely describe the practical difficulties they faced and their unmet needs when living as cancer survivors. A cross-case interview analysis was used to identify major themes. Consensual qualitative research analysis was applied to complement the objectivity of results obtained from participants' interviews. Results We identified three major themes: 1) shifts in what cancer connotes, 2) development of government policies regarding integrative management for cancer survivors, and 3) preparing for cancer survivors' future through vocational rehabilitation or career development. Conclusion Korean cancer survivors had difficulties in psychosocial adjustment even after the completion of anti-cancer treatments. We identified several unmet needs among participants who were living as cancer survivors. This qualitative study may expand the view of cancer survivorship in Korea by incorporating their voices and experiences to facilitate the development of a more holistic cancer survivorship program.
Results
We identified three major themes: 1) shifts in what cancer connotes, 2) development of government policies regarding integrative management for cancer survivors, and 3) preparing for cancer survivors' future through vocational rehabilitation or career development.
Conclusion
Korean cancer survivors had difficulties in psychosocial adjustment even after the completion of anti-cancer treatments. We identified several unmet needs among participants who were living as cancer survivors. This qualitative study may expand the view of cancer survivorship in Korea by incorporating their voices and experiences to facilitate the development of a more holistic cancer survivorship program.
Introduction
The number of cancer survivors is significantly increasing owing to early diagnosis and advances in treatment techniques in Korea. In 2011-2015, the 5-year relative survival rate for all cancers in Korea was 70.7%, and there were approximately 1.6 million cancer survivors as of 2015 [1]. This increase has led health care planners and policymakers to facilitate integrative care for cancer survivors. Cancer survivors experience decreased health-related quality of life (HR-QOL) after the completion of aggressive anti-cancer treatments, including limited physical activity and late sequelae due to anti-cancer treatment as well as loss of self-esteem, depression, fear of recurrence, financial burden, negatively affected work performance and relationship difficulties [2,3]. As the number of long-term cancer survivors increases, HR-QOL emerges as one of the major concerns along with survival. Cancer patients were found to have relatively lower HR-QOL than other patients with chronic illnesses [4].
However, existing survivorship care is primarily weighted toward follow-up examination and prevention of recurrence and has shown little interest in HR-QOL, particularly with respect to psychosocial adjustment. Therefore, novel cancer survivorship plans with continuous physical and psychosocial health care are needed for improving HR-QOL of cancer survivors [5].
Better HR-QOL can be achieved if cancer survivors learn how to navigate psychosocial adjustment, that is, the capacity for self-control needed to cope with the various stresses arising from altered physical appearance and changed circumstances after anti-cancer treatments. To support the psychosocial adjustment of cancer survivors, it is important to resolve their unmet needs.
There have been several quantitative studies on the psychosocial difficulties and unmet needs of Korean cancer survivors [6,7]. However, quantitative studies have limitations in identifying unmet needs, as respondents' voices and experiences are not covered intimately. This qualitative study aimed to explore the difficulties in daily lives experienced by Korean cancer survivors and to identify their unmet needs through direct contact with them.
Material and methods
An Institutional Review Board of Ajou University School of Medicine approved this study before participants' enrollment. Participants were recruited from outpatient clinics and integrative care centers (Gyeonggi Regional Cancer Center) where educational and supportive programs were held for cancer patients, regardless of their place of treatment.
Inclusion criteria
Cancer survivors were eligible for enrollment in the study if 1) they had completed treatment with a curative aim, 2) they had no evidence of disease at the time of recruitment, 3) they were able to communicate in Korean, and 4) they were able to comprehend the informed consent. Cancer survivors who were on hormonal or targeted therapy were also permitted to participate. There were no restrictions on inclusion based on the primary sites of cancer. A total of 18 cancer patients agreed with the consent form with full understanding of the interview process and the need for recording. Their characteristics are summarized in Table 1. The median interval from the first diagnosis to participation in the study was 3 years (range, 1-11 years) and 4 participants had survived for at least 5 years. The interview questions were developed in cooperation with an experienced focus groups moderator and an external consultant with a specialty in oncology. The questionnaires were constructed as described by Krueger and Casey [8]: 1) opening questions, 2) transition questions, 3) key questions, and 4) ending questions (Table 2).
Before beginning the focus groups, each participant signed an informed consent and was requested to complete the socio-demographic survey (details of age, gender, marital status, primary household earner, and main stress factor). Given the homogeneity of age and sex, each focus group consisted of 4 to 6 participants in order to ensure that conversation could proceed in a free atmosphere. The interviews were conducted four times at the Gyeonggi Regional Cancer Center between March and August 2017, and each interview took approximately 90 minutes. Each focus group session was led by a skillful moderator. The moderator provided detailed clues suitable for the purpose of the study and modified the sequence of questions for the discourse plan. After the interviews were completed, the facilitator summarized the major discussion points to check for additional items or omissions. An assistant (a doctoral student in counseling psychology) also attended the focus groups to observe the group and record non-verbal responses.
To foster an effective discussion focused on the study purpose, a psychologist suggested cues. At the end, the psychologist summarized the issues that were discussed and asked participants to add any remarks or questions. The recording was audiotaped and transcribed verbatim immediately after the completion of each interview by a professional transcriber.
Analysis process
We conducted this study following the 4-step guidance for a systematic analysis process presented by Krueger [9]. In this study, we used cross-case interview analysis to analyze the contents of the focus groups. Consensual qualitative research was undertaken to maintain objectivity [10]. Consensual qualitative research is conducted in 3 general steps: 1) Responses to open-ended questions from questionnaires or interviews for each individual case are divided into domains (or topic areas); 2) core ideas (or abstracts or brief summaries) are constructed for all the material within each domain for each case; and 3) cross-analysis, which involves developing categories to describe consistencies in the core ideas within domains across cases. The consensual analysis team consisted of 3 members (one main researcher, one postdoctoral graduate in psychology, and one PhD in counseling psychology). There was one external auditor who held a PhD in counseling psychology and had a wealth of experience in qualitative research. Each member of the consensual qualitative analysis team independently read the full transcriptions and coded them. When the meanings were unclear, we checked the field notes and recordings to identify the exact contents. The consensual qualitative analysis team was supervised by the external auditor to check whether they had missed any important data, whether there was any bias in the interpretations, whether the original data were categorized correctly, and whether the theme was concise and reflected the original data.
Results
Qualitative analysis identified 3 major themes. The results are summarized in Table 3.
Theme 1: Shifts in what cancer connotes
Category 1. Improving cancer survivors' perceptions of cancer. Participants confessed that they had been obsessed with a wrong perception of cancer before they finished primary anti-cancer treatments. Participants said that the most important thing in returning to their daily lives as cancer survivors was to change their perception about cancer first.
"I think that cancer is similar to being injured in a minor traffic accident" (Participant #4) "Although it is not over after surgery, I think that cancer is not different from any other disease. I'm managing well while caring for my illness like any other chronic disease." (Participant #14) Category 2. Changing the perceptions and attitudes of physicians towards cancer survivors. Participants stated that physicians needed to change their view of cancer patients from cure to care. Participants commonly said that during their active treatment course, consultation hours were insufficient and that they had only one-sided conversations with their physicians. Their oncologists authoritatively instructed the treatment modalities that the cancer survivors were going to receive without a full explanation of the rationale and possible side effects.
"The doctor just reviewed the medical chart and talked about my conditions briefly. In addition, he was indifferent to my complaints." (Participant #8) "I hope that medical staffs have a better understanding of cancer survivors, and comfort us more wholeheartedly." (Participant #11) Category 3. Changes in public awareness of cancer. Participants were not free from the social stigma about cancer that their family, neighbors, or friends continued to hold, as indicated in their perception of cancer as a death penalty or an infectious disease. Participants empathized that these prejudice about cancer were the main barriers to psychosocial adjustment. Participants suggested ensuing a public service announcement that cancer survivors were not near death, did not have a contagious disease, and were capable of performing their daily activities.
"My mental stress was greater than the physical symptoms. Occasionally, I was under the illusion that I didn't have cancer because I did not feel any critical symptoms. But my family and neighbors recognized cancer as an unconditionally fatal disease." (Participant #17) "A public advertisement is needed to change the wrong perception that cancer means dying immediately, and the ad should have contents clarifying that cancer is a chronic disease that can be treated." (Participant #1) Theme 2: Government policies about integrative management for cancer survivors Category 1. Government policies mandating cancer survivors' participation in integrative programs. The participants strongly requested for several government policies for the integrative management of cancer survivors that would ensure medical and psychological services even after cancer survivors were declared as cured of cancer. The policies they suggested included 1) building a system that provides customized care programs reflecting the needs of cancer survivors, 2) development of medical and psychological program according to the survival stage, 3) a multidisciplinary approach consisting of physicians, nurses, and psychologists for holistic care, 4) revitalization of the community cancer center near their residence, and 5) entitlement to national registration as cancer patients for benefits from Korean National Health Insurance, if the patients and their families are obligated to participate in basic education to inform the characteristics of cancer and patients.
"It is necessary for the government or hospitals to check the needs of cancer patients and give them what they require. If I must do what I need by myself, it is too hard. Furthermore, practitioners do not cooperate and annoy very much. It is good to create a total system in onestop mode to resolve the needs of cancer patients" (Participant #16) Category 2. Psychosocial supportive programs led by experts. Participants experienced anxiety and fear about recurrence or death from initial diagnosis. They felt the lack of programs to support them psychosocially. They stated that a psychological rehabilitation program should be introduced at initial diagnosis to provide cancer survivors with mental stability that ensures more self-confidence. They gave an example of organizing a self-help group program among cancer survivors managed by specialized psychologists. Category 3. Welfare policies tailored to cancer survivors. Participants were dissatisfied with the current welfare policies that concentrate on cancer patients under treatment and felt a lack of welfare benefits after the completion of active treatments. Therefore, they wanted the government to formulate welfare policies tailored to cancer survivors. The representative welfare policies that participants wished for were as follows.
"I would like for cancer survivors to be assigned disability ratings depending on the type of cancer and to be given extra benefits according to the physical discomforts they experience." (Participant #16) "I want to be assigned a public caregiver to help with the housework after chemotherapy." (Participant #12) "Private lessons and mentoring services are needed for children of cancer survivors who cannot afford educational expenditures." (Participant #12) "I wish for a government policy of childbirth support for young cancer survivors." (Participant #17) Theme 3: Preparing for cancer survivors' future Category 1. Career development and exploration. Participants who were primarily responsible for their household revealed that they struggled with economic burden due to difficulties in maintaining their full-time job during treatment. Other participants not primarily responsible for household wanted to reinforce their self-identity through vocational activities. Therefore, career development and exploration were important issues for cancer survivors to prepare for their future. Participants reported that career development and exploration programs could be helpful for cancer survivors by building their presence, reducing their pressure as head of household, preventing depressive mood disorders, and inspiring a sense of accomplishment.
Participants reported the following: "I was responsible for earning a living for treatment periods and struggled with economic burden due to difficulties in maintaining a full-time job during treatment." (Participant #15) "I really want to work. If I work, I can forget my disease. Also, work can prevent depression. Eventually, I can feel joy, worth, and a sense of accomplishment." (Participant #13) Category 2. Planning customized career options including reinstatement relative to stages of cancer treatment. Several participants thought that they needed information on reemployment and career exploration based on their ability and talent. In addition, they requested the advice on leisure activities like climbing hills or community service before return to work.
"I want to be busy, but I cannot work much because I lack physical energy. To recharge my batteries, I usually climb a hill in the morning. That helps me a lot. After a little while, I plan to do volunteer work" (Participant #11)
Discussion
This study was conducted to directly explore the difficulties in their daily lives and issues facing Korean cancer survivors so as to offer assistance to them. In addition, through focus groups, we aimed to identify cancer survivors' unmet needs in depth by collecting patients' voices and experiences. The results largely revealed three major themes: 1) Shifts in what cancer connotes, 2) establishment of government policies regarding integrative management for cancer survivors, and 3) cancer survivors' futures.
First, participants emphasized the need for change in the widespread perception of cancer. Nowadays, cancer has begun to be recognized as a chronic disease requiring long-term management and care because of remarkably improved survival, and not as a matter of life or death. However, this study reveals that in Korea, a cancer diagnosis still causes a serious shock that results in the cancer patient being pronounced dead to themselves and the people around them. Several participants were told by their neighbors that cancer might be a contagious disease and were branded as personae non gratae. Negative public awareness of cancer and/or cancer patients in Korea can be found in a previous study [11]. It is assumed that such a social prejudice towards cancer has been formed because: 1) Overall cancer survival had been low in the past; 2) a number of storylines of TV dramas or movies used to describe the direct connection between cancer and death; and 3) people often thought that cancer might be the price that cancer patients pay for their past misdeeds. Therefore, the widespread prejudice about cancer should be addressed by education, media, and public relations in order to change the perception of cancer by cancer survivors themselves; all medical staff involved in the diagnosis and treatment of cancer; family members of cancer patients; and the general public. Although cancer should be recognized as a chronic disease, cancer survivors are distinguished from other patients with chronic illness because they consistently suffer from physical discomforts and delayed sequelae such as fatigue, pain, and lymphedema for several months or years after active anti-cancer treatments. In addition, many cancer survivors suffer from mental disorders such as anxiety, depression, and fear of recurrence or death. Due to these problems, the overall HR-QOL of cancer survivors is markedly lower than those of patients with other chronic illnesses [12], suggesting that the perceptions of cancer survivors should not be equated with those of other chronically ill patients.
Many participants experienced discomfort upon receiving medical consultations from physicians. They complained of physicians' authoritative attitudes toward them and indifference about caring late sequelae related to anti-cancer treatments and mental problems. Lack of information about self-care management from clinicians is commonly found in the previous studies as unmet needs of cancer survivors [13,14]. This implies that there is a discrepancy in cancer care between physicians' thoughts and cancer survivors' expectations. Therefore, all medical practitioners involved should pay attention to cancer survivors' unmet needs with the aim of providing individualized care.
In the United States, the Institute of Medicine and National Research Council emphasized that cancer survivors should be provided with a survivorship care plan, including 1) specific information about the timing and content of recommended follow-up, 2) recommendations regarding preventive practices and how to maintain health and well-being, and 3) availability of psychosocial services in the community [15]. Based on this report, a number of long-term follow-up models for cancer survivors have been applied in clinical practice, including sharing the medical information of cancer survivors between oncologists and primary care providers, plans for follow-up, monitoring of late toxicities, and education for health promotion [15][16][17]. Medical staff involved in cancer survivorship in the United States are educated about these models in the refresher training courses. In contrast, little education and training for the management of cancer survivors is provided to medical staff in Korea. To help medical staff better understand cancer survivors' needs and to educate them about the integrative care to support cancer survivors' needs, it would be effective if this education course and regular curriculum are prepared in the university or refresher training process.
Integrative management programs for cancer survivors should be developed to simultaneously support medical, psychological, and social services according to the three stages of cancer survival: 1) acute stage, 2) extended stage, and 3) permanent stage [18].
Participants reported that after completing active anti-cancer treatments, they realized the importance of information about the inherent characteristics of cancer, treatment-related toxicities, cancer patient-specific diet, and psychological support from the time of initial diagnosis even though they were not concerned about these aspects during active treatment period. Therefore, at the acute stage, education for cancer patients is needed to inform them about the process of anti-cancer treatments, expected outcomes, possible treatment-related side effects, management of side effects, and diet planning. This education is provided by a multidisciplinary team consisting of oncologists, nurse practitioners, and nutritionists. Psychological services should be accompanied to decrease depression or anxiety for both cancer patients and their caregivers, because caregivers also experience them [19]. Therefore, we suggest that in order to implement an integrative management program from the time of diagnosis, a government policy should be established to ensure that national cancer registration allowing patients to benefit from greater health insurance coverage should require both patients' and care-givers' participation in integrative education/management programs along with cancer diagnosis as a prerequisite, as was requested by the participants during interviews.
At the extended stage, the character of cancer changes from an acute to a chronic illness. Many cancer survivors experience long-term physical and psychosocial sequelae at this stage. However, according to the participants in this study, supportive care for cancer survivors is insufficient. Medical supportive care at this stage includes smooth consultations with other departments or primary healthcare providers regarding the management of treatment-related sequelae and the provision of various rehabilitation therapies such as physiotherapy, occupational therapy, and movement therapy for physical adjustment. At this stage, a psychological rehabilitation program should aim to restore cancer survivors' self-esteem and family relationships. When cancer survivors return to their home and society at this stage, the community cancer center near their residence would be the optimal place to offer integrative management programs, even though many cancer survivors are unaware of community cancer centers because of insufficient publicity.
At the permanent stage, long-term survival is expected, along with low possibility of recurrence. Cancer survivors usually face an economic burden due to job loss during anti-cancer treatment courses or employment difficulties. One of the most important barriers to cancer survivors returning to work is negative public attitudes [20]. Study participants requested various welfare policies tailored to their need, including rating the degree of disability, mentoring services for their school-aged children, dispatch of public caregiver for helping housework, and childbirth support services. These requests can be implemented through utilizing existing welfare policies. For example, The Welfare Law for Persons with Disabilities, enacted in 1981 by the Korean government, stipulates the prioritization of the use of government facilities or other public organizations, disability allowances, rehabilitation counseling and measures for admission to institutions, grants for the educational expenses of children, support of helpers for post-natal care, and support through an activity-supporting allowance. Some of these policies are also aimed at cancer survivors, although they have not been widely publicized. Therefore, it is necessary to promote and improve existing welfare policies to better ensure their availability.
Psychosocial adjustment of cancer survivors implies preparations for the future. Cancer survivors, particularly at a socially active age, often suffer from extreme stress and lose the will to live because they cannot maintain a full-time job and fulfill their family responsibilities. Therefore, through a return to work, cancer survivors can recover economic stability, resume their ordinary lives, and regain self-esteem [21,22]. A previous study reported that approximately half of cancer survivors could not continue vocational activities during or after anticancer treatments [23]. Park et al. reported that cancer patients had a 1.56 times higher risk of job loss after diagnosis than the general population [24]. Cancer survivors fear job loss because their functional activities decline owing to sustained fatigue and physical sequelae, leading to growing state expenditures [25]. Yet, there is no agreed notion of return to work for cancer survivors in Korea. Therefore, the notion of return to work for cancer survivors should be defined before government policies are established. Career development and exploration programs should then be developed to help cancer survivors experiencing job loss to return to work. A vocational rehabilitation program tailored to cancer survivors should also be provided.
This study has several limitations. First, because this is an exploratory study, its results cannot be generalized. Second, we recruited participants irrespective of site of primary tumor, survival period, and age. Cancer survivors' needs might differ by age, gender, and survival period. Therefore, further studies are needed to address these limitations.
The significance of this study is highlighted by the fact that it attempts to explore Korean cancer survivors' unmet needs and concerns. This study can serve as a basis for creating government policies on the integrative management of cancer survivors and for the development of an integrative program. Finally, because this study suggests ways for cancer survivors to resume daily activities, further studies can be designed to offer practical assistance to cancer survivors. | 2020-01-18T14:03:06.835Z | 2020-01-16T00:00:00.000 | {
"year": 2020,
"sha1": "fad1df812d0c0d87a8665c3ea1d803773c049d39",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0228054&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9ba67ff800d209210aeaac3d50a56b3dac2d50a2",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
54974870 | pes2o/s2orc | v3-fos-license | Performance of Hydra Probe and MPS-1 Soil Water Sensors in Topsoil Tested in Lab and Field
Soil water sensors are commonly used to monitor water content and matric potential in order to study hydrological processes such as evaporation. Finding a proper sensor is sometimes difficult, especially for measurements in topsoil, where changes of temperature and soil water dynamics occur generally with greater intensity compared to deeper soil layers. We assessed the performance of Hydra Probe water content sensors and MPS-1 matric potential sensors in topsoil in the laboratory and in the field. A common soil-specific calibration function was determined for the Hydra Probes. Measurement accuracy and sensor-to-sensor variation were within the manufacturer specification of ±0.03 m3·m−3. Hydra Probes can operate from dry to saturated conditions. Sensor-specific calibrations from a previous study were used to reduce sensor-to-sensor variation of MPS-1. Measurement accuracy can be expressed by a mean relative error of 10%. According to the manufacturer, the application range of matric potential readings is from −10 kPa to −500 kPa. MPS-1 delivered also values beyond this range, but they were not reliable. Sensor electronics of the MPS-1 were sensitive to ambient temperature changes. Beyond instrument effects, field measurements showed substantial temperature-driven fluctuations of soil water content and matric potential, which complicated data interpretation.
Introduction
The water cycle is a complex dynamical system that includes hydrological processes in atmosphere, vadose zone, and groundwater environment [1].Understanding the spatial and temporal dynamics of water and energy transfer across the soil-atmosphere boundary layer is an important issue for hydrological sciences [2].Generally, there is a close interaction among soil temperature, precipitation, evaporation and soil water [3].Researchers interested in evaporation and related quantities typically use sensors to measure soil temperature and either water content [2] [4] or matric potential [5], or both quantities [6].
Knowing sensor performance is a prerequisite when selecting a device for a certain application.Substantial properties describing this attribute are measuring range, precision (closeness of repeated measurements under unchanged conditions), accuracy (closeness of measurements of a quantity to that quantity's true value), and temperature range and sensitivity.Sensor characteristics should also be considered when interpreting data.However, finding a proper sensor is sometimes difficult, especially for measurements in topsoil, where temporal changes of temperature and soil water content are greater than that in deeper soil layers [7].Since sensors are typically tested and evaluated in the lab, another question is how a certain device performs under field conditions.
In this study we assessed the performance of a pair of sensor types for monitoring soil water content and matric potential in a topsoil in the laboratory and on a weighing lysimeter under natural conditions. We selected the well-known Hydra Probe soil moisture sensor (Stevens Water Monitoring System, Inc., Portland, OR, USA) and the MPS-1 water potential sensor (Decagon Devices, Inc., Pullman, WA, USA). The Hydra Probe is a robust sensor that is used in various fields of application, both permanently installed and as a handheld device [8]-[12]. Regarding calibration, the following approaches exist: a default calibration is provided by the factory and consists of a unique function supposedly suitable for all soils and all individual sensors of the same type; a soil-specific calibration generates a function with soil-dependent parameter values averaging the behavior of individual sensors; a sensor-specific calibration applies only to a particular sensor and a certain soil.
Several authors studied performance of the Hydra Probe sensor in laboratory, including calibration and temperature effects [13]- [17].However, there are some inconsistent conclusions.Seyfried et al. [15] indicated that a soil-specific calibration of Hydra Probes is sufficient for most applications.In contrast, Evett et al. [13] found a larger inter-sensor variability in soil, and Vaz et al. [17] reported a coefficient of variation of repeated measurements in core samples of 8.6%, which the authors assumed to arise from sensor electronics and oscillation frequency, probe geometry, and sensitivity to soil heterogeneities and air gaps.Moreover, there is a lack of information about temperature effects in natural conditions, especially regarding diurnal fluctuations of soil temperature and soil water content.The MPS-1 was presumably suitable for measuring matric potential in topsoil due to its design and its attributes, particularly the comparatively large measuring range of −10 kPa to −500 kPa [18].Since default calibration has proven to be improper, sensor-specific calibrations are recommended for the MPS-1 [19] [20].Furthermore, Malazian et al. [19] reported small and inconsistent temperature effects at a constant matric potential of −25 kPa, which seems insufficient regarding the large measuring range.They also compared measurements from field-installed MPS-1 to tensiometer readings as reference quantities; however, the range of matric potential was only between zero and −60 kPa, and the correlation was not very good.Beyond that, little is known about the performance of MPS-1 in natural conditions, especially regarding temperature sensitivity at small matric potential values.
General questions that are addressed in this study refer to calibration and accuracy, measuring range and temperature effects of HP and MPS-1 in order to enhance data interpretation when using both sensors in topsoil under natural conditions.
Materials and Methods
Three sensor pairs, each with one Hydra Probe and one MPS-1 (HP-MP), were tested first in the laboratory and afterwards in a lysimeter at the experimental station of the University of Natural Resources and Life Sciences, Vienna (BOKU) in Groß-Enzersdorf, Austria.
Sensor Specifications
The Stevens Hydra Probe measures soil water content θ (m 3 •m −3 ), dielectric permittivity ε R,TC (dimensionless), electrical conductivity EC (S•m −1 ), and soil temperature T s (˚C) [21].The main structural components of the Hydra Probe are a tine assembly and a body housing all electrical components including a thermistor for measuring soil temperature.The tine assembly consists of a base plate that is 25 mm in diameter and four 58 mm long metal rods, whereof three rods form an equilateral triangle around the center rod.The tines serve as wave guides for a 50-MHz-signal that is generated in the probe body.When the tines are pushed into soil material, the probe measures the behavior of an electromagnetic wave in the soil in between the conductors.The probe converts the signal response into the dielectric permittivity [21].Hydra Probe characteristics according to the manufacturer are indicated in Table 1.
The MPS-1 measures soil matric potential ψ m (kPa) in the range of −10 kPa to −500 kPa (pF 2 to pF 3.71).Each sensor consists of two circular ceramic disks with a diameter of 3.2 cm mounted between a shared circuit board and a perforated stainless steel screen.The sensor measures the water content of the ceramics with a frequency domain principle, and converts it to an analog output signal from 0.5 V (dry) to 0.8 V (wet) [18].When a sensor is installed in soil, the matric potential of the water inside the ceramic disks reaches equilibrium after some time by water uptake or release.Consequently, a relationship between the sensor signal and the matric potential of the surrounding soil can be found.MPS-1 characteristics according to the manufacturer are indicated in Table 2.
Data Acquisition
A telemetry system from Adcon Telemetry GmbH (Klosterneuburg, Austria) was used for data logging and data transmission.So-called Remote Terminal Units (RTUs) collected and transmitted data from Hydra Probes and MPS-1.The logging interval was set to 1 hour.Readings were temporarily stored in another RTU, and transmitted via UHF to a central logger.From there, data were transferred to a server and made available via Internet access [22].Digital versions of Hydra Probes that output T s and ε R,TC via SDI-12 protocol were used.Three Hydra Probes were connected to an RTU via an adapter.Three MPS-1 sensors were linked to the telemetry system with a standard interface from Decagon Devices, Inc.The interface supplied the sensors with the required stabilized excitation voltage of 2 V to 5 V DC and transmitted the sensor output to an RTU [18].
Sensor Calibration
Generally, a sensor is a device that converts a physical phenomenon into an electrical signal; hence, a calibration function is necessary to convert sensor readings into physical quantities. Usually, the parameters of the calibration function are determined by minimizing the deviations between calibrated and real values, thereby optimizing accuracy. Therefore, a proper calibration is expected to increase sensor performance considerably [15] [17] [19]. A sensor-specific calibration takes into account sensor-to-sensor variations usually originating from marginally different sensor materials and electronics. A calibration may also be necessary to compensate for the temperature sensitivity of sensor electronics. The Hydra Probe requires a soil-specific calibration considering its mineral and organic soil composition [15]. Before converting the raw relative permittivity values εR (dimensionless) of the Hydra Probe into soil water contents θ (L³·L⁻³), the readings must be corrected for temperature [15] [21]. The standard function for temperature compensation is given as Equation (1) in [23], where T is the temperature in ˚C and εR,TC is the temperature-corrected relative permittivity ([εR,TC] = 1).
For the soil-specific calibration we used a standard function (Equation (2)) with two parameters A and B to convert the temperature-corrected real dielectric permittivity εR,TC into soil water content θ (m³·m⁻³). The MPS-1 was tested by Malazian et al. [19] in the lab. They reported a highly nonlinear sensitivity, a large sensor-to-sensor variation, a good consistency of sensor readings, and minor hysteresis and temperature effects. Since the default calibration was inaccurate, they calibrated each sensor specifically in a pressure plate apparatus from −10 kPa to −400 kPa, which is the only suitable method for that range. For this study we used sensor-specific calibration functions from a previous study (also determined by means of a pressure plate apparatus) [20].
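To illustrate how such a two-parameter calibration can be fitted, a minimal sketch is given below. The square-root form θ = A·√εR,TC + B is an assumption here (the exact form of Equation (2) is not reproduced in the extracted text), and the calibration points are hypothetical placeholders for the sampled water contents and the averaged permittivity readings, including one point for full submersion in water.

```python
# Minimal sketch of a soil-specific Hydra Probe calibration fit.
# The square-root form theta = A*sqrt(eps) + B is an assumption; the
# calibration points below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

eps_RTC = np.array([25.0, 18.0, 12.0, 7.0, 80.0])   # last point: full submersion in water
theta   = np.array([0.45, 0.36, 0.25, 0.12, 1.00])  # sampled water contents, m3/m3

def calib(eps, A, B):
    """Two-parameter calibration function (assumed square-root form)."""
    return A * np.sqrt(eps) + B

(A, B), _ = curve_fit(calib, eps_RTC, theta)
print(f"A = {A:.4f}, B = {B:.4f}")

# Apply the fitted function to convert a sensor reading into water content.
print(f"theta(eps = 20) = {calib(20.0, A, B):.3f} m3/m3")
```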
Inter-Sensor Variability and Sensing Volume of the Hydra Probes
To assess variations between the three Hydra Probes, ε R was measured with three replications in air and tap water with low electrical conductivity EC = 0.03 S•m −1 , eliminating any critical aspects associated with soil.After measuring in air, the sensor rods were slowly moved towards the water table.Since the sensor readings did not change until the tops of the tines contacted the water table (data not shown), it was evident that the electromagnetic field of the sensor did not extend considerably beyond the tines.While slowly immersing the sensor rods in water, the measured dielectric permittivity changed linearly with penetration depth, which is consistent with results of Loiskandl et al. [24].The readings at full submersion were later used for the sensor-specific calibration.
Temperature Effects of the MPS-1
For the MPS-1 Malazian et al. [19] reported small and inconsistent temperature effects at −25 kPa.Since temperature fluctuations of ψ m were assumed to be larger at small values, we performed a simple test to get an idea of possible temperature effects on the instrument.We watered the three MPS-1 (#1, 2, and 3) to a certain ψ m and sealed them with thin rubber caps to avoid evaporation from the sensor ceramics.Sensor output at a temperature of 24˚C in the air-conditioned lab, at 6˚C in a climatic chamber, again 24˚C, 6˚C and 24˚C was recorded, where each temperature condition was triply measured with one hour time interval in between.This added up to 3 × 3 readings for 24˚C and 2 × 3 readings for 6˚C for each sensor.Moreover we measured in tap water at 24˚C (only one reading) in two ways, submerging only the ceramic discs of the sealed sensors and the whole sensor.
Soil-Specific Calibration
The MPS-1 were coupled with the three HP sensors (pairs HP 1-MP 1, HP 2-MP 2, and HP 3-MP 3) and installed in a 6 cm layer of 2 mm-sieved loam soil (33% sand, 46% silt and 21% clay, USDA texture) filled into a small plastic box (16 cm × 26 cm).The topsoil material had been taken a few meters apart from the lysimeter cited above.Soil and its packing in the box were similar to the calibration procedure of Nolz et al. [20].The rods of the Hydra Probes with a length of 57 mm were completely inserted into the soil vertically, measuring the entire depth of the layer.The MPS-1 sensors were bedded into the wetted soil in a horizontal position with the center of the ceramic disks with a diameter of 3.2 cm in the middle of the soil layer.Considering the thickness of the soil layer of 6 cm, the disc was covered on top and at the bottom at least 1.4 cm.Distances between the probes were <10 cm.After initial wetting and sensor installation the soil had time to dry via evaporation at laboratory conditions (air temperature: 18˚C to 22˚C, relative humidity: 60% to 70%), and to consolidate.Since the Hydra Probe responds to the average water content within the sensing volume-independent of the distribu-tion of water within that volume [24]-we could neglect the influence of the vertical water content distribution in the soil layer on average θ determined with the calibration equation.After the initial wetting-drying-cycle, three more cycles were initiated for which ψ m , ε R,TC , and T s were measured continuously.Wetting to full saturation was done gradually within few hours from the bottom, so that air had time to escape from the soil.Dry bulk density and water content for sensor calibration were determined by means of four 35 cm 3 core samples taken during the third drying-cycle at four different soil moisture conditions from wet to dry.The height of the samples comprised the full thickness of the soil layer, their dry bulk density was ρ d = (1.32 ± 0.05) g•cm −3 .Water content θ was determined by weighing each sample, drying at 105˚C until mass remained constant, weighing again, and dividing the volume of water by the total sample volume.ε R,TC -readings from the 3 Hydra Probes were averaged for each θ step and the parameters A and B of Equation ( 2) were fitted by minimizing the sum of squared deviations between volumetric water content from soil sampling and full submersion in water (5 points) and the function values θ (ε R,TC ) with a standard software (Table Curve 2D v5.0).
Experimental Setup in the Field
After the laboratory experiments, the three sensor pairs were installed 2011 in the upper soil layer of a large weighing lysimeter [25] after harvest of spring barley from August 18 (day of year DOY 230) to November 21 (DOY 324).Soil texture was loam with 28% sand, 49% silt and 23% clay; dry bulk density determined from three 200 cm 3 core samples taken nearby was ρ d = (1.36 ± 0.16) g•cm −3 .In order to get a well-defined measuring zone of the topsoil layer, the Hydra Probes were pushed into the undisturbed soil vertically, measuring from soil surface to a depth of 57 mm (length of sensor rods).In that way they can be used for monitoring, but also for manual measurements along transects, e.g. to determine near-surface soil water dynamics of different tillage systems, or to determine spatial variability of soil water content.Generally, inserting the rods downwards is not typical for field studies, because the large head of the sensor shields the soil below from precipitation, and it may disturb evaporative and soil heat fluxes.However, vertical placement is applied for certain tasks focusing on water content in the topsoil layer [8] [12].For the MPS-1 a vertical slot of few millimeters thickness was made with a knife with a rounded blade and soil was wetted.The sensors were put into the prepared slot and the soil was carefully compacted by hand, aiming at a proper contact between the soil and the sensor material without disturbing the structure of the surrounding soil.The center of the ceramic disks rested at a depth of 3 cm.Distances between each Hydra Probe and the respective MPS-1 were more or less 10 cm, and the sensor pairs were installed <1 m apart.Soil water content and matric potential changed under natural conditions due to rainfall R and evaporation E, determined daily from lysimeter measurements [25].Several wetting-drying-periods were monitored by continuous measurements of ψ m and ε R,TC .Due to loss of Hydra Probe temperature readings, soil temperature data T s in 5 cm depth (−5 cm) and air temperature T a 5 cm above ground (+5 cm) were taken from separate temperature sensors a few meters distant.
Inter-Sensor Variation in the Laboratory
Hydra-Probe (HP) readings of ε R,TC in air at T = 24˚C are shown in Table 3.Standard deviations SD of repeated measurements representing precision were equal or smaller than the sensor specification of ±0.5 (Table 1).The inter-sensor variation of the readings in water were within ±0.3, thus also smaller than 0.5.Only the readings of the three sensors in air had a variation in the order of ±1 (Table 3).However, using a default calibration such a difference would result in a different water content of ±0.03 m 3 •m −3 in the dry range, which is of the same magnitude as the measurement accuracy specified by the manufacturer (Table 3).Under wet conditions ε R,TC -variation of ±1 is negligible.The three wetting-drying cycles in the soil box lasted 20, 30, and 20 days, respectively (Figure 1).The sensors reacted immediately on wetting; after start of the wetting phase it took several hours until the readings reached their maximum.Although all cables were fixed to the box, measurements were sensitive to disturbances that were accidentally induced by touching the cables or moving the box, e.g. on day after start (DAS) 8 and 64.Readings of the three Hydra Probes were between ε R,TC = 23 and 27 at saturation and between 4 and 6 at dry conditions (Figure 1(a)).SD ranged from 2.8 to 0.6, which was considerably larger than the variations measured both in air and water (Figure 1).Apparently, the variations did not arise from the sensors themselves, but from soil conditions such as fine cracks, air gaps within the measuring area of a sensor, or heterogeneous water distribution.Since SD was greater after wetting and decreased successively during drying (Figure 1(b)), structural soil heterogeneities such as cracks or macropores appeared to have a greater effect on inhomogeneous water distribution at or near saturation.We concluded that the differences between replications were rather due to soil characteristics than sensor variability.Hence, we decided to determine a common (soil-specific) calibration function for all three Hydra Probes for both laboratory and field application.
The MPS-1 reacted also immediately to wetting (Figure 1(c)).Sensor output ranged from 750 mV to 770 mV at saturation and from 560 mV to 580 mV at dry conditions.In total, SD ranged from 12 mV to 5 mV.In wet soil MPS-1 readings needed some time to approximate and reach equilibrium with soil water.SD decreased for a few days after wetting (Figure 1(d)).During this phase sensor output was not entirely reliable.In the course of the subsequent drying the output values decreased continuously to a certain value-apparently representing the measuring range-and then decreased only slightly or even became larger again.At the same time the readings diverged, thus SD became larger (this trend is contrary to Hydra Probe readings as mentioned above).The effect can be explained through the pore-size distribution of the sensor ceramics, which was obviously optimized for a certain measuring range, hence both hydraulic contact and equilibrium status between sensor and soil is not guaranteed anymore.
Sensor Calibration
Fitting the average readings of the three Hydra Probes to the respective sampled soil water content θsample by means of Equation (2) resulted in A = 0.1382 and B = −0.2426. The resultant soil calibration is similar to the recommended factory calibration for silt loam A-horizons with A = 0.1226 and B = −0.1903 [23] (Figure 2(a)), which is assumed to provide good performance in many mineral soils [15]. The calibrated water content θHP1 overestimated θsample, while θHP2 underestimated it and the θHP3-values were in between (Figure 2(b)). Figure 3 shows calibration functions of the utilized MPS-1 for a similar soil according to Nolz et al. [20]. It has to be stated, however, that even by using those sensor-specific calibrations a mean relative error of 10% is supposed to remain, and that due to the form of the calibration function a small change of the sensor output causes a great change of ψm when the soil becomes dry.
Soil Water Dynamics and Measuring Range in the Laboratory
Calibrated θ ranged from 0.50 m 3 •m −3 to 0.05 m 3 •m −3 during the three wetting-drying phases in the laboratory (Figure 4(a)).Estimating porosity n via the relation n = 1 − ρ d /ρ s with ρ d = 1.32 g•cm −3 and particle density ρ s = 2.65 g•cm −3 resulted in n = 0.50.This value was not entirely reached by HP 3, maybe due to air gaps or soil compaction within the sensing volume.Under dry conditions temperature-related fluctuations became remarkable, especially from day after start (DAS) 30 to 45 (Figure 4(a)).
Each MPS-1 started with a certain ψ m value between zero and −6 kPa.However, these measurements were not consistent, because in some cases the values changed with decreasing water content, while in other cases they did not (data not shown).Since it seemed rather impractical to define a specific starting point for each MPS-1, we decided to neglect readings larger than −10 kPa.Apparently, this makes the sensor unsuitable for measuring dominant flow processes such as infiltration.Nevertheless, measuring range depends on pore-size distribution within the sensor ceramic; hence, single sensors might be appropriate to measure also between zero and −10 kPa.In dry soil all sensors delivered values below the specified minimum value of −500 kPa (down to −1.2 MPa).But these data were also not reliable, because after a sensor reached its limit, the output value decreased to some extent, and fluctuated correlating with temperature around an individual level (Figure 4(b)).Soil temperature ranged from 17˚C to 27˚C with distinct diurnal variations (Figure 4(c)).
Soil Water Dynamics and Measuring Range in the Field
During the field experiment soil water content and matric potential changed evidently due to several rainfall events and considerable evaporation (Figure 5).We measured a range for θ from 0.45 m 3 •m −3 to 0.17 m 3 •m −3 (Figure 5(a)), thus smaller than at laboratory experiment.θ values from HP 1 and HP 2 were similar near saturation, but diverged until HP 1 delivered smaller values than HP 2 when getting drier.In the laboratory, on the contrary, HP 2 was always below HP 1 (Figure 4(a)), so it is evident that these differences were not caused by systematic errors from calibration.Due to a software failure there were no data from HP 3. Fluctuations of HP-readings as a result of temperature effects were distinctive, so small rainfall events, e.g. 4 mm on DOY 257 were hardly reflected in θ-measurements.ψ m was similarly difficult to interpret because of oscillations, espe- cially when the soil got dry, e.g. from DOY 250 to 260 (Figure 5(b)).Compared to the laboratory results, the range of ψ m was generally smaller with a minimum of −750 kPa.Under wet conditions MPS-1 provided evident ψ m -readings only when greater than −10 kPa, which corresponded well with the findings from the laboratory experiment.While from DOY 230 to 240 MP 2 gave the smallest ψ m , MP 1-values were lesser than the others from DOY 245 to 255.Similar to the θ measurements, the differences were not supposed to be related to sensor calibration, but rather to soil heterogeneity and inhomogeneous water distribution.On DOY 316 and 317 readings of both sensors (measuring from zero to −6 cm) were obviously incorrect due to freezing.
Soil temperature in 5 cm depth (soil −5 cm) was between 2˚C and 28˚C, temperature fluctuations above ground (surface +5 cm) were substantially larger.This illustrates the challenging conditions under which the sensors were operated (Figure 5(c)).
Temperature Effects in Laboratory and in the Field
Generally, temperature effects are related to both sensor electronics and soil hydraulic properties, making temperature compensation a very complex problem [17].As mentioned above, we used Hydra-Probe readings with a temperature correction from zero to 35˚C (ε R,TC ) [15] [23].Table 4 exemplifies calculated ε R,TC values (Equation (1)) for different θ and temperature that occurred in laboratory as well as in field.One can see that the maximum difference between zero and 35˚C was 0.01 m 3 •m −3 at dry and 0.03 m 3 •m −3 at wet conditions, giving a slope Δθ/ΔT of 0.0003 m 3 •m −3 •˚C −1 and 0.0009 m 3 •m −3 •˚C −1 , respectively.Seyfried and Grant [16] reported, that-with a change from 5˚C to 45˚C-the instrument effect was greater than 0.01 m 3 •m −3 in dry soil, and much lesser at greater θ.For a 20˚C change they found differences less than 0.01 m 3 •m −3 for all θ.In contrast to measurements in water, where θ decreases when temperature increases, temperature effects in soils are more complex.Seyfried and Grant [16] found a linear temperature response in different soils of ±0.03 m 3 •m −3 for temperatures ranging from 5˚C to 45˚C, and thus a slope Δθ/ΔT from −0.0007 m 3 •m −3 •˚C −1 to 0.0007 m 3 •m −3 •˚C −1 .Comparing this slope with those above derived from the values in Table 4 indicates that the temperature compensation (Equation ( 1)) accounts for both instrument and soil temperature effects.
In contrast to findings of Malazian et al. [19], the MPS-1 showed large temperature sensitivity during our experiments.Table 5 summarises the measurements with the sealed sensors (no uptake or release of water and water vapor) at different temperatures.The sensor output at 24˚C was relatively stable during the whole measuring period, but sensor readings at 6˚C were less consistent.While sensors #1 and #2 delivered substantially greater values (20% and 60%) at the smaller temperature, sensor #3 delivered slightly smaller values (Table 5).Partly submerging of the sealed sensor into water had no effect on the sensor output, but when completely under water, the sensor readings at once decreased evidently (Table 5).Since there was no exchange of water, the readings were likely influenced by the surrounding water.Another effect was that touching the sensor body decreased sensor readings within an instant, too.Evaluating these effects is beyond the scope of this paper, but it seems that the electromagnetic field of the sensor expands beyond sensor ceramics, which is problematic regarding calibration and data interpretation.
Looking thoroughly on the curves of Figure 4(a) (box experiment), θ showed oscillations of ±0.03 m 3 •m −3 when the soil became dry (e.g.near DAS 40).This can neither be explained by instrument nor by soil temperature effects as discussed at the beginning of this section.The corresponding ψ m values varied considerably ±100 kPa, but were already out of the defined measuring range of the sensors (Figure 4(b)).Figure 6 displays details of Figure 4 and Figure 5 in order to illustrate and compare temperature effects in the laboratory and in the field.As expected, Hydra Probe as well as MPS-1 measurements in the laboratory showed no substantial fluctuations in the wet range, with a soil temperature between 18˚C and 22˚C (Figure 6(a)).During the field experiment day-night-fluctuation of soil temperature in a selected period was between 13˚C and 20˚C (Figure 6(b)).Apparently, temperature-driven variations of soil water content and matric potential were predominant under field conditions.Despite temperature correction, θ changed during a day around ±0.03 m 3 •m −3 in the wet range, which is larger than any temperature effect estimated from ε R temperature response.Regarding fluctuations in ψ m -data it was impossible to separate real soil water dynamics from instrument temperature effects without detailed knowledge about the latter.Anyway, the fluctuations of ψ m were greater than 30 % and temperature differences were smaller (Figure 6(b)) compared to the MPS-1 temperature test (Table 5).Consequently, the shown variations arose likely to the most part from temperature-driven changes of soil water content itself [4] [26].The minimum daily θ and its corresponding maximum ψ m usually occurred at the minimum temperature at about 7 a.m., while vice versa maximum θ and minimum ψ m occurred at the maximum temperature at about 3 p.m. (Figure 6(b)).The daily mean values of θ and ψ m were typically represented by values measured between 10 and 11 a.m.This has to be taken into account when soil water is not monitored continuously.Apart from that, further studies are required with respect to water and water vapor transport to explain the observations.Another open question addresses the temperature sensitivity of the MPS-1.
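As a practical illustration of the last point, the sketch below approximates the daily mean from an hourly record by picking the reading logged in the late morning. It assumes a pandas Series with an hourly DatetimeIndex; the variable names and the chosen hour (10 a.m., following the observation above) are illustrative, not part of the original analysis.

```python
# Sketch: approximate the daily mean soil water content by the 10 a.m. reading,
# following the observation that the 10-11 a.m. window represents the daily mean.
import pandas as pd

def daily_mean_proxy(series: pd.Series, hour: int = 10) -> pd.Series:
    """Return one value per day: the reading logged at the given hour."""
    picked = series[series.index.hour == hour]
    return picked.groupby(picked.index.date).first()

# For comparison, the true daily mean of the hourly record would be:
# true_mean = series.resample("D").mean()
```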
Conclusions
Performance of Hydra Probe water content sensors and MPS-1 matric potential sensors was tested in a near-surface soil layer in the laboratory and under natural conditions in the field.
A common soil-specific calibration function was determined for the Hydra Probes, but a default calibration from the literature was also suitable. Measurement accuracy was ±0.03 m³·m⁻³, which is in accordance with the manufacturer specification. Sensor-to-sensor variation was considerable, but within the accuracy range. Generally, Hydra Probes measure from dry to saturated conditions. Soil water content ranged from 0.50 to 0.05 m³·m⁻³ during the laboratory experiment and from 0.45 to 0.17 m³·m⁻³ in the field. Temperature sensitivity was minor in the laboratory, but substantial in the field. However, in the natural environment the diurnal fluctuations of the readings, with their minimum at the minimum temperature and vice versa, arose for the most part from temperature-driven changes of the soil water content itself rather than from instrument effects.
Sensor-specific calibrations from a previous study were used to reduce the inter-sensor variation of the MPS-1. Measurement accuracy can be expressed by a mean relative error of 10%. According to the manual, the application range of matric potential values is from −10 to −500 kPa. The MPS-1 also delivered readings outside this range, but they were not reliable. Sensor electronics of the MPS-1 were prone to temperature changes, and sensor output was affected by the surrounding medium. In this regard, further studies are recommended. Fluctuations of matric potential in the field were major, with an opposite trend to water content. Generally, soil water dynamics in the field were well described by both sensor types, but the data showed substantial temperature-driven fluctuations that made data interpretation difficult to some extent. The daily mean values of water content and matric potential were typically represented by values measured between 10 and 11 a.m.
Figure 2. Hydra Probe user calibration: (a) Water content versus dielectric permittivity with calibration functions, and (b) Calibrated water content with measuring accuracy according to the manufacturer.
Figure 4. Soil water dynamics during the laboratory experiment. The dotted line illustrates the limit of operation specified by the manufacturer.
Figure 6. Soil water dynamics and soil temperature (a) in the laboratory; and (b) in the field.
Table 3. Hydra Probe readings of εR,TC at T = 24˚C in air and water (mean ± standard deviation).
Table 4. Hydra Probe: temperature-corrected water content θ(εR,TC) for different water contents and temperatures T.
Table 5. ψm-values of the sealed MPS-1 at different temperatures T. | 2018-12-13T09:18:12.054Z | 2014-09-29T00:00:00.000 | {
"year": 2014,
"sha1": "f2efe5b49dde6ef52143aceb9cb402b9d1a4a21e",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=50206",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "f2efe5b49dde6ef52143aceb9cb402b9d1a4a21e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
250102612 | pes2o/s2orc | v3-fos-license | Construction of Prediction Model of Radiotherapy Set-Up Errors in Patients with Lung Cancer
Objective. This study intends to construct an error distribution prediction model, analyze its parameters, and analyze the boundary size of CTV extension to PTV, so as to provide a reference for controlling clinical set-up errors and for radiotherapy planning in patients with lung cancer. Methods. The prior SBRT set-up error data of 50 patients with lung cancer treated by a medical linear accelerator were selected, the Gaussian mixture model was adopted to construct the error distribution prediction model, and the model parameters were solved, based on which the emission boundary from CTV to PTV was calculated. Results. According to the analysis of the model parameters, the spatial distribution of set-up errors is mainly concentrated in the direction of four central points (μ1 ~ μ4), and the error is smaller in the Vrt direction (−0.991 ~ 2.808 mm) and Lat direction (−0.447 ~ 1.337 mm) and larger in the Lng direction (−1.065 ~ 4.463 mm). The possibility of offset of set-up errors in the μ2 and μ3 directions (0.4440, 0.2198) is greater than that of μ1 and μ4 (0.1767, 0.1595). The standard deviation of set-up errors can reach 0.538 mm. The theoretical expansion boundary of CTV to PTV in Vrt, Lng, and Lat can be calculated as 1.7963 mm, 2.3749 mm, and 0.6066 mm. Conclusion. The GMM Gaussian mixture model can quantitatively describe and predict the set-up error distribution of lung cancer patients and can obtain the emission boundary of CTV to PTV, which provides a reference for radiotherapy set-up error control and tumor planning target expansion of lung cancer patients without SBRT.
Introduction
With the development of tumor radiotherapy technology in recent years, stereotactic body radiation therapy (SBRT) has gradually become the standard treatment method for inoperable patients with early non-small-cell lung cancer (NSCLC) because of its advantages of less damage, less treatment times, and promoting the reoxidation of tumor hypoxic cells [1]. As SBRT is more adopted in tumor radiotherapy, the positioning accuracy of radiotherapy has higher requirements. In order to achieve accurate radiotherapy, the key is to keep the height of the patient's posture consistent with that of positioning. In addition, in order to ensure the accuracy of radiotherapy, the postural uncertainty of patients during radiotherapy must be taken into account. When sketching and planning the target volume, it can be achieved by expanding a certain distance on the clinical target volume (CTV) to form a planning target volume (PTV), which includes organ movement and set-up errors [2]. In this study, 550 groups of set-up error data of 50 patients with lung cancer during the treatment period were collected, observed, and statistically analyzed, and the emission boundary of CTV-PTV was obtained, which provides the data basis for clinical practice.
Materials and Methods
2.1. Case Selection. 50 patients with lung cancer were treated with medical linear accelerator (Elekta Infinity), and SBRT was performed in the department of radiotherapy of Zhejiang People's Hospital from January 2020 to July 2021. The clinical data and SBRT set-up error data of each treatment were obtained by cone-beam computed tomography (CBCT) with medical linear accelerator (Elekta Infinity). Among the 50 patients with lung cancer, 36 were male, and 14 were female. The age distribution of the patients was 28-86 years old. According to the choice of chest plain scan in patients with lung cancer, conical beam CT scan was performed once a week. After scanning, CT image registration was performed and the data of lung cancer patients in three directions: vertical direction (Vrt), longitudinal direction (Lng), and lateral direction (Lat) were recorded. A total of 550 times SBRT error data were collected to build the model, and all treatments were carried out on the Elekta accelerator. After the scanning was completed, the CT image data were transmitted to the Monaco treatment planning system workstation through DICOM, the target volume was sketched by the doctor, and the plan was designed by the physicist.
Construction of Prediction Model of Set-up Error Distribution in Radiotherapy for Patients with Lung Cancer.
For the collected data samples, many uncertain factors in the actual recording process were taken into account. It is necessary to screen the data, filter out the "noise" samples, and retain the real set-up error data before building a distribution prediction model for the data samples. As a consequence, based on the 3σ principle, this study excludes the data points that do not meet this rule and constructs the set-up error distribution prediction model for the screened data. The main purpose of this study is to build a prediction model of set-up error distribution. When getting data samples, there is no real sample label to calibrate; that is, this is an unsupervised learning data set. This study uses the common unsupervised learning algorithm: Gaussian mixture clustering (GMM) to cluster the untagged data and label each data point to get the overall distribution of the data [3]. In the process of GMM, because we do not know the label of the data points, it is impossible to evaluate the clustering results. Usually, the clustering results are evaluated based on the principle of maximum intercluster distance and minimum intracluster distance. Contour coefficient is the most commonly adopted evaluation index of this kind of clustering. This study is also based on the index of contour coefficient to evaluate the result of GMM clustering.
The Concrete Process of the Prediction Model of Set-Up Error Distribution
3.1. Data Preprocessing. Considering the authenticity of the data samples, the data were screened based on the 3σ principle. First of all, the center of the sample data points was calculated. For the data in the three directions, the Vrt, Lng, and Lat values reflect the three coordinates of a data point in three-dimensional space. We calculated the cosine distance between each data point and the center point, obtained the distance value of each data point, fitted the distance data, and obtained a Gaussian distribution (Figure 1). Data points lying more than 3σ from the center were excluded, and the data points shown in Figure 2 were finally obtained. The sphere is a boundary with the center point as the sphere center and 3σ as the radius. In Figure 2, the X axis is lateral, the Y axis is longitudinal, and the Z axis is vertical.
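The screening step described above can be prototyped as in the minimal sketch below. It assumes the set-up errors are available as an (N, 3) array in mm; the file name and variable names are hypothetical, and the cosine distance follows the description in the text (a Euclidean distance would work the same way).

```python
# Sketch of the 3-sigma screening step (variable names are illustrative).
import numpy as np
from scipy.spatial.distance import cosine

errors = np.loadtxt("setup_errors_mm.csv", delimiter=",")  # hypothetical file, shape (N, 3): Vrt, Lng, Lat
centre = errors.mean(axis=0)                               # centre of the sample points

# Distance of every error vector to the centre (cosine distance, as in the text).
dist = np.array([cosine(row, centre) for row in errors])

mu, sigma = dist.mean(), dist.std()                        # Gaussian fit of the distance data
mask = np.abs(dist - mu) <= 3 * sigma                      # keep points within 3 sigma
filtered = errors[mask]
print(f"kept {mask.sum()} of {len(errors)} samples")
```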
Optimal Clustering Number Based on Contour Coefficient Index
The contour (silhouette) coefficient is the most commonly adopted evaluation index for clustering algorithms [4]. It is defined for each individual sample and is computed from that sample's mean intra-cluster distance and its mean distance to the nearest neighboring cluster. We could find the best cluster number by setting different cluster numbers and comparing the corresponding contour coefficient values. In this study, the clustering number was set to an integer in the range [2, 5], and the data points were clustered by GMM. Finally, the change of the contour coefficient value was obtained, and the result is shown in Figure 3.
When the clustering number k was set to 4, the value of the contour coefficient reached its maximum, and the curve of the contour coefficient reached an inflection point (Figure 3). As a consequence, according to this index, a clustering number of k = 4 is most suitable for this data set.
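A minimal sketch of this model-selection step is given below, using scikit-learn; the function name and random seed are arbitrary, and the data array is assumed to be the screened (N, 3) set-up errors from the preprocessing step.

```python
# Sketch: choose the number of mixture components by the silhouette ("contour")
# coefficient for k = 2..5, as described above.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

def best_k(data: np.ndarray, k_range=range(2, 6), seed: int = 0) -> int:
    scores = {}
    for k in k_range:
        labels = GaussianMixture(n_components=k, random_state=seed).fit_predict(data)
        scores[k] = silhouette_score(data, labels)
    return max(scores, key=scores.get)  # the paper reports the maximum at k = 4
```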
GMM Model for Predicting the Distribution of Set-up Errors
From a mathematical point of view, any continuous nonlinear function can be approximated arbitrarily closely by a superposition of several Gaussian distribution functions. Gaussian mixture clustering is a clustering method based on this principle and belongs to the unsupervised learning algorithms in machine learning. The GMM model uses the Gaussian probability density function to describe data quantitatively and decomposes a distribution into several components, each based on a Gaussian probability density function. Theoretically, no matter what the distribution law of the observed data set is, the real distribution can be approximated arbitrarily closely by the GMM model [6]. The distribution of the data can be expressed as P(y | θ) = Σ_{k=1}^{K} α_k φ(y | Θ_k),
where α_k represents the weight coefficient of each Gaussian distribution, with the weights summing to 1; φ(y | Θ_k) is the Gaussian distribution density with Θ_k = (μ_k, σ_k²), i.e., the k-th Gaussian distribution density function φ(y | Θ_k) = (1 / √(2πσ_k²)) exp(−(y − μ_k)² / (2σ_k²)).
In general, because the GMM likelihood function is difficult to maximize directly by expanding it and taking partial derivatives, and the resulting optimization problem is troublesome, the EM (expectation-maximization) algorithm is usually adopted to solve for its parameters. In statistics, the EM algorithm is often adopted to find maximum likelihood estimates of the parameters of probability models that depend on unobservable hidden variables, and it is an effective method for solving such hidden-variable optimization problems [4].
Results
As a consequence, we can pass the cluster number and cluster center point determined by the contour coefficient method to the GMM model, use the EM algorithm for iterative calculation, and finally get the three parameters of the GMM model, so as to build a prediction model about the set-up error distribution. The clustering effect is shown in Figure 4.
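A minimal sketch of this fitting step with scikit-learn (whose GaussianMixture runs the EM iterations internally) is shown below; the input file name and random seed are hypothetical, and the array stands for the screened (N, 3) set-up errors.

```python
# Sketch: fit a 4-component Gaussian mixture by EM and read off the three
# parameter sets (weights alpha, means mu, covariance matrices sigma).
import numpy as np
from sklearn.mixture import GaussianMixture

filtered = np.loadtxt("screened_errors_mm.csv", delimiter=",")  # hypothetical output of the 3-sigma step

gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(filtered)

alpha = gmm.weights_       # mixing coefficients alpha_k, sum to 1
mu = gmm.means_            # (4, 3) error centres in Vrt/Lng/Lat, mm
sigma = gmm.covariances_   # (4, 3, 3) covariance matrices

for k in range(4):
    print(f"centre {k + 1}: mu = {np.round(mu[k], 3)}, alpha = {alpha[k]:.4f}")
```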
The parameters of the GMM error distribution prediction model are as follows: the coordinates of each error center (that is, the mean μ of the GMM model) is shown in Table 1; the covariance matrix of the error model (i.e. the GMM model σ) is shown in Table 2; the probability of each error center (that is, the coefficient α of the GMM model) is shown in Table 3.
The distribution characteristics of the set-up errors can be obtained from the data in the tables: the errors are mainly concentrated in the direction of four central points (μ1 ~ μ4). Judging from the probability of each center, the possibility of set-up error offset in the direction of μ2 and μ3 (0.4440, 0.2198) is greater than that of μ1 and μ4 (0.1767, 0.1595). The covariance matrix (coefficient σ) of the model reflects the statistical standard deviation, which can reach 0.538 mm.
Emission Boundary of CTV to PTV
According to the formula M_PTV = 2.5Σ_total + 0.7σ_total proposed by van Herk et al., where Σ is the standard deviation of the systematic errors and σ is the root mean square of the random-error standard deviations, M is the boundary value of the PTV obtained by expanding the CTV based on the above calculation [7, 8]. The theoretical expansion boundaries in Vrt, Lng, and Lat can be calculated as shown in Table 4.
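The margin recipe can be evaluated per axis as in the short sketch below; the Σ and σ inputs are placeholders (not taken from the paper's tables) chosen only to reproduce margins of the reported order of magnitude.

```python
# Sketch of the CTV-to-PTV margin M = 2.5*Sigma + 0.7*sigma per axis (van Herk formula).
import numpy as np

Sigma = np.array([0.60, 0.80, 0.20])   # placeholder systematic SDs for Vrt, Lng, Lat (mm)
sigma = np.array([0.42, 0.54, 0.15])   # placeholder random SDs (root mean square) (mm)

margin = 2.5 * Sigma + 0.7 * sigma
for axis, m in zip(["Vrt", "Lng", "Lat"], margin):
    print(f"{axis}: M_PTV = {m:.2f} mm")
```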
Discussion
How to improve the positioning accuracy of radiotherapy and effectively reduce the set-up errors is the most concerned issue in clinic with the development of radiotherapy technology. When the set-up errors are large, it will lead to insufficient dose in the target volume and too much X-ray exposure to the normal tissue. With the application of CBCT technology, the set-up errors of patients before treatment can be corrected. However, the dose of X-rays produced by CBCT tends to increase the probability of secondary tumors [8,9]. If we can accurately predict the set-up errors of patients during each treatment, we can reduce the set-up errors of patients and minimize the frequency of using CBCT.
The results of Van's research show that the set-up errors during treatment include errors in three axial directions, both between and within radiotherapy fractions [10]. On the basis of this theory, the Gaussian mixture model is adopted to construct the error distribution prediction model from the SBRT set-up error data set collected from 50 patients with lung cancer. After analyzing the parameters, the error distribution law is obtained and the set-up error probability is predicted. The set-up errors are not only simple errors in three axial directions but also tend to be concentrated in several definite central directions in space. By calculating the coordinates and probabilities of several central points, the possible offset direction and distribution probability of each central point can be obtained.
In addition, the determination of PTV emission boundary is a key issue in tumor radiotherapy [5,11]. A reasonable PTV boundary should not only ensure the possible movement volume including the target volume but also reduce the organ tolerance of normal tissue near the target volume as much as possible. As a consequence, the set-up errors are an important factor in determining the extension distance from CTV to PTV [12]. The research results of this study show that the emission boundary of PTV should not only be considered from the three axes of Vrt, Lng, and Lat but also should be expanded comprehensively in the direction and variance of its four offset centers. It is necessary to carry out nonuniform expansion in each center offset direction and include the variance offset [6].
The set-up error prediction model constructed in this study needs to be further improved. It can only predict the overall set-up error distribution of patients but cannot accurately predict the set-up errors of patients during each treatment [13,14]. In addition, all patients are fixed in supine posture. The set-up errors of patients with other fixed positions have not been predicted by this model. In addition, only 50 cases were collected for statistical analysis, and more clinical data can be collected in the future.
Data Availability
No data were used to support this study.
Ethical Approval
The authors confirmed that the guidelines outlined in the Declaration of Helsinki were followed. The author's institution: School of Nuclear Science and Technology, University of South China had reviewed this study. | 2022-06-29T15:22:41.155Z | 2022-06-25T00:00:00.000 | {
"year": 2022,
"sha1": "aa0236fc4e82b66522cc50951268c6790557028e",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/bmri/2022/5642529.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03f3733a8e67c910ff090878527ea46ba8aa1789",
"s2fieldsofstudy": [
"Physics",
"Medicine"
],
"extfieldsofstudy": []
} |
124331380 | pes2o/s2orc | v3-fos-license | Operation Model Summary and Trend Research of the Highway Freight Logistics Park in China
Road transportation accounts for 80% of the freight market in China, so it is important to research the development of road hub logistics parks. In this paper, through an analysis of the concept and the main characteristics of the Highway Freight Logistics Park, and based on an analysis and summary of its typical operation modes, we combine the operating environment and development needs to obtain the development trend of the operation modes of the Highway Freight Logistics Park in China.
The Concept of Highway Freight Logistics Park
According to the standard "Classification and Fundamental Requirements of the Logistics Park" issued by the State Administration of Quality Supervision, Inspection and Quarantine and the China National Standardization Management Committee, the logistics park is defined as a spatial agglomeration of the logistics industry aggregated by a variety of different types of logistics facilities and logistics enterprises, with a certain scale and integrated logistics services; it is also an industry-intensive economic function area that relies on the relevant facilities in order to reduce the cost of logistics and improve service efficiency. According to function, logistics parks can be divided into: Production Logistics Park, Commerce Distribution Logistics Park, Highway Freight Logistics Park, and Integrated Logistics Park [1].
The traffic hub, also known as the transport hub, is a comprehensive facility system formed where transportation modes or transportation lines intersect; it has the functions of handling, storage, transit, development, transport, information services and others [2], and includes highway hubs, railway hubs, port hubs, aviation hubs, and complex forms. As the highway is the most important form, accounting for 80% of the proportion, this paper mainly studies the highway type.
Based on the conceptual definitions of the traffic hub and the logistics park, the Highway Freight Logistics Park can be defined as follows: it is an area of freight or logistics node agglomeration that relies on bringing numerous logistics enterprises together and on gathering logistics resources to provide one-stop service, with the goals of improving the matching efficiency of vehicle and cargo information and reducing logistics costs; with the network as the carrier, it develops as a platform for the logistics enterprise cluster and integrated logistics facilities, taking the enhancement of supply chain efficiency as its direction. In reality, the Highway Freight Logistics Park should meet the following requirements: have a certain amount of space to realize the main road transport functions of collection and distribution and an information service platform.
Main Characteristics of Operation for Highway Freight Logistics Park
The Highway Freight Logistics Park is especially important in the transportation of goods.It can improve the level of intensive degree and organization in logistics, promote the logistics development in scale and reduce the waste of cargo and reduce traffic congestion.At present, the development of China's traffic hub's logistics park mainly presents the following characteristics:
Most Parks Are Primarily Platforms and Do Not Participate in Operations
The Highway Freight Logistics Park consists of two parts. One is the owner, who provides the parking lot, warehouse, transportation channels, office space, information trading center, and other facilities, and regulates the operation of the public park environment. The owner acts as the platform and is generally not involved in logistics operations; the logistics functions mainly rely on the settled enterprises. The other part is the settled enterprises, which access vehicle or cargo information through the park platform, mainly to provide freight services.
The Park Focuses More on Long-Distance Transportation than City Distribution [3]
Logistics park is the logistics node connecting the long-distance transportation
Rent Is the Main Revenue Source, Lack of Business Income
The main revenue sources of a logistics park include: warehouse rent, equipment rental, management fees, value-added service fees, property management fees, state and local fiscal funds, tax incentives, the enterprise's own profit, rental or sale of land after appreciation (land value-added returns), and others. Because the relationship between the park owner and the settled enterprises is generally an alliance relationship and the park owner does not participate in operations, rent and management fees are currently the main revenue sources, and business income from services complementary to the park enterprises is lacking.
3) Transfar Logistics characteristics [6]
Figure 1. Transfar logistics operation mode.
Summary on Characteristics of Existing Operation Modes of Highway Freight Logistics Park
As a comprehensive loose-type management and operation mode, Transfar Logistics is a fourth-party platform. The Transfar logistics park is equivalent to a market: it gives freight players a free platform and provides sites and information for truck drivers and freight enterprises. Usually, general management is run by the freight companies themselves, and Transfar only plays the role of supervision and execution, apart from running the operations center and information center.
Affiliate Management Platform Model-KXTX Company
KXTX is a network-platform-type company, committed to bringing high-quality micro logistics enterprises together. KXTX has built highway hub parks in regions across the country, so that the small and scattered provincial line enterprises and distribution enterprises gather within the parks. Lines from each park can reach the capital and major cities in China. In addition to the high-quality special lines within the parks, there are also franchised network members and high-quality special-line members outside the parks. Through the connections between nodes, KXTX has its own ground network covering all first-tier and second-tier cities. Thus, KXTX has formed a logistics transportation network across the country (Figure 2).
At the same time, KXTX provides a unified intelligent information system for all members to make the whole transportation service visible, and requires unified settlement, ultimately forming an operational service system that integrates settlement, finance, monitoring, and other functions.
2) KXTX profit model [8]. KXTX's current revenue mainly comes from rental and software system use fees, which account for around 80% of total profits, plus management fees. Meanwhile, KXTX system customers must open a uniform settlement account when using the intelligent management system. This gathers a large amount of money that is deposited in the platform system. Through financial interest and other financial means, this money can be used to obtain better investment returns and to continue expanding the business.
3) KXTX characteristics [9]. KXTX is a fourth-party platform operation mode. The logistics parks and the network use a joint-venture form, and both use a unified image, which together forms the KXTX logistics network.
Figure 2. KXTX logistics process and pattern.
The freight enterprises on the platform also use a joint-venture form. League members are responsible for their own transportation and are free to choose deliveries, but these freight companies need to use the unified image and the unified settlement on the KXTX platform. Therefore, compared with Transfar, KXTX's main profit sources, besides rent and software system fees, are fixed management fees charged as a proportion of freight charges and the precipitation of settlement funds. This opens up new profit space.
Unified Operation Platform Mode-ANE Logistics
ANE's business system has two cores: one is the special line, the other is the store. The business entity of ANE is the special lines, which form a network; the stores are ANE's receiving and distribution system. The two systems are operated through a management platform. Each line has a fixed schedule that marks the arriving time and delivering place, which is a complete commitment to customers. ANE has built provincial center platforms all over the country, including large freight stations, timed buses, regional secondary distribution centers, and a large number of outlets. Within each provincial platform, the organizational structure includes a regional secondary distribution center, timed services, and some small stations. Proprietary lines and hubs integrate a wide-coverage terminal distribution network by way of joint ventures (Figure 3).
2) ANE Logistics profit model Besides rent, ANE Logistics' profits also include line joining fees, vehicle management fees, ticket fees and other property income. 3) ANE Logistics characteristics [11] ANE Logistics uses a unified enterprise platform operation mode: the operation of every node and line must follow ANE's requirements.
That is to say, no matter how many node and line operators join, they operate only under the ANE Logistics image. In particular, ANE is an express-network model with strict rules, and all stores must follow ANE Logistics' rules; for example, deliveries must use the unified charge, and any privately set delivery fee is strictly forbidden and leads to heavy fines. ANE Logistics takes over line management from the joining partners, while the partners can still participate in management and share the returns. Every line has a fixed schedule that marks arrival times and delivery places, a firm commitment to customers. As a result, in addition to the fees described above, such as rents, joining fees, system fees and the funds deposited through unified settlement, this profit model also includes unified city goods collection and distribution fees, as well as the brand value created by attracting the demand of a growing business.
Summary
The highway freight logistics parks discussed above mainly serve long-distance transportation and city distribution, and they are built along traffic arteries and at junction areas so that they can exploit the advantages of regional goods collection and distribution and form strong inward flows and outward radiation. However, they pay more attention to long-distance transportation than to city distribution, and the logistics enterprises in the parks are likewise more concerned with long-distance transport, so city distribution service capacity remains weak.
3.1. Integrated Loose Platform Mode-Transfar Logistics
1) Introduction of Transfar mode [4] Transfar Logistics is a leading domestic highway logistics park platform operator, committed to building China's highway logistics network operating system on the basis of organized, informatized, standardized and agglomerated logistics resources. Through its physical parks and information platform it provides logistics enterprises with comprehensive logistics and ancillary services (Figure 1). 2) Transfar profit model [5] ① warehouse rental, ② office rental, ③ parking fees, ④ member management fees, ⑤ online integrated resource platform services, ⑥ information technology services. In addition to the above, Transfar also provides logistics consulting, project planning, logistics finance, exhibition, training and other derivative services.
4.1. Park Platform Models Become Mature and Various Models Coexist at the Same Time
The Internet triggered a new round of change in China's logistics in 2014, and logistics platforms such as Transfar Logistics, KXTX Logistics and ANE Logistics were incubated. The model of a physical logistics park base plus an intelligent information management network has been increasingly accepted by the industry and is gradually maturing. Although these highway freight logistics parks represent different operation modes, namely the comprehensive loose platform mode, the affiliate (joining) platform management mode and the unified operation platform management mode, they all perform the platform's integration function well despite their different operating characteristics. They play a valuable role in concentrating and optimizing resources for the small, scattered and weak parts of China's freight market, and in solving the problems of scattered transportation resources, lack of scale effects, market chaos and disorderly competition. At present, although the logistics resource integration platforms run in different operating modes, all of them comply with China's current requirements for cargo-owner-intensive enterprise services, integrated freight organization and government regulation, and they fit the present development environment and requirements of the Chinese logistics market. China has a large logistics market with a wide range of resources, and no single enterprise can monopolize it all; each enterprise therefore chooses a different model according to its own conditions. As a result, these patterns will foreseeably coexist at the same time.
4.2. Logistics Parks Will Pay More Attention to End Resource Integration and a New Pattern Will Be Created
With the development of China's transportation market and logistics parks, China's highway transportation has been fully developed and its resources integrated. China's arterial road transportation has good service capability, but viewed across the whole transport chain, receiving and distribution at the end of transport have not kept pace, so last-kilometer efficiency is low and its cost is high. As urban congestion becomes more serious and traffic pressure grows, a model that unifies integrated city distribution (and collection) with long-distance transport is, predictably, the development trend for highway freight logistics parks (Figure 4). Its specific features include: 1) the function of integrating logistics companies and providing space and a platform, the same as a general freight logistics park; 2) a unified goods-collection function that provides door-to-door collection for city enterprises and wholesale markets, with long-distance transportation completed by logistics enterprises chosen from inside or outside the park through the intelligent information system and settled through unified settlement; 3) a joint distribution function for long-distance goods arriving from inside or outside the park, which helps the freight enterprises of the park complete the last kilometer of urban distribution and for which a service fee is charged.
Figure 4. The new logistics park process and pattern. | 2019-04-18T07:47:15.876Z | 2017-05-12T00:00:00.000 | {
"year": 2017,
"sha1": "ac1b14ed105284c72f6710592d3e059daf38b8ea",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=76328",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ac1b14ed105284c72f6710592d3e059daf38b8ea",
"s2fieldsofstudy": [
"Business",
"Engineering",
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
248388164 | pes2o/s2orc | v3-fos-license | Use of Low-Cost Particle Counters for Cotton Dust Exposure Assessment in Textile Mills in Low- and Middle-Income Countries
Abstract Objective There is a lack of consensus on methods for cotton dust measurement in the textile industry, and techniques vary between countries—relying mostly on cumbersome, traditional approaches. We undertook comparisons of standard, gravimetric methods with low-cost optical particle counters for personal and area dust measurements in textile mills in Pakistan. Methods We included male textile workers from the weaving sections of seven cotton mills in Karachi. We used the Institute of Occupational Medicine (IOM) sampler with a Casella Apex 2 standard pump and the Purple Air (PA-II-SD) for measuring personal exposures to inhalable airborne particles (n = 31). We used the Dylos DC1700 particle counter, in addition to the two above, for area-level measurements (n = 29). Results There were no significant correlations between the IOM and PA for personal dust measurements using the original (r = −0.15, P = 0.4) or log-transformed data (r = −0.32, P = 0.07). Similarly, there were no significant correlations when comparing the IOM with either of the particle counters (PA and Dylos) for area dust measurements, using the original (r = −0.07, P = 0.7; r = 0.10, P = 0.6) or log-transformed data (r = −0.09, P = 0.6; r = 0.07, P = 0.7). Conclusion Our findings show a lack of correlation between the gravimetric method and the use of particle counters in both personal and area measurements of cotton dust, precluding their use for measuring occupational exposures to airborne dust in textile mills. There continues to be a need to develop low-cost instruments to help textile industries in low- and middle-income countries to perform cotton dust exposure assessment.
Introduction
Byssinosis is an occupational respiratory disease typically associated with exposure to cotton dust among textile workers. It develops progressively after prolonged exposure over several years (Schilling et al., 1963) and is largely preventable by dust control measures in the workplace (NIOSH, 1986). There is a lack of consensus on methods for cotton dust measurement in the textile industry, and techniques vary between countries. In the UK, for example, standards are based on the use of the Institute of Occupational Medicine (IOM) sampling head (HSE, 2002a), whereas those in the US call for the use of vertical elutriators (OSHA, 1981). The particle size to be measured and permissible levels also vary across countries. This is despite the fact that health-based sampling principles have been well established and are generally recognized globally (ACGIH, 1985; ISO, 2012); it seems that cotton dust sampling procedures have not been updated in some countries. In any case, the use of cumbersome instruments and lengthy procedures undermines the widespread use of exposure monitoring by environmental health and safety managers at textile mills, especially perhaps in resource-poor settings. MultiTex is a randomized controlled trial of a low-cost, multi-component intervention to improve dust control and worker health in cotton textile mills in Karachi, Pakistan (Nafees et al., 2019). As part of the study, we undertook comparisons of standard, gravimetric methods with low-cost optical particle counters for personal and area dust measurements in five mills.
Setting and population
Textile workers in Pakistan work in weekly shift patterns with 8-or 12-h shifts, depending on the type and size of mill. For each of these experiments, we included male textile workers from the weaving sections of five textile mills in Karachi.
Dust measurement
We used the IOM sampler with a 25-mm MCE glass fibre filter for the collection of inhalable airborne particles. The sampler was attached to a Casella Apex 2 standard pump operating at 2 l min −1 and was clipped to workers' collars. Such an arrangement allows the IOM sampler to trap particles up to 100 μm in aerodynamic diameter, within the breathing zone of workers; closely simulating the way particles are inhaled through the nose and mouth. The filters were pre-and post-weighed as a single unit; all particles collected were included in the analysis. For weighing, we used a fine weighing scale in a temperature and humidity-controlled environment; changes in weights were recorded in micrograms. We used one field blank for each batch of 10 filters.
The Purple Air (PA-II-SD) device is a wearable, air quality sensor that measures real-time PM 2.5 concentrations. It uses a fan to draw air past a laser, causing reflections from dust particles that may be counted in sizes between 0.3 and 10 μm diameter. Using 1-s particle counts, estimated total mass for PM 2.5 can be averaged using the device. Built-in Wi-Fi enables the sensor to upload readings to the cloud, and store in the PurpleAir map, from where data can be downloaded. PA has been used for measuring ambient air pollution in African countries (Awokola et al., 2020).
The Dylos DC1700 is a static particle counter using laser beams to detect passing particles by their reflectivity. The sensors count particles in two sizes of >0.5 and >2.5 µm; the particle counts can be converted to PM 2.5 mass in µg m −3 (Semple et al., 2015).
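As a rough illustration of how such count data are typically handled, the short Python sketch below converts the two Dylos channel counts (particles >0.5 µm and >2.5 µm, as described above) into an approximate PM2.5 mass value. It assumes the small channel is cumulative (so the 0.5-2.5 µm band is the difference of the two channels) and uses a placeholder mass-per-count factor; it is not the published Semple et al. (2015) calibration, and any real conversion would need a validated factor for the aerosol being measured.

    def dylos_counts_to_pm25(small_counts, large_counts, scale=0.01):
        # small_counts: particles > 0.5 um per sample interval (Dylos small channel)
        # large_counts: particles > 2.5 um per sample interval (Dylos large channel)
        # scale: illustrative mass-per-count factor (assumption, not a published value)
        fine_counts = max(small_counts - large_counts, 0)  # particles between 0.5 and 2.5 um
        return fine_counts * scale

    # Example: 12,000 small and 300 large counts give ~117 with this toy factor
    print(dylos_counts_to_pm25(12000, 300))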
Experimental procedures
For experiment I, personal dust measurements were undertaken on 32 machine operators from the weaving sections of five textile mills. IOM and PA samplers were attached in parallel, on the same worker. The personal dust measurements were performed using a standard approach for gravimetric sampling (HSE, 2002b). For experiment II, we included 30 area dust measurements in the weaving sections of five textile mills.
What's Important About This Paper?
There is a need to develop low-cost instruments to help textile industries in low-and middle-income countries undertake cotton dust exposure assessments. This study found that particle concentrations measured with two low-cost particle counters (Purple Air and Dylos) were not correlated with a standard method (IOM samplers with gravimetric analysis) in cotton mills in Karachi, Pakistan. These low-cost optical particle counters may not provide a satisfactory alternative to gravimetric methods of measuring occupational exposure to airborne dust in this setting.
The IOM and PA samplers and the Dylos monitors were placed adjacent to each other at a designated place near the centre of the section.
For both experiments, sampling was performed for 6 and 8 h for 8-and 12-h working shifts, respectively, during the daytime. Temperature and humidity were recorded at the workplace. Personal and area-level dust exposures were estimated by determining the 8-h time weighted average (TWA) for each worker.
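For readers less familiar with time-weighted averaging, the following small Python sketch shows one conventional way to compute an 8-h TWA from a single sampled period, assuming zero exposure for the unsampled remainder of the shift; the function and variable names are ours for illustration and are not part of any instrument software.

    def eight_hour_twa(concentration_ug_m3, sampled_hours, reference_hours=8.0):
        # 8-h TWA assuming zero exposure outside the sampled period
        return concentration_ug_m3 * sampled_hours / reference_hours

    # A 6-h sample averaging 1200 ug/m3 corresponds to a 900 ug/m3 8-h TWA
    print(eight_hour_twa(1200, 6))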
Statistical analysis
We discarded one sample each in experiments I and II due to inadequate duration of measurement; analyses were of 31 and 29 samples, respectively. We calculated the 8-h TWA values in µg m −3 for dust measurements carried out in each experiment, and report arithmetic means, and geometric means (GM) with standard deviations (GSD). We developed scatter plots and calculated Pearson coefficients for determining correlations between different instruments. We re-assessed the correlations after log transformation of data.
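A minimal sketch of the paired-instrument comparison described above, using SciPy's Pearson correlation on raw and log-transformed values, is given below; the arrays are made-up numbers purely to demonstrate the calls and are not the study data.

    import numpy as np
    from scipy.stats import pearsonr

    iom = np.array([850.0, 1200.0, 640.0, 1500.0, 910.0])  # hypothetical IOM 8-h TWAs (ug/m3)
    pa = np.array([60.0, 45.0, 80.0, 52.0, 71.0])           # hypothetical Purple Air TWAs (ug/m3)

    r, p = pearsonr(iom, pa)                                 # correlation on the original scale
    r_log, p_log = pearsonr(np.log(iom), np.log(pa))         # correlation after log transformation
    print(r, p, r_log, p_log)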
Similarly, using the IOM data in experiment II, we found higher area exposures in weaving rooms using airjet compared with shuttle-less looms [1150.1 µg m −3 (± 2.3) versus 576.7 µg m −3 (±2.3); P = 0.011]. We found a similar trend using the PA, but not the Dylos data (Supplementary Table 1).
Discussion
Our findings show a lack of correlation between the gravimetric method and the use of particle counters in both personal and area measurements of cotton dust. The latter seem unlikely to be helpful in measuring dust concentrations in textile mills and cannot substitute for the traditional, more expensive approach.
Our findings may be explained in several ways. Cotton textile dust likely comprises high numbers of large particles, in comparison to combustion-derived aerosols where the particulate matter produced is generally below 2.5 µm in diameter: previous work using optical particle counters to measure second-hand tobacco smoke or smoke from household cooking has shown good correlation between gravimetric methods and particle counters (Lim et al., 2018; Coffey et al., 2019). Several studies have shown that the Dylos may be used as a simple low-cost substitute for gravimetric analysis when measuring fine particles, such as second-hand cigarette smoke or ambient air pollution (Semple et al., 2015; Carvlin et al., 2017; Ferdous et al., 2020). Similarly, the PA has been used to measure fine-particle ambient air pollution in various settings (Mousavi et al., 2021). Moreover, we found the use of the Dylos counter particularly problematic since larger cotton particles ('fluff') tended to choke the device's internal fan, necessitating frequent cleaning during field sampling.
As far as we are aware, particle counting devices have only rarely been used to assess occupational exposures to dust comprising larger particles such as that common in textile mills. Recently, Khan et al. (2015) undertook a study involving 47 cotton factories in the Faisalabad region of Pakistan where they determined cotton dust exposures using particle counters (Grimm Portable Aerosol Spectrometer 1108, and the MiniDiSC), in addition to IOM samplers. Compared to our findings for area measurements (PA = 0.08 mg m −3 , Dylos = 0.09 mg m −3 ), they reported a higher PM 2.5 level, 0.57 mg m −3 . Moreover, compared with our finding for gravimetric analysis (IOM; 1.07 mg m −3 ), they report a higher level of 2.55 mg m −3 for the inhalable fraction. They report too that, on average, over 50% of the total dust measured was from coarse particles (>2.5 µm) but also found a high level of correlation (R 2 = 0.7-0.8) between fine and coarse particle concentrations, suggesting that instruments measuring PM 2.5 could be used to reliably provide indications of inhalable dust concentrations. Our findings with the low-cost PA and Dylos devices do not replicate their findings with the GRIMM and MiniDiSC devices, perhaps reflecting the different operation of these higher cost instruments.
A potential limitation of our work is the fact that optical particle counters are generally manufactured to provide an estimate for the fine particles in the respirable fraction (≤PM 2.5 ) and these may not be appropriate for comparison with the IOM samplers, designed to estimate the inhalable fraction (between PM 10 and PM 100 ). Recalibration of these devices by the manufacturers resulting in provision of another calibration curve, or a fixed factor across the whole concentration range could be a possible solution to this problem. Another limitation includes the fact that both the PA and Dylos make use of measurement principles that count particles in the air, such particulate counts may be biased due to physical properties of particles (like size and shape). Moreover, these samplers may need a regular calibration while being used-that was not done in our study.
Conclusion
We conclude that low-cost optical particle counters are not a satisfactory alternative to gravimetric methods for measuring occupational exposure to airborne dust in textile mills. There continues to be a need to develop low-cost instruments to help textile industries in low- and middle-income countries perform cotton dust measurement to aid in controlling workers' exposure.
Supplementary data
Supplementary data are available at Annals of Work Exposures and Health online.
Funding
Funding for this project was provided by Wellcome Trust, United Kingdom (ref. 206757/Z/17/Z). The authors declare no conflict of interest relating to the material presented in this Article. Its contents, including any opinions and/or conclusions expressed, are solely those of the authors.
Data availability
Data are available on reasonable request. | 2021-11-19T06:17:08.579Z | 2021-11-14T00:00:00.000 | {
"year": 2021,
"sha1": "68d9b09e5e451fc7f5f05c731601f9c261f92121",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/annweh/advance-article-pdf/doi/10.1093/annweh/wxab102/41147163/wxab102.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "febf457976e33526b3d6f999c5791db5726a306c",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125434281 | pes2o/s2orc | v3-fos-license | THE DIRICHLET-TO-NEUMANN MAP FOR SCHRÖDINGER OPERATORS WITH COMPLEX POTENTIALS
Let Ω ⊂ R d be a bounded open set with Lipschitz boundary and let q : Ω → C be a bounded complex potential. We study the Dirichlet-to-Neumann graph associated with the operator −∆ + q and we give an example in which it is not m-sectorial.
There are various extensions of the Dirichlet-to-Neumann operator.The first one is where the operator −∆ in (1) is replaced by a formally symmetric pure secondorder strongly elliptic differential operator in divergence form.Then one again obtains a self-adjoint version of the Dirichlet-to-Neumann operator, which enjoys a description with a form by making the obvious changes in (2).Similarly, if one replaces the operator −∆ in (1) by a pure second-order strongly elliptic differential operator in divergence form (which is possibly not symmetric), then the associated Dirichlet-to-Neumann operator is an m-sectorial operator.
There occurs a significant difference if one replaces the operator −∆ in (1) by a formally symmetric second-order strongly elliptic differential operator in divergence form, this time with lower-order terms.Then it might happen that D is no longer a self-adjoint operator, because it could be multivalued.Nevertheless, it turns out that D is a self-adjoint graph, which is lower bounded (see [6] Theorems 4.5 and 4.15, or [8] Theorem 5.7).
The aim of this note is to consider the case where the operator −∆ in ( 1) is replaced by −∆ + q, where q : Ω → C is a bounded measurable complex valued function; in a similar way a general second-order strongly elliptic operator in divergence form with lower-order terms could be considered.In Section 2 the form method from [3,4,5,6] will be adapted and applied to the present situation in an abstract form, and in Section 3 the Dirichlet-to-Neumann graph D associated with −∆ + q will be studied.Although one may expect that D is an m-sectorial graph it turns out in Example 3.7 that this is not the case in general.
2.
Forms.In this section we review and extend the form methods and the theory of self-adjoint graphs.
Let V and H be Hilbert spaces. Let a : V × V → C be a continuous sesquilinear form. Continuous means that there exists an M > 0 such that |a(u, v)| ≤ M ‖u‖_V ‖v‖_V for all u, v ∈ V. Let j ∈ L(V, H) be an operator. Define the graph D in H × H by D = {(ϕ, ψ) ∈ H × H : there exists a u ∈ V such that j(u) = ϕ and a(u, v) = (ψ, j(v))_H for all v ∈ V }.
We call D the graph associated with (a, j).
In general, if A is a graph in H, then the domain of A is dom A = {x ∈ H : (x, y) ∈ A for some y ∈ H} and the multivalued part is mul A = {y ∈ H : (0, y) ∈ A}.
We say that A is single valued, or an operator, if mul A = {0}.In that case one can identify A with a map from dom A into H.
Clearly mul D ≠ {0} if j(V) is not dense in H. If (ϕ, ψ) ∈ D, then there might be more than one u ∈ V such that j(u) = ϕ and a(u, v) = (ψ, j(v))_H for all v ∈ V. For that reason we introduce the space W_j(a) = {u ∈ V : j(u) = 0 and a(u, v) = 0 for all v ∈ V}. We say that the form a is j-elliptic if there exist µ, ω > 0 such that Re a(u) + ω ‖j(u)‖²_H ≥ µ ‖u‖²_V (3) for all u ∈ V. Graphs associated with j-elliptic forms behave well.
Theorem 2.1.Suppose that a is j-elliptic and j(V ) is dense in H. Then D is an m-sectorial operator.Also W j (a) = {0}.
If Ω ⊂ R d is a bounded open set with Lipschitz boundary, V = H 1 (Ω), H = L 2 (Γ), j = Tr and a is as in ( 2), then D is the Dirichlet-to-Neumann operator as in the introduction; cf.Section 3 for more details.
In general the form a is not j-elliptic.An example occurs if one replaces a in ( 2) by , where ∆ D is the Laplacian on Ω with Dirichlet boundary conditions.Then (3) fails for every µ, ω > 0 if u is a corresponding eigenfunction and j = Tr .In addition, the graph associated with (a, j) is not single valued any more.We emphasize that we are interested in the graph associated with (a, j).To get around the problem that the form a is not j-elliptic, it is convenient to introduce a different Hilbert space and a different map j.
Throughout the remainder of this paper we adopt the following hypothesis.
Hypothesis 2.2. Let V, H and H̃ be Hilbert spaces and let a : V × V → C be a continuous sesquilinear form. Let j ∈ L(V, H) and let D be the graph associated with (a, j). Furthermore, let j̃ ∈ L(V, H̃) be a compact map and assume that the form a is j̃-elliptic, that is, there are µ, ω > 0 such that Re a(u) + ω ‖j̃(u)‖²_H̃ ≥ µ ‖u‖²_V (4) for all u ∈ V.
As example, if Ω ⊂ R d is a bounded open set with Lipschitz boundary as before, then one can choose V = H 1 (Ω), H = L 2 (Γ), H = L 2 (Ω), j = Tr and j is the inclusion map from H 1 (Ω) into L 2 (Ω).For a one can choose a continuous sesquilinear form on H 1 (Ω) like in (2).We consider this example in more detail in Section 3.
In general, if A is a graph in H, then A is called symmetric if (x, y) H ∈ R for all (x, y) ∈ A. The graph A is called surjective if for all y ∈ H there exists an x ∈ H such that (x, y) ∈ A. The graph A is called self-adjoint if A is symmetric and for all s ∈ R \ {0} the graph A + i s I is surjective, where for all λ ∈ C we define the graph (A + λ I) by A symmetric graph A is called bounded below if there exists an ω > 0 such that (x, y) H + ω x 2 H ≥ 0 for all (x, y) ∈ A. Under the above main assumptions we can state the following theorem for symmetric forms.We next wish to study the case when a is not symmetric.Proposition 2.4.Adopt Hypothesis 2.2.Then the graph D is closed.
for all v ∈ V , where the orthogonal complement is in V .We first show that ( j(u n )) n∈N is bounded in H. Suppose not.Set τ n = j(u n ) H for all n ∈ N. Passing to a subsequence if necessary, we may assume that τ n > 0 for all n ∈ N and lim n→∞ for all n ∈ N. Let μ, ω > 0 be as in (4).Then for all n ∈ N. Since ( ψ n H ) n∈N is bounded and ψn H j τn < 1 for all large n ∈ N, it follows that (Re a(w n )) n∈N is bounded.Together with (4) it then follows that (w n ) n∈N is bounded in V .Passing to a subsequence if necessary there exists a w ∈ W j (a) ⊥ such that lim n→∞ w n = w weakly in V .Then j(w) = lim n→∞ j(w n ) in H since j is compact.So j(w) H = 1 and in particular w = 0.Alternatively, for all v ∈ V it follows from (6) that Moreover, j(w) = lim n→∞ 1 τn j(u n ) = lim n→∞ 1 τn ϕ n = 0, where the limits are in the weak topology on H.So w ∈ W j (a).Therefore w ∈ W j (a) ∩ W j (a) ⊥ = {0} and w = 0.This is a contradiction.So ( j(u n )) n∈N is bounded in H.
Let n ∈ N. Then with v = u n in (5) one deduces that where we used (4) in the last step.Hence (Re a(u n )) n∈N is bounded.Using again (4) one establishes that (u n ) n∈N is bounded in V .Passing to a subsequence if necessary, there exists a u ∈ V such that lim u n = u weakly in V .Then j(u) = lim j(u n ) = lim ϕ n = ϕ weakly in H. Finally let v ∈ V .Then (5) gives So (ϕ, ψ) ∈ D and D is closed.
Proposition 2.5.Adopt Hypothesis 2.2.Suppose j is compact.Then the map where u ∈ W j (a) ⊥ is the unique element such that j(u) = ϕ and a(u, v) = (ψ, j(v)) H for all v ∈ V .We first show that the graph of Z is closed.Let ((ϕ n , ψ n )) n∈N be a sequence in D, let (ϕ, ψ) ∈ H × H and u ∈ V .Suppose that lim ϕ n = ϕ, lim ψ n = ψ in H and lim u n = u in V , where Hence Z(ϕ, ψ) = u and Z has closed graph.
The closed graph theorem, together with Proposition 2.4 implies that Z is continuous.Since j is compact, the composition We say that A has compact resolvent if (A − λ I) −1 is a compact operator for all λ ∈ ρ(A).
For the sequel it is convenient to introduce the space V j (a) = {u ∈ V : a(u, v) = 0 for all v ∈ ker j}.
Theorem 2.7.Adopt Hypothesis 2.2.If V j (a) ∩ ker j = {0} and ran j is dense in H, then D is an m-sectorial operator.
Note that the operator A D in the next lemma is the Dirichlet Laplacian if a is as in (2) and j is the inclusion map from H 1 (Ω) into L 2 (Ω).
Lemma 2.8.Adopt Hypothesis 2.2.Suppose that j(ker j) is dense in H and j is injective.Then the graph A D associated with (a| ker j×ker j , j| ker j ) is an operator and one has the following.
(a)
ker If ker A D = {0} and ran j is dense in H, then mul D = {0}.
Proof.The graph A D in H × H associated with (a| ker j×ker j , j| ker j ) is given by Now suppose that k ∈ mul A D .Let u ∈ ker j be such that j(u) = 0 and a(u, v) = (k, j(v)) H for all v ∈ ker j.The assumption that j is injective yields u = 0 and hence 0 = a(u, v) = (k, j(v)) H for all v ∈ ker j.Since j(ker j) is dense in H it follows that k = 0. Therefore mul A D = {0} and A D is an operator.'(a)'.'⊃'.Let u ∈ V j (a) ∩ ker j.Then u ∈ ker j.Moreover, a(u, v) = 0 for all v ∈ ker j.So j(u) ∈ dom A D and A D j(u) = 0. Therefore j(u) ∈ ker A D .
The converse inclusion can be proved similarly.'(b)'.Since A D has compact resolvent, this statement follows from part (a) and the injectivity of j.
In Corollary 3.4 we give a class of forms such that the converse of Lemma 2.8(c) is valid.
We conclude this section with some facts on graphs.In general, let A be a graph in H.In the following definitions we use the conventions as in the book [22] of Kato.The numerical range of A is the set 2 ) such that (x, y) H ∈ Σ θ for all (x, y) ∈ A − γ I and A − (γ − 1)I is invertible.The graph A is called quasi-accretive if there exists a γ ∈ R such that Re(x, y) H ≥ 0 for all (x, y) ∈ A − γ I.The graph A is called quasi m-accretive if there exists a γ ∈ R such that Re(x, y) H ≥ 0 for all (x, y) ∈ A − γ I and A − (γ − 1)I is invertible.Clearly every m-sectorial graph is sectorial and quasi m-accretive.Moreover, every sectorial graph is quasi-accretive.Lemma 2.9.Let A be a graph.
(a)
If not dom A ⊥ mul A, then the numerical range of A is the full complex plane.
(b)
If A is a quasi-accretive graph, then dom A ⊥ mul A. 3. Complex potentials.In this section we consider the Dirichlet-to-Neumann map with respect to the operator −∆ + q, where q is a bounded complex valued potential on a Lipschitz domain.
Throughout this section fix a bounded open set Ω ⊂ R d with Lipschitz boundary Γ. Let q : Ω → C be a bounded measurable function. Choose V = H 1 (Ω), H = L 2 (Γ), j = Tr : H 1 (Ω) → L 2 (Γ), H̃ = L 2 (Ω) and j̃ the inclusion of V into H̃. Then j and j̃ are compact. Moreover, ran j is dense in H by the Stone-Weierstraß theorem. Define a : H 1 (Ω) × H 1 (Ω) → C by a(u, v) = (∇u, ∇v)_{L 2 (Ω)} + (q u, v)_{L 2 (Ω)}. Then a is a sesquilinear form and it is j̃-elliptic. Let D be the graph associated with (a, j). Note that all assumptions in Hypothesis 2.2 are satisfied. In order to describe D, we need the notion of a weak normal derivative.
Let u ∈ H 1 (Ω) and suppose that there exists an f ∈ L 2 (Ω) such that ∆u = f as distribution. Let ψ ∈ L 2 (Γ). Then we say that u has weak normal derivative ψ if (∇u, ∇v)_{L 2 (Ω)} + (f, v)_{L 2 (Ω)} = (ψ, Tr v)_{L 2 (Γ)} for all v ∈ H 1 (Ω). Since ran j is dense in H it follows that ψ is unique and we write ∂ ν u = ψ.
The alluded description of the graph D is as follows.
Proof.The easy proof is left to the reader.
Let A D = −∆ D + q, where ∆ D is the Laplacian on Ω with Dirichlet boundary conditions.Then A D is as in Lemma 2.8.Moreover, (A D ) * = −∆ D + q.Proposition 3.2.Let u ∈ ker A D .Then u has a weak normal derivative, that is, ∂ ν u ∈ L 2 (Γ) is defined.Similarly, if u ∈ ker(A D ) * , then u has a weak normal derivative.
The claim for (A D ) * follows by replacing q by q.
Note that the right hand side is indeed defined and it is a subspace of L 2 (Γ) by Proposition 3.2.
1. Introduction. The classical Dirichlet-to-Neumann operator D is a positive self-adjoint operator acting on functions defined on the boundary Γ = ∂Ω of a bounded open set Ω ⊂ R d with Lipschitz boundary. The operator D is defined as follows. Let ϕ, ψ ∈ L 2 (Γ). Then ϕ ∈ dom D and Dϕ = ψ if and only if there exists a u ∈ H 1 (Ω) such that Tr u = ϕ, −∆u = 0 weakly on Ω, and ∂ ν u = ψ. (1)
(c)If A is a quasi m-accretive graph, then mul A = (dom A) ⊥ .Proof.'(a)'.There are x ∈ dom A and y ∈ mul A such that (x, y ) H = 0. Without loss of generality we may assume that x H = 1.There exists a y ∈ H such that (x, y) ∈ A.Then (x, y + τ y ) ∈ A for all τ ∈ C.So (x, y + τ y ) H ∈ W (A) for all τ ∈ C.'(b)'.This follows from Statement (a).'(c)'.By Statement (b) it remains to show that (dom A) ⊥ ⊂ mul A. By assumption there exists a γ ∈ R such that Re(x, y) H ≥ 0 for all (x, y) ∈ A − γ I and A − (γ − 1)I is invertible.Without loss of generality we may assume that γ = 0. Let y ∈ (dom A) ⊥ .Define x = (A + I) −1 y.Then x ∈ dom A and (x, y − x) ∈ A. So − x 2 H = Re(x, y − x) H ≥ 0 and x = 0. Then (0, y) ∈ A and y ∈ mul A as required. | 2019-04-22T13:05:48.905Z | 2017-04-01T00:00:00.000 | {
"year": 2017,
"sha1": "84ff94683a65076ce9e31f92c2888ec5bb3c36c6",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=94eb151a-dcba-4290-821f-3dc33e7ee80b",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "84ff94683a65076ce9e31f92c2888ec5bb3c36c6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
228902343 | pes2o/s2orc | v3-fos-license | The Phylogenetic Analysis of Cimex hemipterus (Hemiptera: Cimicidae) Isolated from Different Regions of Iran Using Cytochrome Oxidase Subunit I Gene
Background: Bedbugs are blood-feeding ectoparasites of humans and several domesticated animals. There is a scarcity of information about bed bug populations throughout Iran, and only very limited, local studies are available. The aim of this study was to assess the phylogenetic relationships and nucleotide diversity among the populations of tropical bed bugs inhabiting Iran, using partial sequences of the cytochrome oxidase I (COI) gene. Methods: The bedbugs were collected from cities located in different geographical regions of Iran. After DNA extraction, PCR was performed for the COI gene using specific primers, and DNA sequencing was then performed on the PCR products of all 15 examined samples. Results: DNA sequence analysis showed that all C. hemipterus samples were similar, with only minor nucleotide variation (amplicons within the range of 576 to 697 bp) of, on average, between 5 and 10 single nucleotide polymorphisms (SNPs). The sequences were then compared with the GenBank database, which revealed close similarity and sequence homology with C. hemipterus from other parts of the world. Conclusion: This study has demonstrated the ability of the COI gene to differentiate between C. hemipterus populations from different locations in Iran. It is the first report of phylogenetic and genetic diversity analysis conducted on C. hemipterus in Iran. These results provide basic information for further studies in molecular epidemiology and for public health and pest control operators in Iran.
Introduction
"Bed bugs" is a term often applied to the approximately 90 species within the Cimicidae; of these, only Cimex lectularius, the common bed bug, and C. hemipterus, the tropical bed bug, show a strong host preference for humans (1). Cimex hemipterus, is considered as the common tropical bedbug whereas C. lectularius is common in temperate climates (2). It has been well documented that bed bugs harbor at least 40 human pathogens, but there is no tangible evidence regarding the ability of routine mechanical transmission any of them (1,(3)(4)(5)(6). All the daytime, bed bugs disappear and hide in inaccessible places such as: cracks and crevices in beds, wooden furniture, floors, and walls and reappear at night to feed from their preferred host, humans (7).
Apart from the discomfort caused by the bite, bedbugs have been known to cause secondary infections and psychological disorders (4). Chronic infestation can cause nervousness, lethargy, pallor, diarrhea, and even iron deficiency (8)(9)(10). Bedbugs infest any kind of temporary accommodations, especially hotels, serviced apartments, and cause severe problems. A resurgence of bed bugs has been reported from the United States, Canada, Australia, Europe, and some Asian countries during the past 15 years (11)(12)(13)(14)(15)(16). The reasons for the explosion have not yet been clarified; however, several factors such as increased rate of international travel, reduced use of residual insecticides indoors, and insecticide resistance may play a role (17).
Few studies have been undertaken on these bugs in Iran. Dehghani et al. examined an outbreak of these bugs in 1998 in villages west of Kashan. Out of 495 houses in 10 villages there was 6.7% contamination1 (18). In that year, the National Association of Managing World Pests announced the contamination in New York at 6.7%. Shahraki et al. in 2000, examined an outbreak of bugs in university dormitories among 180 boys and 145 girls in Yasuj, western Iran. They reported 28.9% contamination which approximated the current study after taking into consideration that the numbers of individuals were not close to our study (19). Haghi et al. reported that in Bahman Amir, Mazandaran, Iran most bugs were found in bedrooms (56.54%), living rooms (31.25%), and kitchens (8.59%) (17). Also, from the 182 examined containers in Polour, Mazandaran Province, 164 (approximately 90.1%) had evidence of contamination by bed bugs (20).
Genetic analysis of medically important insect species is absolutely required, because it provides useful information about vector transmission, disease epidemiology and disease control (21). There is scarcity of information about the bed bug population throughout Iran and only very limited and local studies are available (17,20). Taxonomic and morphologic identification of bed bugs require highly experienced person and appropriate samples. Nowadays, molecular techniques such as nucleotide sequence analysis and phylogenetic tree have been developed for taxonomic identification (22).
In recent years, the application of highresolution molecular markers has provided im-portant new insight into the population genetic structure and infestation dynamics of many insect pest species of public health concern (23-25). New molecular tools now make it feasible to not only accurately identify the number of populations actively infesting a building (24,26) but also to elucidate dynamics and characteristics essential for understanding infestation patterns and history, e.g., levels of genetic diversity (a measure often associated with population health (27), temporal stability of populations after pest control efforts (28), and the presence or absence of genetic mutations associated with insecticide resistance (29).
Population genetics studies on bed bugs have been completed using nuclear rRNA, mt DNA genes, and microsatellite loci as markers (25,(30)(31)(32)(33)(34). The aim of the first of these studies (Szalanski et al. 2008) shed light on the dispersal patterns of bed bugs during their recent global resurgence. All of the studies on bed bugs thus far have found genetic diversity within human associated population to be low, resulting from a great deal of inbreeding (25,(30)(31)(32). Despite the importance of bed bugs in Iran, genetic evaluation has not yet been studied. Therefore, we aimed at this shortcoming and analyzed DNA sequences of C. hemipterus gathered from different parts of Iran, using partial sequences of cytochrome oxidase I gene (COI) and evaluated the genetic relationship between them.
Sample collection
All procedures in this study were carried out in accordance with the guidelines of the Animal Ethics Committee of Faculty of Veterinary Medicine, Urmia University (AECVU) and supervised by authority of Sample collection Urmia University Research Council (UURC). Geographically, there are 4 different zones in Iran, known as region 1: Caspian Sea (temperature: 8-26 °C, annual rainfall: 400-1,500 mm), region 2: Mountainous area (temperature: -5-29 °C, annual rainfall: 200-500mm), region 3: Persian Gulf (temperature: 12.6-35 °C, annual rainfall: 200-300mm), and region 4: the Central Desert (temperature: -4-44 °C, rainfall: less than 100mm). The pattern of bed bug species distribution for the 4 different areas was determined according to the method of Skerman and Hillard (35). Considering 10% prevalence, 95% confidence level and 5% error rate, 138 bed bugs collected and were used in this study. That way, adult bed bugs were collected from various locations such as infested hotels, residential houses and industrial buildings during May 2016 to August 2017, with the help of pest control companies. The following cities were selected from each region: Region 1: Sari and Rasht, region 2: Tehran, Isfahan, Urmia, Tabriz, Shiraz, Saghez, Sanandaj, Kermanshah and Hamadan, region 3: Ahvaz and Bandar Abbas, and region 4: Semnan. These locations were mapped by collecting the locality data via Google Earth (Fig. 1).
Individual samples were collected by using forceps then stored in a sample collection bottle and preserved in 95% alcohol and stored at -20 °C until analysis onset. The insects were transferred to the laboratory of parasitology division, faculty of veterinary medicine, Urmia University and identified under a stereo microscope (Olympus SZ61, Olympus Corporation, Tokyo, Japan), using morphological keys described by Usinger (1) and Walpole (36). The pronotum which is the most distinguishing feature used to identify the two bedbugs species. The pronotum of C. lectularius is wider than that of C. hemipterus because of an upturned lateral flange on the margin of the pronotum on the thorax of C. lectularius which is absent in C. hemipterus (1). The dorsal and ventral sides of the Bedbug pronotum were observed to get a distinct image of the pronotum because the projecting edge of some were not very clear dorsally.
DNA extraction
Genomic DNA from an individual tropical bed bug from each collection site was extracted using a DNA isolation kit (MBST; Molecular and Biological System Transfer, Tehran, Iran) following the manufacturer's instructions. For this, samples were ground with a pestle and placed into a 1.5 ml microcentrifuge tube, and total DNA was eluted in 100 µl of elution buffer. DNA quality and concentration for each specimen were determined spectrophotometrically using a NanoDrop (Thermo Scientific 2000c, United States), and the DNA was stored at -20 °C for further procedures. We controlled for contamination by checking the 260/230 ratio, because a poor ratio may be the result of a contaminant absorbing at 230 nm or below. We also checked the wavelength of the trough in the spectra, which should be at 230 nm.
Amplification of the mitochondrial COI and sequencing
For PCR amplification of the COI (a 658 bp amplicon), the forward and reverse primers LEP-F (5′-ATT CAA CCA ATC ATA AAG ATA TNG G-3′) and LEP-R (5′-TAW ACT TCW GGR TGTCCR AAR AAT CA-3′) were used (32). The PCR was conducted in a 25 µl total volume, each reaction containing 2.5 µl 10x PCR buffer, 2 µl 50 mM MgCl2, 0.5 µl dNTPs, 3 µl DNA template, 10 picomoles each of the forward and reverse primers (0.5 µl each), 0.5 µl Taq DNA polymerase (SinaClon, Iran) and 15.5 µl ddH2O. The PCR cycling conditions were as follows: initial denaturation at 94 °C for 5 min, followed by 35 cycles of 94 °C for 30 sec (denaturation), 42 °C for 30 sec (annealing) and 72 °C for 45 sec (extension), and a final extension step of 72 °C for 2 min. PCR products were analyzed by electrophoresis in a 1.5% agarose gel to confirm that the samples contained a single band; gels were stained with a safe DNA gel stain and visualized with a UV transilluminator (BTS-20M, Japan). Finally, purified PCR products were sent to SinaClon (Tehran, Iran) for sequencing.
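Purely as a bookkeeping aid, the reaction recipe above can be encoded and checked programmatically; the snippet below simply verifies that the listed component volumes sum to the stated 25 µl total. The dictionary keys are our own labels, and the volumes are copied from the text.

    mastermix_ul = {
        "10x PCR buffer": 2.5,
        "50 mM MgCl2": 2.0,
        "dNTPs": 0.5,
        "DNA template": 3.0,
        "forward primer": 0.5,
        "reverse primer": 0.5,
        "Taq polymerase": 0.5,
        "ddH2O": 15.5,
    }
    total = sum(mastermix_ul.values())
    print(total)  # 25.0 ul, matching the stated reaction volume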
Nucleotide Diversity and Phylogenetic Tree Construction
The nucleotide sequences of each species from various regions were aligned for variation positions. Sequences were uploaded on NCBI to search for the most similar reference sequences, and positions of the COI were determined with the help of BLAST, available at NCBI. A total of eleven COI sequences belong to C. hemipterus available in the Gene Bank were used to phylogenetic analysis, including 3, 3, 2, 1, 1 and 1 sequence related to Malaysia, Bangladesh, Thailand, Czech Republic, USA and Iran, respectively. The Triatoma dimidiata (Hemiptera: Reduviidae), accession number JQ575031, was used as an out group. The alignment was manually edited to remove any alignment errors using the aligning tool Clustal W (37) and exported as MEGA and FASTA format files. All the obtained nucleotide sequences were deposited in the GenBank with the assigned accession numbers (Table 1). Subsequently, phylogenetic relationship was examined and constructed by Maximum-likelihood method (ML) using the Molecular Evolutionary Genetics Analysis (MEGA), version 6.0. The reliability of an inferred tree was tested by 1000 bootstrap. The DNA sequence polymorphism analyses for determining nucleotide diversity were estimated using BioEdit Version 7, 0, 1 and Blastn software (38).
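The analysis described above was done with MEGA, Clustal W and BioEdit; as a rough, hedged illustration of the same steps in script form, the sketch below uses Biopython to read an existing alignment and build a simple distance-based tree. It is not a reimplementation of the maximum-likelihood and bootstrap analysis used in the study, the neighbor-joining construction is only a stand-in, and the input file name is a placeholder.

    from Bio import AlignIO, Phylo
    from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

    # "coi_aligned.fasta" is a placeholder for an alignment exported from Clustal W
    alignment = AlignIO.read("coi_aligned.fasta", "fasta")

    calculator = DistanceCalculator("identity")      # pairwise identity-based distances
    distance_matrix = calculator.get_distance(alignment)

    constructor = DistanceTreeConstructor()
    tree = constructor.nj(distance_matrix)           # neighbor-joining tree, not ML
    Phylo.draw_ascii(tree)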
Results
The average size of COI fragment obtained from the amplified C. hemipterus was found to be 655bp which was at the expected PCR product size (approximate length 658 bp) for the all 15 examined samples, within the range of 576 to 697bp. The accepted COI sequences found in NCBI GenBank database showed that the percentage identity ranged from 97 to 100. Nucleotide sequences of COI obtained from this study were submitted to NCBI GenBank and then accession numbers of MG770888 to MG77089, MG739319 to MG 739326, MG737714 and MG696803 were assigned to them (Table 1).
After some processing, for example, deleting and aligning sequences using Mega 6 Molecular Software, 362bp of partial COI from 15 sequences of C. hemipterus were obtained successfully. Construction of phylogenetic tree is done based on mitochondrial COI sequencing by maximum likelihood (ML) method. The sequences obtained from the present study were compared with sequences of C. hemipterus and C. lectularius from different parts of the world. As the tree shows, the samples were classified in three major clusters which confirm the genetic variation among different species of Cimex. All our isolates and those of other parts of the world were placed in one clustered together (G1) showing no significant difference between various regions despite the minor nucleotide variations.
Consistently, they were far from C. lectularis and T. dimidiata clusters (G2 and G3, respectively). Our isolates were further clustered into three subgroups (Fig. 2). The first one (SG1) contained 9 isolates which are shown in the phylogenetic tree in Fig. 2 from USA (JQ782821). The second one (SG2) contained two isolates collected from Kermanshah (MG770888) and Esfahan (MG 739326). These sequences showed a significant nucleotide similarity with each other and were distinct from the both subgroups SG1 and SG3, and there is no reference sequence in GenBank that corresponds to this subgroup. Our third subgroup (SG3) contained four isolates collected from Hamedan (MG739319), Bandar Abbas (MG739324), Tehran north (MG739321) and Semnan (MG739325). Anal-ysis of these sequences showed a significant nucleotide similarity between Hamedan and Bandar Abbas and also between Tehran north and Semnan.
This phylogenetic tree is also supported by a mean pair-wise distance that is calculated at 0.005, suggesting that all of the C. hemipterus populations studied are clustered together, showing no significant variance between different regions despite minor nucleotide variations, on average between 5 and 10 Single nucleotide polymorphisms (SNPs).
Discussion
There is a paucity of data and analysis regarding phylogenetic relationships below infraorder Cimicomorpha (39,40). The proposed evolutionary relationships of the taxa within superfamily Cimicoidea and the family Cimicidae are based on morphological characters, chromosome numbers, and host associations (1,32). Previous studies have associated the Cimicidae with other families within the superfamily Cimicoidea using both morphological and molecular characters (39,41). A total evidence analysis using 16S, 18S, 28S and COI DNA sequence data and 73 morphological characters (39) has determined 13 infraorder Cimicomorpha and superfamily Cimicoidea are both monophyletic. The same study reported that the families Cimicidae, Polyctenidae and Curaliidae form a monophyletic clade (39).
In this work, we studied the phylogeny of C. hemipterus populations come from different regions of Iran, using the mitochondrial COI gene sequences. The COI gene is a part of the mitochondrial DNA genes and has been used as a potent marker for molecular phylogenetic studies, because it is species specific and appropriate for analyses of intra specific variations. The rate of evolvement in the COI gene is also relatively rapid which allow distinction at the species level and the identification of obscure species (42,43). Despite the vast outbreak of bedbug in various rural and urban regions of Iran, there are very limited and unsatisfactory reports about the prevalence (17,20). Probably due to resurgence and propagation of bed bugs in other countries, the population also has increased in Iran (44,45). Limited public awareness, increase in internal and international travels, increase in utilization of second-hand furniture and resistance to pesticides may contribute to this resurgence (46,47). Therefore, the resurgence and subsequent problems inspired us to perform phylogenetic analysis, as well as study genetic diversity and population dynamics.
The perceived near extirpation of bed bugs from many areas around the world suggests a genetic bottleneck would have occurred, which would be reflected in low genetic diversity across current bed bug populations (25,(30)(31)(32)(33)(34). However, all of the studies completed thus far have found a relatively high genetic diversity between populations in different locations. Such high diversity across populations is atypical for species that have undergone a recent and single founder event (30). Szalanski et al. (30) focused on genetic variation among various bed bug populations in USA, Canada and Australia. They examined a partial sequence of the mitochondrial (mt) 16S rRNA gene and nuclear rRNA ITS-1 region of 136 adult bed bugs sampled from 22 populations and found a relatively high genetic diversity in the 16S gene, and low diversity in the ITS-1 gene.
The current research is the first report of phylogenetic and genetic species diversity analysis conducted on C. hemipterus in Iran. Our analysis showed one main cluster in phylogenetic tree. Therefore, from the present study it can be concluded that C. hemipterus in Iran is an arthropod with low genetic diversity with a potentially high capability for raise the levels of inbreeding. The previous study, done by Booth et al. (25) regarding molecular markers of bed bug infestation dynamics within apartment buildings supported the same pattern of genetic diversity in C. lectularius as they reveal restricted genetic diversity. Similar researches also have conducted in other countries. Seri-Masran and Majid (45) studied genetic diversity of bed bugs in Malaysia. They considered 22 selected infested structures and consistent to our findings, they observed one main monophyletic clade. Another study was performed in Thailand and one main cluster of C. he-246 http://jad.tums.ac.ir Published Online: September 30, 2020 mipterus was obtained when it was compared with C. lectularis isolates. Both of the aforementioned studies support our results.
Given that the current phylogenetic and taxonomic relationships within the family Cimicidae are based on host relationships and morphology, it is possible that the inclusion of molecular data could cause a restructuring of the systematic relationships (1,32). In addition to COI genes, the DNA sequences of the entire mitochondrial genome of C. hemipterus may be useful for population genetics studies. There are several reports in the country using molecular methods for species identification of insects (48)(49)(50)(51)(52)(53)(54)(55)(56)(57)(58).
Conclusion
These results provided basic information for further studies of molecular epidemiology and control of C. hemipterus infestation to the public, medical association, entomologists and pest control operators in Iran. | 2020-11-12T09:04:30.794Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "85eab32f916ab8a4b3726ef507bbacdc2bfd78e4",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.18502/jad.v14i3.4557",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffa1e3f083f9baf83c4d6c8fdbe504777446b655",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
7525717 | pes2o/s2orc | v3-fos-license | Assembly of productive T cell receptor delta variable region genes exhibits allelic inclusion.
The generation of a productive "in-frame" T cell receptor beta (TCR beta), immunoglobulin (Ig) heavy (H) or Ig light (L) chain variable region gene can result in the cessation of rearrangement of the alternate allele, a process referred to as allelic exclusion. This process ensures that most alphabeta T cells express a single TCR beta chain and most B cells express single IgH and IgL chains. Assembly of TCR alpha and TCR gamma chain variable region genes exhibit allelic inclusion and alphabeta and gammadelta T cells can express two TCR alpha or TCR gamma chains, respectively. However, it was not known whether assembly of TCR delta variable regions genes is regulated in the context of allelic exclusion. To address this issue, we have analyzed TCR delta rearrangements in a panel of mouse splenic gammadelta T cell hybridomas. We find that, similar to TCR alpha and gamma variable region genes, assembly of TCR delta variable region genes exhibits properties of allelic inclusion. These findings are discussed in the context of gammadelta T cell development and regulation of rearrangement of TCR delta genes.
Lymphocyte antigen receptor variable region genes are assembled during development from component variable (V), diversity (D), and joining (J) gene segments in the case of the TCR β and δ chain genes and the Ig heavy (H) chain gene, or from V and J gene segments in the case of TCR α and γ chain genes and Ig light (L) chain genes (1,2). Productive rearrangement of TCR β or IgH chain variable region genes results in cessation of further V to DJ rearrangements on the alternate allele, a process referred to as allelic exclusion (2,3). "Functional" rearrangement of Igκ or λ L chain genes (i.e., rearrangements which generate an IgL chain that can pair with a pre-existing IgH chain) also leads to cessation of further IgL chain rearrangements, resulting in both allelic and IgL chain isotype exclusion (4). In contrast, TCR α and TCR γ chain variable region gene assembly does not exhibit properties of allelic exclusion (3, 5-7). Consequently, αβ and γδ T cells can express two TCR α or γ chains, respectively (6,7).
Several models have been proposed to account for allelic exclusion. One model proposed that the probability of a productive rearrangement is low making it unlikely that an individual cell could have two productive rearrangements (8). However, it is now known that the probability of a productive rearrangement can be as high as 33% (9). Another model proposed that the probability of two complete V(D)J rearrangements in any one cell was low. However, a significant percentage of peripheral B and T cells have two IgH or TCR  V(D)J rearrangements, respectively, arguing against this model (3,10). It has been proposed that IgH chain allelic exclusion occurs due to a toxic effect of expressing two IgH chains (11). However, the recent demonstration that B cell development proceeds normally in mice that express two IgH transgenes essentially rules out this model (12). An early model, based on analyses of rearrangement patterns in cell lines, proposed that allelic exclusion is regulated and that expression of a productively rearranged IgH or IgL chain prevents further rearrangements at the IgH and IgL chain loci, respectively (4,13,14). This regulated model was supported by studies demonstrating that expression of IgH or IgL transgenes resulted in a block in endogenous IgH or IgL chain gene rearrangement, respectively (15)(16)(17)(18). Studies of TCR  transgenic mice have supported an analogous model by which the TCR  transgene feeds 1466 TCR-␦ Allelic Inclusion back to block endogenous TCR  rearrangements (19). In addition, it has recently been demonstrated that expression of IgH or TCR  chains as pre-B or pre-T cell receptors, respectively, is required for allelic exclusion (20)(21)(22)(23).
T cells can be divided into two distinct lineages based on expression of either ␣ or ␥␦ TCRs. The genes that encode the TCR  and TCR ␥ chains lie in distinct loci, whereas the genes that encode the TCR ␦ and TCR ␣ chains lie in a single locus (TCR ␣ / ␦ locus; Fig. 1; references 24, 25). In the adult thymus TCR  rearrangements are initiated at the CD4 Ϫ /CD8 Ϫ (double negative, DN) stage of thymocyte development and are ordered with D  to J  rearrangement occurring on both alleles before V  to DJ  rearrangement (3,26,27). Once a productive V(D)J  rearrangement is made and a TCR  chain expressed, cells proceed to the CD4 ϩ /CD8 ϩ (double positive, DP) stage of development and further V  to DJ  rearrangements cease (3,26,27). As a result, many ␣ T cells have DJ  rearrangements on a single allele (28). V ␣ to J ␣ rearrangements are initiated at the DP stage. However, unlike the TCR  locus, expression of a TCR ␣ chain does not result in cessation of V ␣ to J ␣ rearrangements (3,26,27). This process continues on both alleles, and V ␣ to J ␣ rearrangements can result in the deletion of previously assembled productive VJ ␣ rearrangements (29). It has been proposed that the downregulation of recombinase activating gene (RAG) gene expression may ultimately be responsible for termination of V ␣ to J ␣ rearrangement (30).
Several notable differences exist between the developmental regulation of assembly of αβ and γδ TCR variable region genes. Assembly of TCR γ and TCR δ variable region genes occurs at the DN stage of thymocyte development (31). It is not known whether rearrangement of these genes is concurrent or sequential. In addition, assembly of TCR γ genes does not appear to exhibit allelic exclusion (7). Similar to the TCR β locus, assembly of TCR δ variable region genes does not proceed to completion on all alleles. However, unlike the TCR β locus, TCR δ variable region gene assembly does not appear to be ordered, since incomplete DDδ, DJδ and VDδ rearrangements have been described (32,33). It is unresolved whether productive TCR δ rearrangements lead to termination of further TCR δ rearrangements (allelic exclusion) or whether TCR δ rearrangements are limited by factors independent of the formation of productive rearrangements. To address this issue, we have analyzed TCR δ rearrangements in a panel of T cell hybridomas derived from splenic γδ T cells. We find the percentage of cells with two in-frame V(D)Jδ rearrangements is similar to that predicted in the absence of allelic exclusion. These findings are discussed in the context of γδ T cell development.
Isolation of γδ T Cells and Production of γδ T Cell Hybridomas.
Whole spleen cell suspensions from C57BL6 × CBA mice were incubated in DMEM-15 containing 40 U recombinant human IL-2/ml (PharMingen, San Diego, CA) on plates that had been coated with 10 μg/ml rat anti-hamster Ig (PharMingen) followed by 10 μg/ml of an anti-TCR δ chain mAb (GL4; PharMingen). Cultures were maintained for 6 d and the resulting cells were >90% pure γδ T cells as determined by flow cytometry (data not shown). Hybridomas were produced by fusing these γδ T cells to the thymoma BW-1100.129.237 using a fusion protocol that has been described elsewhere (34,35).
PCR and Sequence Analysis. PCR reactions were carried out using 200 ng of genomic DNA isolated from hybridomas. Southern blotting of PCR products was carried out with internal oligos to Jδ1 (CGACAAACTCGTCTTTGG) or Jδ2 (CTCCTGGGACACCCGACAGA).
Theoretical Determination of In-Frame Rearrangement Percentages. All mature γδ T cells must have at least one productive VDJδ rearrangement. If the probability that a VDJδ rearrangement will be in-frame equals P, then the probability that a VDJδ rearrangement will be out of frame will be (1 − P). If there is an equal chance of a rearrangement in each of the three reading frames, then P = 1/3. In the absence of allelic exclusion, and in cells with two VDJδ rearrangements, the percentage of cells with two in-frame TCR δ rearrangements will be equal to the probability that the cell will have two in-frame rearrangements divided by the probability that the cell will have at least one in-frame rearrangement.
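For concreteness, this quantity can be evaluated numerically; the short Python sketch below (an illustration added here, not part of the original methods) uses P = 1/3 and gives 20% as the expected fraction of selected two-rearrangement cells carrying two in-frame joins.

    # Expected fraction of cells with two in-frame VDJ-delta joins, given two complete
    # rearrangements and survival of selection (at least one in-frame join), assuming
    # no allelic exclusion.
    P = 1.0 / 3.0                          # chance that a single rearrangement is in-frame
    p_both = P * P                         # both joins in-frame
    p_at_least_one = 1.0 - (1.0 - P) ** 2  # cell retains at least one in-frame join
    print(f"expected fraction: {p_both / p_at_least_one:.0%}")   # -> 20%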
Results
Generation of γδ T Cell Hybridomas. Splenic γδ T cell hybridomas were generated from C57BL6 × CBA F1 mice by stimulating unfractionated spleen cells with plate-bound anti-TCR δ antibody (GL3) as described in the Materials and Methods section. The resulting cell population was >90% pure γδ T cells as determined by flow cytometry (data not shown). These cells were fused to the BW-1100.129.237 (BW) thymoma, which is incapable of producing TCR δ, β, or α chains (35). T cell hybridomas generated by fusion of a γδ T cell to BW were identified by flow cytometric analysis of cell surface TCR δ expression (data not shown). Only those hybridomas that expressed TCR δ were chosen for further analysis.
To ensure that both TCR δ alleles were present in the resulting panel of γδ T cell hybridomas, genomic DNA isolated from these hybridomas was assayed by Southern blot analyses using TCR δ restriction fragment length polymorphisms that exist between C57BL6 and CBA mice (44). Genomic DNA isolated from γδ T cell hybridomas was digested with HindIII and subjected to Southern blot analysis using probe 6, which is directed against the TCR δ constant region gene (Cδ; Figs. 1 and 2). Probe 6 hybridizing bands are not found in BW, as Cδ has been deleted on both alleles due to Vα to Jα rearrangements (Fig. 2). Using probe 6, bands of distinct sizes are generated by the C57BL6 and CBA TCR δ alleles, and hybridomas that had lost either allele (for example, F1D.11) were excluded from further analysis (Fig. 2).
To determine whether the γδ T cell hybridomas chosen for analysis were clonal, genomic DNA was subjected to Southern blot analysis using probe 4 to detect rearrangements to Jδ1 (Fig. 3 a), probe 5 to detect rearrangements to Jδ2 (Fig. 3 b), or a probe that detects rearrangements to Jβ2 (data not shown). Hybridomas that were oligoclonal on the basis of having three or more TCR δ or TCR β rearrangements were excluded from further analysis. The resulting 27 γδ T cell hybridomas that satisfied the above criteria were characterized further.
TCR δ Allele Configurations in γδ T Cell Hybridomas.
To determine the extent of TCR δ rearrangement in the panel of γδ T cell hybridomas, hybridoma genomic DNA was digested with BglII and subjected to Southern blot analysis using probe 4 (Figs. 1 and 3 a, data not shown). None of the hybridomas exhibited germline size bands, demonstrating that most TCR δ alleles are rearranged in splenic γδ T cells (Fig. 3 a, data not shown). In addition, most hybridomas gave two bands with probe 4, showing that most TCR δ rearrangements in splenic γδ T cells use the Jδ1 gene segment. F1D.58 exhibited single nongermline bands with probes 4 and 5, demonstrating that it had undergone rearrangements to Jδ1 and Jδ2 (Fig. 3 b). All other hybridomas exhibited germline bands from the C57BL6 and CBA TCR δ alleles using probe 5, showing that there is minimal rearrangement to the Jδ2 gene segment in the splenic γδ T cell hybridomas analyzed here (Fig. 3 b, data not shown).
Figure 1. Schematic of the mouse TCR α/δ locus. Shown are the Vα/Vδ gene segments, the Dδ and Jδ gene segments, the TCR δ enhancer (Eδ), the TCR δ constant region gene (Cδ), and the Vδ5 gene segment. This is followed by the Jα gene segments, the TCR α constant region gene (Cα), and the TCR α enhancer (Eα). Also shown are probes 1 and 3 through 6 and the approximate position of the BglII (B2) sites. The schematic is not drawn to scale.
Figure 2. Analysis for the presence of the CBA and C57BL6 TCR δ alleles. Genomic DNA from the hybridoma fusion partner BW 1100.129.237 (BW), CBA kidney (CBA), C57BL6 kidney (B6), or γδ T cell hybridomas (F1D) was digested with HindIII and subjected to Southern blot analysis using probe 6 (Fig. 1). The 2-kb marker is shown.
Figure 3. Analysis of rearrangements to Jδ1 and Jδ2. Genomic DNA samples described in the legend to Fig. 1 were digested with either BglII (a) or HindIII (b) and subjected to Southern blot analysis using probes 1, 3, and 4 (a) or probe 5 (b). Shown are the 9-, 6-, and 4-kb molecular mass markers.
To assay for incomplete TCR δ rearrangements, BglII-digested hybridoma genomic DNA was probed with probes 1 and 3 (Figs. 1 and 3 a, data not shown). Probe 1 hybridizing bands of similar size to probe 4 hybridizing bands would be generated by alleles that have undergone Dδ1 to Dδ2 or Dδ1 to Jδ1 rearrangements. Hybridomas F1D.19, 45, 51, 55, 71, and 72 all exhibit a 4.5-kb BglII band with probes 1 and 4 (Fig. 3 a, Table 1, data not shown), whereas hybridoma F1D.58 yields a 5.5-kb BglII band with probes 1 and 4 (Fig. 3 a and Table 1). To determine which hybridomas had undergone a Dδ1 to Dδ2 rearrangement, BglII-digested DNA was assayed with a probe (probe 3) to the region between Dδ1 and Dδ2, which will be deleted upon Dδ1 to Dδ2 rearrangement (Fig. 1). F1D.58 has a 5.5-kb BglII band that hybridizes to probe 3, demonstrating that one of the alleles in this hybridoma has undergone a Dδ1 to Dδ2 rearrangement (Fig. 3 a, Table 1). The hybridomas that yielded a 4.5-kb BglII band with probes 1 and 4 do not have probe 3 hybridizing bands, demonstrating that they have undergone Dδ1 to Jδ1 rearrangements (Fig. 3 a).
The DV105S1 (Vδ5) gene segment rearranges by inversion and, therefore, a nongermline probe 1 hybridizing band should be generated by the reciprocal product of a DV105S1 to Dδ1 rearrangement. Furthermore, this band would likely be of a different size than the probe 4 hybridizing band generated by the same rearrangement. Hybridomas F1D.17, 23, 32, and 61 all have 9-kb BglII probe 1 hybridizing bands (Fig. 3 a, data not shown). None of these hybridomas has a 9-kb probe 4 hybridizing BglII band, and each was found to have a DV105S1 to Jδ1 rearrangement by PCR analysis (Table 2). Finally, Vδ to Dδ rearrangements by Vδ gene segments other than DV105S1 will result in loss of probe 1 hybridizing bands and generation of a non-germline probe 3 hybridizing band that should be similar in size to the band generated by probe 4 when probing BglII-digested DNA. In this regard, F1D.68 has a 3.5-kb BglII band that hybridizes to probes 3 and 4 and was found to have a Vδ to Dδ2 rearrangement by PCR analysis (Fig. 3 a, Table 1).
These Southern blot analyses revealed that, of the 27 γδ T cell hybridomas analyzed, all had complete V(D)Jδ rearrangements on one allele (Fig. 3, a and b, Tables 1 and 2). On the other allele, 19 hybridomas also had complete V(D)Jδ rearrangements, one had a Dδ1Dδ2 rearrangement, one had a VDδ2 rearrangement, and six had Dδ1Jδ1 rearrangements (Fig. 3 a, Table 1). In addition, the Jδ2 gene segment was used in only one rearrangement (Fig. 3 b, Table 1).
Analysis of V(D)Jδ Rearrangements in γδ T Cell Hybridomas. Using primers that should recognize the members of the 11 known mouse Vδ gene families in conjunction with primers just downstream of Jδ1 or Jδ2, PCR analysis was carried out on all hybridomas to determine Vδ gene segment usage (Tables 1 and 2; reference 43). By this analysis, none of the hybridomas analyzed gave more than two distinct PCR products (data not shown). PCR products from the 19 hybridomas that had two complete V(D)Jδ rearrangements were cloned and sequenced. None of the V(D)Jδ rearrangements isolated used known Vδ pseudogenes (43). Two distinct rearrangements were isolated from 16 of the 19 hybridomas determined to have two V(D)Jδ rearrangements. All hybridomas had at least one in-frame V(D)Jδ rearrangement except for F1D.64, in which only a single out-of-frame V(D)Jδ rearrangement was isolated (Table 2). The other allele of this hybridoma must have an in-frame V(D)Jδ rearrangement, undetected by this analysis, as it expresses a TCR δ chain (data not shown). Only a single in-frame rearrangement was isolated from F1D.36 and F1D.91. The inability to detect more than a single rearrangement in these hybridomas could be due to the use of novel Vδ gene segments that cannot be detected by the primer set used in this analysis. Alternatively, these hybridomas may have two rearrangements involving members of the same Vδ gene family that were not both detected upon nucleotide sequence analysis. Significantly, analyses of the 17 hybridomas with two defined V(D)Jδ joins revealed that 6 had two in-frame rearrangements, demonstrating that assembly of TCR δ variable region genes does not exhibit allelic exclusion (Table 2).
Table 2. γδ T Cell Hybridomas with Two Complete VDJδ Rearrangements. (Columns: Hybridoma, Rearrangement, In-frame, Rearrangement, In-frame.) Table footnotes: F1D.32 has a rearrangement utilizing a DV7S Vδ gene segment that differs from other known family members by at least five nucleotides (data not shown). This Vδ gene segment may represent a novel DV7S family member or reflect strain differences. The two DV104S1 rearrangements in F1D.73 are distinct as determined by differences in junctional diversity (data not shown). The in-frame TCR δ rearrangement in F1D.64 is presumed, as the cell expresses a TCR δ chain.
Discussion
To determine if assembly of TCR δ variable region genes is regulated in the context of allelic exclusion, we have analyzed a panel of 27 clonal hybridomas derived from mouse splenic γδ T cells. Of the 17 hybridomas with defined V(D)Jδ rearrangements on both alleles, 6 (35%) have two in-frame rearrangements. This demonstrates that TCR δ variable region gene assembly does not exhibit allelic exclusion. Although this percentage is higher than the 20% (see Materials and Methods for calculations) that would be expected in the absence of allelic exclusion, this difference is not statistically significant (P > 0.10). Two human γδ T cell clones with in-frame TCR δ rearrangements on both alleles have been described previously (33,45). However, given the number of cells analyzed in these studies, it was not possible to determine whether these clones represented rare events or a general lack of TCR δ allelic exclusion. As TCR γ rearrangements do not exhibit allelic exclusion, failure of TCR δ allelic exclusion further increases the possibility that a single γδ T cell will express two or more distinct γδ TCRs (7).
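The text does not state which statistical test underlies the quoted P value; as a plausible check only (an assumption, not the authors' procedure), a one-sided exact binomial comparison of 6 in-frame pairs out of 17 hybridomas against the 20% expected without allelic exclusion gives a p-value of roughly 0.11, consistent with P > 0.10. A minimal sketch assuming that choice of test:

    # Hypothetical re-check of 6/17 hybridomas with two in-frame joins against the 20%
    # expected in the absence of allelic exclusion (the choice of test is an assumption).
    from scipy.stats import binomtest
    result = binomtest(6, n=17, p=0.20, alternative="greater")
    print(f"one-sided exact binomial p-value: {result.pvalue:.3f}")   # roughly 0.11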
It is possible that one of the TCR δ rearrangements in each of the six cells with two in-frame rearrangements encodes a TCR δ chain that cannot be expressed on the surface of the cell and therefore would not signal a block of further TCR δ rearrangements. This may occur, for example, if the TCR δ chain were not able to pair with a TCR γ chain or a component of a γδ pre-TCR, if such a receptor exists. In this regard, it has recently been shown that 2-4% of peripheral B cells have two in-frame IgH rearrangements but that only one encodes an IgH chain that is capable of forming a pre-B cell receptor (22). Our data are more consistent with the notion that assembly of TCR δ variable region genes exhibits properties of allelic inclusion, as the percentage of γδ T cell hybridomas with two in-frame TCR δ rearrangements is in agreement with the percentage expected in the absence of allelic exclusion. Furthermore, this percentage is similar to that of αβ T cells with two in-frame rearrangements at the TCR α locus, which also exhibits allelic inclusion (3).
It has been proposed for the IgH locus (and by analogy for the TCR β locus) that the precise ordering of variable gene segment rearrangement during lymphocyte development may be important for effecting allelic exclusion (14). In both of these loci, D to J rearrangement occurs on both alleles before V to DJ rearrangement. Presumably V to DJ rearrangement proceeds initially on one allele, at which point the rearrangement is "tested." If it encodes a protein that can be expressed, signals are generated that prevent further V to DJ rearrangements on the other allele. In accordance with this model, the expected number of B and T cells have V(D)J/DJ configured rearrangements of their IgH and TCR β alleles, respectively (3,10).
Unlike the IgH and TCR β loci, assembly of TCR δ variable gene segments is not ordered during development, and we now show that the TCR δ locus is not regulated in the context of allelic exclusion. However, the finding that many γδ T cells have incomplete TCR δ rearrangements demonstrates that rearrangement is frequently terminated before completion. The events that lead to termination of TCR δ rearrangement are not known. Thymic γδ T cells do not express RAG-1 or RAG-2, and it is possible, as proposed for TCR α rearrangement, that downregulation of RAG expression leads to termination of TCR δ rearrangement (30,46). Termination of TCR δ rearrangement, by whatever mechanism, may be part of a developmental program that is independent of TCR δ expression. Alternatively, rearrangement may cease upon TCR δ expression, and failure of allelic exclusion may be due to the unordered, simultaneous rearrangement of TCR δ alleles.
We thank F. Livak and D. Schatz for providing us with probes and C.H. Bassing for critical review of the manuscript. This work is supported by the Howard Hughes Medical Institute and by National Institutes of Health grants | 2014-10-01T00:00:00.000Z | 1998-10-19T00:00:00.000 | {
"year": 1998,
"sha1": "90face5c756bfabc1c1686bd2bc8467aa31d690d",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/188/8/1465.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "90face5c756bfabc1c1686bd2bc8467aa31d690d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
23849178 | pes2o/s2orc | v3-fos-license | New Measurement Service for Determining Pressure Sensitivity of Type LS2aP Microphones by the Reciprocity Method
A new National Institute of Standards and Technology (NIST) measurement service has been developed for determining the pressure sensitivities of American National Standards Institute and International Electrotechnical Commission type LS2aP laboratory standard microphones over the frequency range 31.5 Hz to 20 000 Hz. At most frequencies common to the new service and the old service, the values of the expanded uncertainties of the new service are one-half the corresponding values of the old service, or better. The new service uses an improved version of the system employed by NIST in the Consultative Committee for Acoustics, Ultrasound, and Vibration (CCAUV) key comparison CCAUV.A-K3. Measurements are performed using a long and a short air-filled plane-wave coupler. For each frequency in the range 31.5 Hz to 2000 Hz, the reported sensitivity level is the average of data from both couplers. For each frequency above 2000 Hz, the reported sensitivity level is determined with data from the short coupler only. For proof test data in the frequency range 31.5 Hz to 2000 Hz, the average absolute differences between data from the long and the short couplers are much smaller than the expanded uncertainties.
This paper introduces a new National Institute of Standards and Technology (NIST) measurement service for determining the pressure sensitivities of type LS2aP laboratory standard microphones [4,5] over the frequency range 31.5 Hz to 20 000 Hz, and describes the equipment and procedures of this new service. Table 1 shows the expanded (coverage factor k = 2) uncertainties, expressed in decibels, for the old and the new NIST measurement services for pressure calibration of these microphones. The old service uses manually operated equipment to perform a comparison calibration of the customer microphone against each of two NIST-owned type LS1P laboratory standard microphones [4,5] that are periodically calibrated by the reciprocity method and used as references.
Customers use the NIST-calibrated microphones to measure the sound pressure produced by instruments such as very stable sound calibrators, which are used to provide traceability for thousands of other acoustical measuring instruments used in critical measurements for product design and conformance, worker and military personnel safety, and health care. These instruments include measuring microphone systems, sound level meters, personal sound exposure meters (dosimeters), sound power measurement systems, and audiometric equipment.
Uncertainties for sound calibrator testing laboratories are established by U.S. national [6] and international [7] standards. The maximum permitted expanded (k = 2) uncertainty of measurement of the sound pressure level produced by the most accurate (Class LS) sound calibrators in laboratories at reference environmental conditions is 0.10 dB for the frequencies in Table 1 from 200 Hz to 1000 Hz. At these frequencies, the uncertainty of the new service is 0.06 dB smaller than the maximum permitted uncertainty.
The system used for the new service is an improved version of the automated test bed system used by NIST in the Consultative Committee for Acoustics, Ultrasound, and Vibration (CCAUV) key comparison CCAUV.A-K3 [8], and is now ready for the new NIST measurement service. The NIST results in CCAUV.A-K3 agreed well with the key comparison reference values. Subsequent improvements to this test bed system are:
1. control of the ambient static (barometric) pressure used for calibrations so that calibrations can be performed at the reference pressure specified in documentary standards [2][3][4][5][6][7] and key comparisons,
2. automated measurement and recording of the static pressure and acoustic coupler temperature at each frequency of measurement,
3. the use of the voltmeter in its most accurate alternating voltage measurement mode,
4. the use of new programmable highpass and lowpass filters to provide bandpass filtering with greater dynamic range and lower distortion than the previously used filter set,
5. an improved technique for measuring the front cavity depths of individual Type LS2aP microphones [9], and
6. adjustment of the power line frequency as needed to reduce interference effects at calibration frequencies harmonically related to 60 Hz [10].
Measurement System
Reciprocity calibrations involve measurements on acoustically coupled pairs of microphones (see Fig. 1). Each pair includes a transmitter, which is electrically driven to produce sound, and a receiver, which produces an output voltage in response to the sound pressure at its diaphragm. These microphones are installed at opposed ports of an air-filled test cavity bounded by the microphone diaphragms and front-cavity walls, and the walls of a cylindrical plane-wave coupler. Two such air-filled couplers are used to perform reciprocity calibrations at NIST. Both couplers have an inner diameter equivalent to the diameter of an LS2aP microphone diaphragm. The first coupler, which has a length of 9.4 mm and will be referred to here as the long coupler, is a Bruel & Kjaer Type 1414. This coupler forms a nominal cavity volume of 700 mm³ with the microphones installed. The second coupler, which has a length of 4.7 mm and will be referred to here as the short coupler, is a Bruel & Kjaer Type 1430. It forms a nominal cavity volume of 400 mm³ with the microphones installed. Each coupler has a small tube passing through the coupler wall. This tube is partially plugged with a tapered pin and provides a vent for barometric pressure equalization during the measurements. Each microphone has a barometric pressure equalization vent from the back cavity of the microphone to its exterior outside the coupler. These coupler and microphone vents allow the barometric pressure on both sides of the diaphragm to be in equilibrium.
Laboratory standard microphones have a circular diaphragm that is recessed from a front outer annulus that is parallel to the diaphragm. The distance between the microphone diaphragm and the plane at the annulus is known as the front cavity depth. This depth is measured using a depth-measuring microscope according to gage block measurement subsampling method D [9]. The volume between the diaphragm and the plane at the annulus is known as the front cavity volume, which is about 34 mm³ for a Type LS2aP microphone. Because each microphone is installed at a coupler port, the front cavity volume is included in the total geometric cavity volume. An additional volume term that is used to calculate the microphone sensitivities is the equivalent diaphragm volume, which is related to the acoustic impedance of the microphone diaphragm [4,5].
A block diagram of the microphone pressure reciprocity calibration system is shown in Fig. 1. At each frequency of measurement, a Hewlett Packard (HP) 8904A Multifunction Synthesizer supplies a sinusoidal test signal to a Bruel and Kjaer Type 5998 Reciprocity Calibration Apparatus (RCA). The RCA amplifies the test signal by 6 dB and directs it to the transmitter through the transmitter unit, which contains a capacitor in series with the transmitter. Switches internal to the RCA direct either the receiver voltage signal path or the capacitor voltage signal path through an internal highpass filter to the RCA output. The RCA output signal is passed through a bandpass filter to the voltmeter. The RCA amplifies the receiver and capacitor voltages by 40 dB, and also provides the 200 V polarizing voltage, which is required for the linear operation of the air-dielectric capacitive transmitter and receiver microphones.
The bandpass filter comprises cascaded Frequency Devices Model 90PF H8B and Model 90PF L8B programmable eight-pole highpass and lowpass Butterworth filters. For all measurements, the highpass filter corner frequency is set 8 % lower than the measurement frequency, and the lowpass filter corner frequency is set 8 % higher than the measurement frequency.
The output of the bandpass filter is connected to an HP 3458A Multimeter configured as a voltmeter in its mode that offers the highest accuracy for the measurement of periodic signals and requires a synchronous trigger. A trigger circuit that produces transistor-transistor logic (TTL) pulses synchronized to the test signal is used to provide the voltmeter trigger. This arrangement is used to measure the amplified and filtered receiver and capacitor voltages, and the difference between the gains of the receiver and capacitor voltage signal paths. The synthesizer, RCA, bandpass filter, and voltmeter are controlled over an IEEE-488 bus by a personal computer running scripts with a Visual Basic script host.
The transmitter microphone is attached to a Bruel and Kjaer Transmitter Unit ZE 0796, and the receiver microphone is attached to a Bruel and Kjaer Preamplifier Type 2673. Both the transmitter unit and preamplifier incorporate Bruel and Kjaer Modification WH 3405, which conforms to the grounded shield configuration specified as the reference configuration in the relevant standards [2,3,4,5]. The transmitter unit, receiver preamplifier, microphones, adapters, and coupler are installed in a Bruel and Kjaer Type UA 1412 Microphone Fixture with integral measurement chamber. This equipment isolates the coupled microphones from acoustical noise during the measurements.
An O-ring on the measurement chamber cover is used to seal the chamber shut, which allows for control of the measurement system barometric pressure. A small port on the bottom of the chamber is connected via tubing to an accumulator comprising two large insulated tanks, to minimize the effects of laboratory temperature fluctuations on the pressure. The second tank is connected to an air pump that is used to set the system to the desired pressure before electrical measurements are started with a given pair of microphones. The tanks and the measurement chamber are placed on a Technical Manufacturing Corporation Model 68-561 Vibration Isolation Table. Barometric pressure is measured with a Paroscientific 745-16B Barometer connected to a second small port on the bottom of the measurement chamber. A Hart Scientific 1529-R CHUBE4 Thermometer Readout is used with a Hart Scientific 5611A-CST Thermistor Probe to monitor the coupler temperature. Both the barometer and thermometer readings are acquired over the IEEE-488 bus. Because the barometer has only an RS232 port, a National Instruments GPIB-232CV-A, which is a GPIB-to-RS232 protocol converter, is used with the barometer on the bus. Relative humidity is measured with an RH Systems 473 Chilled Mirror Hygrometer System.
Data Acquisition
The reciprocity calibration procedure implemented at NIST is a primary method used to determine the sensitivities of three microphones in a triad comprising two NIST-owned microphones (microphones 1 and 2) and the customer microphone (microphone 3). Three pairwise combinations of transmitter and receiver are created from the triad.
Before starting the measurements, the microphone DC polarization voltage is adjusted as necessary to obtain a voltmeter reading between 199.999 V and 200.001 V, inclusive. Before data are acquired for a given pair, the pressure is adjusted to obtain a reading between 101.300 kPa and 101.350 kPa, inclusive, and the relative humidity is measured.
For each microphone pair, voltage data are acquired in order to determine the ratio of the receiver voltage to the capacitor voltage. Measurements are completed for all pairs in the long coupler from 31.5 Hz to 2000 Hz, and also in the short coupler from 31.5 Hz to 20 000 Hz. The barometric pressure and temperature are measured at each test frequency.
Data Reduction
The moduli of the pressure sensitivities of the three microphones | M p,1 | , | M p,2 | , and | M p,3 | , when expressed in V / Pa, are given by the following equations based on standards [2,3] and manufacturer's technical documentation [11].
where the subscript x identifies the microphone number of the transmitter, and the subscript y identifies the microphone number of the receiver, R xy is the ratio of the receiver voltage to the capacitor voltage, V 0,xy is the sum of the geometrical cavity volume and the low frequency equivalent diaphragm volumes of the microphones, P s,xy is the barometric pressure, κ xy is the ratio of specific heats of the air in the cavity, C is the capacitance of the capacitor in series with the transmitter, and Cor HW,xy is a frequency-dependent parameter that accounts for the heat conduction and wave motion in the cavity as well as the frequency dependence of the equivalent diaphragm volumes.
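A hedged numerical sketch of this step follows. It assumes the usual low-frequency plane-wave-coupler reciprocity form, in which each pairwise product of pressure sensitivities is proportional to R_xy V_0,xy Cor_HW,xy / (C kappa_xy P_s,xy); the exact form of Eqs. (1)-(3) is not reproduced here, and the function names and example values below are invented rather than taken from the standards.

    import math

    def pair_product(R, V0, P_s, kappa, C, cor_hw):
        """Assumed form of |M_x * M_y| for one transmitter/receiver pair, in (V/Pa)^2."""
        return R * V0 * cor_hw / (C * kappa * P_s)

    def triad_sensitivities(m12, m13, m23):
        """Solve the three pairwise products of the triad for |M_1|, |M_2|, |M_3| in V/Pa."""
        m1 = math.sqrt(m12 * m13 / m23)
        m2 = math.sqrt(m12 * m23 / m13)
        m3 = math.sqrt(m13 * m23 / m12)
        return m1, m2, m3

    # Purely illustrative pairwise products, in (V/Pa)^2:
    print(triad_sensitivities(1.6e-4, 1.5e-4, 1.4e-4))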
The parameter Cor HW,xy includes a heat conduction correction that compensates for departures from purely adiabatic conditions due to heat exchange between the surfaces of the cavity and the air in the cavity. This correction, which increases with decreasing frequency, is a function of the cavity length to diameter ratio, volume to surface area ratio, and both the thermal diffusivity and the ratio of specific heats of the air. Wave motion effects are accounted for in Cor HW,xy by including a term that is dependent on the length of the cavity and the speed of sound in the cavity. This term is based on a homogeneous transmission line model used to evaluate the acoustic transfer impedance of the cavity. These effects are more pronounced at higher frequencies, where the wavelength of sound is smaller compared to the dimensions of the coupler.
To determine the front cavity volume of each microphone, an iterative fitting procedure is used. The value of the front cavity volume of each microphone is optimized in a two-step procedure. The initial value for the first step is calculated from the measured front cavity depth and a nominal surface area. This value is varied in increments of 0.5 mm³ to find the minimum average absolute difference between the sensitivities determined for that microphone with the short coupler as compared to the long coupler at the eight frequencies measured in the range from 200 Hz to 2000 Hz. In this range the equivalent diaphragm volume is relatively constant with frequency, and the corrections for wave motion and heat conduction are relatively small for both couplers. Therefore, the match between the results for the two couplers is expected to be very good in this range. After the first step has been completed for all microphones, the results are used as initial values and the procedure is repeated to perform fits in increments of 0.05 mm³. The front cavity volume values obtained from these fits are used to calculate the final microphone sensitivities for both couplers.
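A schematic Python sketch of this two-step search is given below; the sensitivity function, frequency list, and search span are placeholders rather than the actual NIST implementation.

    def fit_front_cavity_volume(initial_v, sensitivity_fn, freqs_hz, span_mm3=5.0):
        """Two-step grid search (0.5 mm^3 then 0.05 mm^3 increments) for the front cavity
        volume that minimizes the average absolute short-vs-long coupler sensitivity
        difference; sensitivity_fn(volume_mm3, coupler, freq_hz) stands in for the
        full sensitivity calculation."""
        def cost(v):
            diffs = [abs(sensitivity_fn(v, "short", f) - sensitivity_fn(v, "long", f))
                     for f in freqs_hz]
            return sum(diffs) / len(diffs)

        best = initial_v
        for step in (0.5, 0.05):
            n_steps = int(round(span_mm3 / step))
            candidates = [best + i * step for i in range(-n_steps, n_steps + 1)]
            best = min(candidates, key=cost)
        return best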
For frequencies in the range from 31.5 Hz to 2000 Hz, the reported sensitivity is the average obtained from the results for both couplers. Above 2000 Hz, the reported sensitivity is the one obtained from results acquired with the short coupler only.
Uncertainties and Proof Tests
Uncertainties in the measured sensitivity levels of microphones calibrated by this new service were developed by applying guidelines for evaluating and expressing uncertainties [12]. Standard uncertainties and expanded (k = 2) uncertainties are shown in Table 2 for each frequency of measurement using the long coupler, and in Table 3 for each frequency of measurement using the short coupler. Relative uncertainties are expressed in percent. The standard uncertainties are rounded to the nearest 0.01 %. These uncertainties were combined to obtain the expanded (k = 2) uncertainties U, which were rounded up to the nearest 0.01 % and the nearest 0.01 dB. The Type B standard uncertainties correspond to the terms shown in Eq. (1-3) and are given in the tables. The standard uncertainties u P due to the terms denoted P s,xy are derived from the barometer manufacturer's specifications. The standard uncertainties u C due to the C -1 term are derived from the transmitter unit manufacturer's specifications. The standard uncertainties u R due to the terms denoted R xy are derived from the voltmeter manufacturer's specifications and measurements of the electrical cross-talk, signal-tonoise ratios, and polarizing voltages in the calibration system. The standard uncertainties u V due to the terms denoted V 0,xy are based on measurements of coupler dimensions and measuring instrument manufacturer's specifications. The standard uncertainties u κ and u Cor due to the terms denoted κ xy and Cor HW,xy respectively, are based on general knowledge of the limitations of the models used to determine these terms.
The Type A standard uncertainties u A were determined from the results of proof tests conducted with seven microphones. For each microphone, six tests were conducted in both couplers. Each test was performed over 11 frequencies from 31.5 Hz to 2000 Hz for the long coupler, and 30 frequencies from 31.5 Hz to 20 000 Hz for the short coupler. At each of the 11 frequencies common to both couplers (31.5 Hz to 2000 Hz), there were 84 proof tests, 42 from each coupler.
For each frequency, coupler and microphone, the variance of the six measured sensitivities was calculated. For each frequency and coupler, the Type A standard uncertainty was calculated by pooling the variances for all seven microphones.
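A minimal sketch of how such a budget can be assembled is shown below, with made-up component values; it assumes equal numbers of replicate tests per microphone, so that pooling reduces to averaging the variances.

    import math

    def pooled_type_a(variances):
        """Pool per-microphone variances (equal replicate counts assumed) into one
        Type A standard uncertainty."""
        return math.sqrt(sum(variances) / len(variances))

    def expanded_uncertainty(u_a, type_b_components, k=2.0):
        """Combine Type A and Type B standard uncertainties in quadrature and expand."""
        return k * math.sqrt(u_a ** 2 + sum(u ** 2 for u in type_b_components))

    # Illustrative (made-up) relative standard uncertainties, in percent:
    u_a = pooled_type_a([0.0009, 0.0012, 0.0010, 0.0008, 0.0011, 0.0010, 0.0009])
    print(f"U (k=2): {expanded_uncertainty(u_a, [0.01, 0.02, 0.03, 0.01, 0.02, 0.04]):.2f} %")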
For each coupler, values of U are typically smallest for the frequencies near the middle of the frequency range, and are largest at the extremes. This variation with frequency is principally attributable to the frequency dependence of u Cor , the standard uncertainty in the parameter that accounts for heat conduction and wave motion effects in the cavity. Table 4 shows the agreement between the results obtained for the long and short couplers at the frequencies where the data from both are averaged. At each frequency the absolute difference in the sensitivity levels determined using the long and short couplers was computed. The averages of the 42 absolute differences from all proof tests were determined, and are reported in Table 4 along with the expanded (k = 2) uncertainties for the new measurement service. These uncertainties are the results of rounding the average of the expanded uncertainties for both couplers upward to the nearest 0.01 dB. At all frequencies, the average absolute difference is much smaller than the expanded uncertainty. Therefore, the values of the microphone front cavity volumes are adequately determined by the iterative fitting procedure, and the significant influence factors that are different for the two couplers have been sufficiently taken into account.
Summary
This paper introduces a new NIST measurement service for determining the pressure sensitivities of ANSI and IEC type LS2aP laboratory standard microphones over the frequency range 31.5 Hz to 20 000 Hz, and describes the equipment and procedures used for this service. This service uses an improved version of the automated test bed system that was used for the NIST participation in the worldwide key comparison CCAUV.A-K3 [8].
Measurements are performed in a long and a short air-filled plane-wave coupler. An iterative fitting procedure varies the values of the front cavity volumes used to calculate the sensitivity of the microphones in order to achieve a match between the sensitivities determined using the long and the short couplers over the critical frequency range 200 Hz to 2000 Hz.
For each frequency in the range from 31.5 Hz to 2000 Hz, the reported sensitivity level for the new NIST measurement service is the average obtained from the results at that frequency for both couplers. For each frequency above 2000 Hz, the reported sensitivity level is the one determined from data acquired with the short coupler only. A series of proof test calibrations was performed to determine the Type A standard uncertainties. These uncertainties and the Type B standard uncertainties were used to obtain the expanded (k = 2) uncertainties.
As is shown in Table 1, the values of the expanded uncertainties of the new service are one-half the corresponding values of the old service, or better, at most frequencies common to the old and new services. This would particularly benefit customers who use the NIST service to meet the 0.10 dB maximum permitted uncertainty [6,7] of measurement of the sound pressure level produced by Class LS sound calibrators from 200 Hz to 1000 Hz. At these frequencies, the uncertainty of the new service is now 0.06 dB smaller than the maximum permitted uncertainty.
A series of proof test calibrations was used to determine the average absolute differences between sensitivities measured with the long and the short couplers. At each frequency in the range 31.5 Hz to 2000 Hz, the average absolute difference is much smaller than the expanded uncertainty. Therefore, the values of the microphone front cavity volumes are adequately determined by the iterative fitting procedure, and the significant influence factors that are different for the two couplers have been sufficiently taken into account. | 2016-05-10T04:30:54.597Z | 2011-09-01T00:00:00.000 | {
"year": 2011,
"sha1": "b5ae6b86ec28f6cc693ba8b3a6be167e761b82b2",
"oa_license": "CC0",
"oa_url": "https://doi.org/10.6028/jres.116.019",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "b5ae6b86ec28f6cc693ba8b3a6be167e761b82b2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
80708251 | pes2o/s2orc | v3-fos-license | Mystery in Diagnosis of Long Subhepatic Acute Appendicitis
Introduction
The average length of the human appendix is 9 centimetres (cm), with a range of 2 to 20 cm. The most common positions are descending intraperitoneal (31%-74%) and retrocecal (26%-65%) [1]. Given the variability in length and position, the clinical presentation of acute appendicitis may vary significantly. The first description of appendicitis was in 1886 [2]; since then, morbidity and mortality have markedly decreased.
Since Willard Packard performed the first surgery in 1867 [3], appendectomy has remained one of the most common surgical emergencies. The subhepatic appendix was described in 1955 by Allen King [4] and may occur due to malrotation of the gut. In a study of 7,210 patients, subhepatic appendicitis was found in 0.08% of cases [5]. Therefore, subhepatic appendicitis may present an acute diagnostic dilemma.
Case Presentation
A 57-year-old obese male presented to the hospital with complaints of sudden-onset severe right renal angle and right upper quadrant pain for 2 days, associated with low-grade pyrexia, anorexia, two episodes of vomiting, and burning micturition. He had a background history of Chronic Obstructive Pulmonary Disease (COPD), hypertension, pyelonephritis, and spleen-conserving surgery 20 years ago.
Vital signs were stable, and on examination there was marked tenderness in the right renal angle, right upper quadrant, and epigastrium. Palpation of the right iliac fossa elicited mild discomfort. Rovsing, rebound, obturator, and psoas signs were absent. Blood investigations revealed a white blood cell (WBC) count of 9 g/dl and a C-reactive protein (CRP) of 102. The remaining tests and examination were unremarkable. He underwent computed tomography (CT) of the abdomen and pelvis, which revealed a retrocecal, subhepatic acute appendicitis with no other acute pathology.
The patient was taken to theatre for a laparoscopic appendectomy after informed consent. Four ports were required; the extra port was made in the upper abdomen to access the tip of the appendix.
Intra-operatively, the base of the appendix was visualised. Careful, blunt dissection was required to find the tip of the appendix, which was inflamed, oedematous, and covered by the liver. It was adherent below the right lobe of the liver and stuck to the hepatic flexure. Meticulous dissection of the tip of the appendix was performed (Figure 1) and the appendix was removed. Before being placed in formalin for histology, it measured 18 cm in length; on the histology report it measured 16 cm and showed acute suppurative appendicitis. The post-operative course was unremarkable, and the patient was discharged after 48 hours of intravenous antibiotics (Figure 1).
Discussion
Acute appendicitis is the most common general surgical emergency, but given the variation in the size and location of the appendix it may cause a diagnostic dilemma. In the rare instance that the appendix is subhepatic, the diagnosis may be delayed, as differential diagnoses related to the gallbladder, liver, or kidney may be raised. This is where imaging plays a crucial role. The overall lifetime occurrence is approximately 12% in men and 25% in women [6][7][8]. Imaging has reduced the number of negative appendicectomies and helped in the diagnosis of difficult cases.
Ultrasound or CT is used for complex cases and is helpful for diagnosis and clarification. An article from 2010 shows a decrease in negative appendectomies from 23% to 1.7% [9]. The algorithm illustrated in Figure 2 [10] is a useful roadmap. The choice of surgical approach may vary according to the experience of the surgeon, the facilities, and the available equipment.
Conclusion
In conclusion, a high index of suspicion is required for atypical appendicitis; imaging is not only useful but crucial in these cases (Figure 2).
"year": 2018,
"sha1": "c5ca8daf68ccb50f6cbf3c603414306acb94b95f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7438/1584-9341-14-2-7",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b28a5034a646e0ed5b28074e8bac5b107b322100",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211751501 | pes2o/s2orc | v3-fos-license | Cancer in General Responders Participating in World Trade Center Health Programs, 2003-2013.
Abstract Background Following the September 11, 2001, attacks on the World Trade Center (WTC), thousands of workers were exposed to an array of toxins known to cause adverse health effects, including cancer. This study evaluates cancer incidence in the WTC Health Program General Responder Cohort occurring within 12 years post exposure. Methods The study population consisted of 28 729 members of the General Responder Cohort enrolled from cohort inception, July 2002 to December 31, 2013. Standardized incidence ratios (SIRs) were calculated with cancer case inclusion and follow-up starting post September 11, 2001 (unrestricted) and, alternatively, to account for selection bias, with case inclusion and follow-up starting 6 months after enrollment in the WTC Health Program (restricted). Case ascertainment was based on linkage with six state cancer registries. Under the restricted criterion, hazard ratios were estimated using multivariable Cox proportional hazards models for all cancer sites combined and for prostate cancer. Results Restricted analyses identified 1072 cancers in 999 responders, with elevations in cancer incidence for all cancer sites combined (SIR = 1.09, 95% confidence interval [CI] = 1.02 to 1.16), prostate cancer (SIR = 1.25, 95% CI = 1.11 to 1.40), thyroid cancer (SIR = 2.19, 95% CI = 1.71 to 2.75), and leukemia (SIR = 1.41, 95% CI = 1.01 to 1.92). Cancer incidence was not associated with any WTC exposure index (composite or individual) for all cancer sites combined or for prostate cancer. Conclusion Our analyses show statistically significant elevations in cancer incidence for all cancer sites combined and for prostate and thyroid cancers and leukemia. Multivariable analyses show no association with magnitude or type of exposure.
Following the attacks on the World Trade Center (WTC) towers on September 11, 2001, more than 50 000 workers (1) were involved in rescue and recovery, with many of them caught directly in the dust cloud from the collapsing towers. The potential exposure of these workers extended until cleanup of the site ended in June 2002. The complex, sustained exposure and the unknown long-term health effects it may cause are matters of national concern and the subject of continued monitoring and research. Because of the presence of carcinogens (asbestos, polychlorinated biphenyls, benzene, dioxins) (2), several studies have examined cancer incidence among different WTCexposed responder cohorts compared with the general population. A 10-year post-September 11 study by the WTC Health Registry of recovery workers and of people exposed in the vicinity of the WTC found a statistically significant greater incidence in all reportable, cancer registry types combined, with a standardized incidence ratio (SIR) of 1.11 (3). A study of 7-year post-September 11 cancer incidence among members of the WTC Health Program General Responder Cohort (eg, law enforcement, construction, telecommunication workers) found a 15% elevation in all-site cancer incidence (4,5). A study of New York City Fire Department (FDNY) firefighters found a slight, yet statistically significant, incidence elevation in all sites combined (6). However, a later study comparing the same cohort with other firefighters with similar occupational (but not WTC) exposures found no greater all-sites-combined incidence (7).
This article is an update of the earlier Solan et al. study (4) of the General Responder Cohort, extending follow-up time by an additional 5 years, thereby increasing sample size and, given the longer latency for some cancers, the ability to detect associations between WTC exposure and cancer risk. Solan et al. used an unrestricted criterion, where cancer counts and person-years of observation began post September 11, and a restricted criterion, where both counts and person-years began 6 months after member enrollment in the WTC Health Program. When using the restricted approach, the earlier study found elevations in thyroid cancer only.
Cancer is now classified as a WTC-related condition by the National Institute for Occupational Safety and Health, and diagnosed members of the WTC Health Program are eligible for federally funded treatment.
Methods
The WTC Health Program General Responder Cohort has been described in detail elsewhere (8). Briefly, the WTC Health Program is a federally funded medical monitoring and treatment program designed to assess responder health over time and to provide treatment for health conditions deemed WTC related. Human investigations are performed after approval by local institutional review boards and in accord with assurances filed with and approved by the US Department of Health and Human Services.
The study population included all members enrolled in the WTC Health Program General Responder Cohort between its inception (July 2002) and the end of follow-up (December 31, 2013, for New York residents and December 31, 2012, for residents of other states). To be included in these analyses, the following eligibility requirements pertained: self-reported time working on the WTC rescue and recovery effort was a 4-hour minimum in the first 4 days from September 11, 2001, 24 hours in September 2001, or 80 hours in September-December 2001; consent to have data aggregated for research; consent to have data shared with cancer registries; and completion of at least one monitoring visit (8).
Cancer cases were ascertained via linkage with the cancer registries of New York (NY) and surrounding states of New Jersey, Pennsylvania, and Connecticut, as well as Florida and North Carolina, where responders are known to retire. Cohort percentages living in these states at some point post September 11 were 87.7% (NY), 8.9% (New Jersey), 1.3% (Florida), 1.2% (Pennsylvania), 0.6% (Connecticut), and 0.3% (North Carolina). Linkages used probabilistic matching algorithms based on name, address, social security number (SSN), sex, race, and birth date. Registries use differing matching algorithms and degrees of manual review; false positive and false negative matches are possible, especially for those missing full SSN (full SSN was available for 31.8% of the study population and the last four SSN digits for an additional 15.1%). The NY registry performed an additional and extensive manual review of possible matches using Department of Motor Vehicle records and other sources.
The NY registry data are complete through December 31, 2013, and the others through December 31, 2012. Consequently, person-years based on responders' most recently reported state of residence were censored to December 31, 2013, for NY residents and to December 31, 2012, for residents of the five other states; residents of all other states were excluded from the analysis. Overlapping reporting of cancer among state registries was assessed via manual review; duplicates were removed to yield a dataset of unique registry-reported cancer cases for analysis.
Cancer cases were grouped into an "all cancer sites combined" category and into individual groupings per the National Cancer Institute's Surveillance, Epidemiology, and End Results site recode classifications (https://seer.cancer.gov/siterecode/icdo3_dwhoheme/index.html).
WTC exposure indices were obtained via a structured exposure assessment interview and consisted of reported exposure to the dust cloud (direct, significant, some, none) combined with arrival time (first day on the effort between September 11 and September 14 inclusive, first day on the effort after September 14); cumulative days working on the WTC effort; and working directly on the debris pile at any time.
A four-level (low, medium, high, very high) composite of these exposure measures was also used (9) and defined as follows: Low- and medium-exposure groups consisted of those who were not directly in the dust cloud, with the low-exposure group also requiring fewer than 40 days working on the WTC effort and not having worked at any time on the debris pile. The high and very high groups consisted of those who were directly in the dust cloud, with the very high group also requiring 90 or more days working on the WTC effort and working at some point on the debris pile.
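A minimal sketch of this mapping is given below; the handling of combinations the text does not spell out explicitly (the medium and high defaults) is an assumption.

    def composite_exposure(direct_dust_cloud, days_worked, worked_on_pile):
        """Map the three primary WTC exposure measures onto the four-level composite
        index, following the description in the text; boundary handling is assumed."""
        if not direct_dust_cloud:
            if days_worked < 40 and not worked_on_pile:
                return "low"
            return "medium"
        if days_worked >= 90 and worked_on_pile:
            return "very high"
        return "high"

    # Example: a responder caught in the dust cloud who worked 52 days, never on the pile.
    print(composite_exposure(direct_dust_cloud=True, days_worked=52, worked_on_pile=False))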
This study includes exposure assessment and demographic data from monitoring visits starting 2002 through December 31, 2013. The exposure assessment questionnaire was administered at members' initial monitoring visits only. Demographic information, smoking status, and member address were obtained at first visit and updated at subsequent monitoring visits; address information was also updated through outreach efforts by the WTC Health Program and by member communications with the program's phone bank. The average number of member visits was 4.3, and average time between visits was 1.9 years.
Statistical Analysis
To calculate SIRs, population rates were extracted using SEER*STAT software, and expected counts were derived through indirect standardization to the age, sex, race and/or ethnicity, diagnosis year, and residency-state-specific population rates. These calculations were performed for each year of observation and then summed so that members could be aged yearly. Residence state from a member's most recent monitoring visit was used for combining with external rates. Observed counts were extracted from our registry-confirmed cancer dataset.
Under the restricted criterion, person-years of observation and observed counts began 6 months after member enrollment through December 31, 2013, for NY residents and December 31, 2012, for everyone else. Based on a linkage with the National Death Index, deceased members were censored at date of death. Both external population rates and our observed counts can include multiple cancer primaries per person (ie, a member with two stomach cancer primaries and colon cancer would be counted twice in the calculation of the site-specific SIR for stomach cancer and once for colon cancer). The 95% confidence intervals (CIs) for the SIRs (not presented by sex because of small numbers) were calculated using standard methods (10).
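The cited standard methods are not detailed in the text; one common choice, used here with made-up counts purely for illustration, is an exact Poisson (chi-square based) interval on the observed number of cases.

    from scipy.stats import chi2

    def sir_with_ci(observed, expected, alpha=0.05):
        """SIR with an exact Poisson confidence interval (one common 'standard method';
        the method actually used in the paper is not specified here)."""
        sir = observed / expected
        lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed > 0 else 0.0
        upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
        return sir, lower, upper

    # Illustrative values only: 150 observed cases against 120 expected.
    print(sir_with_ci(150, 120))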
For comparison with previous research (4), unrestricted SIRs were calculated by not limiting to 6 months' post enrollment (ie, follow-up time and cancer cases started September 11, 2001). To test the adequacy of the restricted criterion's 6-month threshold to fully account for bias from selective enrollment due to prediagnosis symptoms of cancer, SIRs were calculated as a sensitivity analysis, with follow-up time and case inclusion beginning 12 and 24 months postenrollment.
To address potential confounders and explore the effects of WTC exposure and demographic variables on cancer risk, hazard ratios (HR) and 95% CIs were estimated using multivariable Cox proportional hazards models. Estimations were calculated separately for all cancer sites combined and for prostate cancer, with censoring at date of death, end-of-study, or cancer diagnosis date. No violations of the proportional hazards assumption were observed when tested using Schoenfeld residuals. We modeled multiple cancer events per member by using a shared frailty model, with subject treated as a gamma-distributed random effect to account for within-subject correlations among cancer event times (11). The time scale for the model was calendar time from September 11, 2001, but entry into the model was left-truncated at 6 months' postenrollment, per our restricted criterion. In addition to the four-level exposure metric, the model included race and/or ethnicity, WTC Health Program clinic, sex, age on September 11, 2001, smoking status, pre-September 11 occupation, presence of SSN for registry matching, and enrollment date. These known and suspected cancer risk factors are consistent with Solan et al. (4) and control for the heterogeneous nature of the types of responders in our cohort. A separate model, using the same covariates but substituting the three primary exposure indices for the four-level derived indices, was also examined. All models used the restricted criterion.
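As a rough illustration of this modeling step (not the authors' code), a left-truncated Cox fit on synthetic data might look like the sketch below; the gamma-frailty term is omitted because the lifelines package used here does not provide it directly, and all column names and values are invented.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # Synthetic stand-in data: the time axis runs in years from September 11, 2001,
    # with left truncation at enrollment + 6 months (the restricted criterion).
    rng = np.random.default_rng(0)
    n = 1000
    age = rng.normal(38, 9, n)
    smoker = rng.binomial(1, 0.2, n)
    entry = rng.uniform(0.5, 3.0, n)                        # enrollment + 6 months
    scale = 60 / np.exp(0.04 * (age - 38) + 0.3 * smoker)   # invented hazard structure
    duration = entry + rng.exponential(scale=scale)         # time of first cancer
    event = (duration <= 12.5).astype(int)                  # diagnosed before end of follow-up
    duration = np.minimum(duration, 12.5)                   # administrative censoring

    df = pd.DataFrame({"duration": duration, "cancer": event, "entry": entry,
                       "age_911": age, "current_smoker": smoker})

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="cancer", entry_col="entry")
    cph.print_summary()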
Demographics
Of the 29 455 responders with appropriate consents and whose information was provided to the cancer registries for linkage, 726 were excluded because their restricted criterion start date (enrollment date + 6 months) occurred after the end date of available registry data or after their date of death, leaving an analysis sample size of 28 729.
The 28 729 responders were predominantly male (85.5%), white non-Hispanic (47.4%) with a median age of 38 years on September 11, 2001 (Table 1). Construction and protective services (eg, law enforcement) were the most common pre-September 11 occupations (20.8% and 49.0%, respectively), and 44.4% had at least some level of exposure to the dust cloud caused by the collapse of the WTC towers. The median time spent working on the rescue and recovery effort was 52 days.
Unrestricted Analyses
The unrestricted criterion yielded statistically significant elevations in SIRs for all cancer sites combined and for melanoma of the skin, prostate, bladder, kidney, and thyroid cancers, hematologic neoplasms, leukemia, non-Hodgkin lymphoma, multiple myeloma, and chronic lymphocytic leukemia.
12- and 24-Month Restricted Sensitivity Analyses
Twelve- and 24-month sensitivity analyses were performed (Supplementary Table 1, available online).
Exposure Indices
None of the three separate exposure measures (dust exposure and arrival time, length of work time, or work on the pile) and none of the levels of the derived, four-level WTC exposure index displayed a statistically significant association with cancer risk, for either all cancer sites combined or prostate cancer (Table 3).
Demographic and Other Variables
The multivariable analysis of all cancer sites combined (Table 3) suggested an elevated risk for men, compared with women, but the association did not achieve statistical significance (HR = 1.21, 95% CI = 0.97 to 1.52). Age on September 11, 2001, showed a statistically significant elevation of cancer risk, with a 1.09-fold greater risk for each 1-year increase (HR = 1.09, 95% CI = 1.08 to 1.10), and being a current smoker likewise showed a statistically significant association, with a 1.29-fold greater cancer risk compared with being a never smoker (HR = 1.29, 95% CI = 1.07 to 1.57). Being a former smoker was not associated with having an elevated cancer risk compared with being a never smoker (HR = 1.0, 95% CI = 0.85 to 1.17).
In the multivariable analysis of prostate cancer (Table 3), age on September 11, 2001, was statistically significant and was associated with a 1.13-fold greater cancer risk for each 1-year increase (HR = 1.13, 95% CI = 1.12 to 1.14); current and former smokers were associated with lower cancer risk compared with never smokers (HR = 0.74, 95% CI = 0.50 to 1.10, and HR = 0.94, 95% CI = 0.73 to 1.21, respectively).
Supplemental Multivariable Results
To test the adequacy of a 6-month threshold to account for selection bias, 12-and 24-month multivariable sensitivity analyses were performed (Supplementary Table 2, available online). Age on September 11, 2001, and current smoking status showed statistically significant associations with increased cancer risk for all cancer sites combined.
Discussion
Under the restricted criterion, we found statistically significant elevations in SIRs for all cancer sites combined, prostate and thyroid cancers, and leukemia; SIRs for lung cancer and colorectal cancer were below 1.0. Multivariable survival analysis showed no exposure dose-response for all cancer sites combined or prostate cancer, although some risk factors such as age at September 11, 2001, sex, and current smoking were associated with increased cancer risk. These analyses are most comparable in methodology with the previous restricted-only analysis of Solan et al. (4), which had 5 fewer observation years and a smaller sample size, and in which an elevated SIR was reported for thyroid cancer only.
Solan et al. found elevations under the unrestricted criterion for all-sites cancer and prostate, thyroid, and hematologic malignancies, as well as an elevation in soft tissue cancer that is no longer evident in the current study. The current study found elevations under the unrestricted criterion in those same sites and many others (Supplementary Table 1, available online). Increased self-selection by a sicker subset of the overall responder population, due to both cancer becoming eligible for federally funded treatment and publicity from previous research on cancer risk among WTC-exposed populations, could explain these increases.
Although other studies have revealed elevated SIRs for other hematologic malignancies, this is the first reported statistically significant elevated SIR for leukemia (3,4,6,7). Leukemia is known to occur after exposure to occupational carcinogens, including benzene [burning jet fuel and other sources at the WTC site (12)], possibly at low levels of exposure (13,14) and with a latency of several years from exposure (15). Our study did not find an increase in multiple myeloma, as suggested by other studies (16,17), although all results are based on a small number of cases; thus, variation among studies is not surprising. Although we did not find an increase in multiple myeloma, continued surveillance is warranted, as a study of FDNY firefighters found a statistically significant association between WTC exposure and the myeloma precursor monoclonal gammopathy of undetermined significance (18). Lung cancer is commonly associated with occupational exposures, and WTC debris contained substances known to increase lung cancer risk (asbestos, particulate matter), yet we found the SIR of this cancer to be below 1.00, albeit of borderline statistical significance (SIR = 0.83, 95% CI = 0.66 to 1.04). Three considerations for this finding are as follows: Our cohort has a lower prevalence of smokers compared with the general population; many members are certified with WTC-related musculoskeletal conditions, commonly treated with nonsteroidal anti-inflammatory drugs for pain management that have been shown to decrease lung cancer risk (19); and latency might be a greater factor for lung cancer than for other cancers. These questions should be examined in future studies.
Routine screening for thyroid cancer is not offered through the WTC Health Program; however, General Responder Cohort members are routinely administered chest x-rays, and those with certain respiratory problems are administered chest computerized tomography scans, which can lead to early diagnosis of thyroid cancer. Consequently, medical surveillance could partially explain elevations in the thyroid cancer SIR.
An increase in prostate cancer among WTC-exposed firefighters has been reported, although surveillance bias may play a role, given that prostate-specific antigen screenings are routinely performed as part of the FDNY monitoring program (7) (the WTC Health Program does not screen for prostate cancer). Typical latency for prostate cancer has been estimated as long as 20 years post exposure (20); however, recent research suggests that respiratory exposure to WTC dust could induce inflammatory and immune responses in prostate tissue and that WTC-related prostate cancer displays a distinct gene expression pattern that could have resulted from exposure to specific carcinogens (21). These factors could be associated with a shorter latency period for WTC exposure.
Cancers commonly treated in outpatient settings may be underreported to registries, particularly melanoma and myeloid leukemia (22,23). Consequently, underreporting of cancer is a possible source of incomplete case ascertainment in our cohort. However, such undercounting would also affect the population rates used in SIR calculations and presumably would be nondifferential with respect to the exposure variables used in our multivariable analyses, leading to loss of power but not bias. Another potential source of undercounting would be missing SSN information in 68.2% of our cohort, because SSN is an important part of the probabilistic matching algorithms employed by the registries. Because of a renewed effort to collect the last four digits of SSN, this current study was able to provide the registries with partial SSNs on an additional 4331 responders, improving match accuracy and reducing potential undercounting. Based on a comparison of the registry linkage results on 5627 members initially without SSN with their updated results with the inclusion of the last four digits of SSN (4331 members) or the full SSN (1296 members), we estimate that registry matching without SSN undercounts true cancer numbers by 7.9% compared with full SSN and by 5.4% compared with the last four digits of SSN.
Although Solan et al. (4) reported a non-statistically significant suggestion of an exposure dose-response, our analysis found no exposure effect, regardless of whether component or derived measures were examined. A more recent FDNY study (7) compared cancer incidence in their cohort to firefighters from other cities. Although our analysis used internal WTC-exposure comparisons whereas the firefighter study used an external control group, the firefighter study likewise reported no WTC exposure effect.
Days and hours on-site, location, and dust cloud exposure have been assumed surrogates for duration and intensity of exposure to cancer-causing substances. However, it is possible that these categories do not completely capture the true level of exposures to carcinogens. Similarly, use of respirators and protective clothing may mitigate the effects of exposure. Unfortunately, however, this information was not consistently captured via the exposure assessment instrument. Recall bias may also play a role among those who enrolled up to a decade or more after exposure, although the percentage of late enrollees remains small (14.2% after 2009). These potential limitations may result in exposure misclassification, reducing our ability to identify a dose-response effect, if one exists.
In conclusion, our investigation identified elevated incidence rates for all cancer sites combined, as well as for prostate cancer, thyroid cancer, and leukemia, when compared with the general population. However, no dose-response association was observed between cancer risk and estimated level of exposure while working on the WTC rescue and recovery effort. Shorter-latency prostate cancer elevations could be attributable to the unique makeup of the WTC dust exposure. Future studies of other WTC-exposed cohorts may similarly find elevations in leukemia. Thyroid cancer continues to show the greatest elevations in SIR, possibly because of surveillance bias from increased monitoring and treatment, although thyroid cancer screening is not offered through the WTC Health Program. Because of the long latency period of many types of cancer, it is possible that increased rates of other cancers, as well as WTC exposure-cancer associations, may emerge after longer periods of follow-up. | 2019-11-07T14:18:12.907Z | 2019-11-06T00:00:00.000 | {
"year": 2020,
"sha1": "dd06981420bf481de94950cd46dcb7edad1e4fb3",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/jncics/article-pdf/4/1/pkz090/33108138/pkz090.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cbab0b667a2d23902dd79ef09b8bead2a282ad6b",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
196176973 | pes2o/s2orc | v3-fos-license | Urban Parcel Grouping Method Based on Urban form and Functional Connectivity Characterisation
The grouping of parcel data based on proximity is a pre-processing step of GIS and a key link of urban structure recognition for regional function discovery and urban planning. Currently, most literature abstracts parcels into points and clusters parcels based on their attribute similarity, which produces a large number of coarse-granularity functional regions or discrete distributions of parcels that are inconsistent with human cognition. In this paper, we propose a novel parcel grouping method to address this issue, which considers both the urban morphology and the urban functional connectivity. Infiltration behaviours of urban components provide a basis for exploring the correlation between morphology mechanism and functional connectivity of urban areas. We measured the infiltration behaviours among adjacent parcels and concluded that the occurrence of infiltration behaviours often appears in the form of groups, which indicated the practical significance of parcel grouping. Our method employed two parcel morphology indicators: the similarity of the line segments and the compactness of the distribution. The line-segment similarity was used to establish the adjacent relationship among parcels and the compactness was used to optimise the grouping result to obtain a satisfactory visual expression. In our study, constrained Delaunay triangulation, Hausdorff distance, and graph theory were employed to construct the proximity, delineate the parcel adjacency matrix, and implement the grouping of parcels. We applied this method for grouping urban parcel data of Beijing and verified the rationality of grouping results based on the quantified results of infiltration behaviours. Our method proved to take good account of infiltration behaviours and satisfied human cognition, compared with a k-means++ method. We also presented a case using Xicheng District in Beijing to demonstrate the practicability of the method. The result showed that our method obtained fine-grained groups while ensuring functional-region integrity.
Introduction
Urban parcels, with a certain scale and spatial size, are bounded by a network of urban roads and comprise the basic spatial units for fine-scale urban modelling and urban studies, as well as for spatial planning [1]. Robert Krier [2] described a parcel as the original cell of urban design structures, which determines the form of the surrounding road network and the structure of internal buildings. Land parcel data are one of the cornerstones of contemporary urban planning [3]. Researchers have performed empirical research based on parcel data [4,5]. Normative planning and policies are carried out against the expanding boundaries and spatial interactions of urban areas. The method of parcel grouping is not only a means of data organisation for urban planning but also a concrete practice for addressing the modifiable areal unit problem (MAUP).
In this paper, we proposed a method for grouping urban parcels by considering functional connectivity and urban morphology. To analyse the potential link between urban function and parcel form, the infiltration behaviours (including dominance, functional complementarity, and imitation) of the components were measured using POIs within parcels. According to the results, we determined that infiltration behaviours often appear in the form of groups, which revealed the rule of functional interaction in the neighbourhood and further indicated the practical significance of parcel grouping. Based on this finding, we designed a parcel grouping method that generally involves three main steps: First, according to Tobler's first law and Gestalt theory, we analysed the adjacent relationship among parcels and measured the proximity based on the line-segment similarity. Second, the grouping result was obtained by traversing the parcel adjacency matrix after setting a reasonable distance threshold. Last, we quantified the compactness of the grouping result and constructed the compactness curve, which was used to optimise the grouping result to obtain a reasonable visual expression. The proposed method was applied to a case study in Xicheng District in Beijing, and its feasibility was verified.
Related Work
A grouping of urban elements can reflect the urban form and distribution characteristics, which can be applied in numerous applications, such as urban planning, geovisualisation, transportation, and praxeology. Primarily, generalising maps for grouping has been adopted in existing research, which requires a substantial amount of effort for grouping buildings [43][44][45][46][47]. Regnauld [43] detected and organised building relations using the minimum spanning tree (MST). Li [48] presented an integrated methodology for the fully automated generalisation of buildings, including an automated grouping of buildings. Boffet and Serra [49] classified different types of settlement patterns and presented methods to characterise urban blocks and buildings. Rainsford and Mackness [45] focused on the simplification of the shape of individual buildings using a template matching technique for grouping buildings. Cetinkaya, Basaraner and Burghardt [50] presented a comparison of grouping algorithms for buildings in urban blocks and found that DBSCAN and spatial clustering based on Delaunay triangulation (ASCDT) were superior to CHAMELEON and MST.
The construction of proximity relationships among urban polygon elements is the primary task of grouping. From a cartographic viewpoint, individuals tend to visually perceive close objects in graphic representations as groups, because objects in proximity can be more closely associated with each other than objects further away. In some studies, Gestalt principles are the theoretical basis for forming groups and are usually introduced to identify urban morphology and spatial distribution patterns of urban elements. Li determined the direct alignments between neighbouring buildings based on Gestalt theory [48]. Yan and Weibel [51] considered the directional relations among buildings, and presented three rules and six parameters based on Gestalt theory to achieve a more comprehensive approach. Wang [52] applied multiple Gestalt rules and a graph-cut method to cluster similar buildings into the same group.
Common to all methods discussed above is that they tend to focus on urban buildings [43][44][45][46][47][48][49][50][51][52], and few studies address polygon grouping at the parcel (block) level. Urban parcels are carriers of buildings. Compared with buildings, the size of parcels is coarser, and the distribution of parcels is more compact. Due to the different size and irregular shape of parcels, unlike building grouping methods, it is difficult to obtain an accurate grouping result based on size, shape and orientation of parcels. Two main questions and issues for parcel grouping need to be addressed: (1) How to measure the proximity between parcels? A major problem with the traditional distance calculation methods (e.g., maximum distance, minimum distance, and centroid distance) is that they only measure the distance between two points or the centroid distance of parcels [43] and cannot adequately describe the similarity of the form between adjacent parcels and the compactness of discrete parcels. (2) How to measure the quality of the grouping? It will not be comprehensive enough for parcel groups if we evaluate the results merely by geometric indications [50], since the parcels have abundant semantic information. Some methods, from the viewpoint of area aggregation, have been proposed based on mesh density, geometric and semantics of blocks [53][54][55]. To ensure suitable forms of aggregation results, Haunert [56] presented an area aggregation method by mixed-integer programming, which optimised the problem. Luan [57] built a model for the aggregation of urban blocks based on Haunert's method to maintain the grids pattern after road selection. However, the results of the aggregation approach are not sufficiently flexible, and continued disaggregation is lacking, unlike the grouping method. For these reasons, our intention is to propose a new, comprehensive approach for parcel grouping, integrating urban morphology and urban functional connectivity. In the grouping process, the constrained Delaunay triangulation, Hausdorff distance, and graph theory were employed as supporting techniques to construct the proximity, delineate the parcel adjacency matrix, and implement the grouping of parcels. Two morphology indicators of parcels, i.e., line-segment similarity and compactness, were used to construct spatial proximity relationships and optimise the visual expression of the grouping results. The evaluation of grouping results involved Gestalt theory and quantised urban behaviours.
Study Area
The urban Xicheng District in Beijing, China was chosen as the study area (Figure 1). Xicheng District, which is located in the western part of the urban core area, is the political and cultural centre of Beijing, China. As the central district in Beijing, Xicheng District is home to the offices of State agencies and more than 80 governmental ministries and agencies. Xicheng District has a modern financial industry, a booming cultural innovation industry, a high-tech industry with great potential and a wealth of commercial activities to offer. Its historical and cultural legacies form the basis for a unique tourism experience that is available to millions of foreign and domestic visitors each year.
POI is a kind of dot data that represents real geographic entities, including spatial information, such as latitude and longitude, address, and attribute information, such as name and category. Our study shows region-wide planned land use for 516 parcels obtained from the Institute of Geographic Sciences and Natural Resources Research, CAS, while actual land use is measured by 23,123 geo-tagged POIs that are synthesised from a leading online business catalogue in China: the Baidu Map catalogues business establishments and housing options throughout the region (Xicheng District). The initial 25 POI types are re-classified into eleven general categories (refer to Table 1).
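In practice, such a re-classification is a simple lookup from raw POI types to general categories. The sketch below is illustrative only; the mapping entries are hypothetical stand-ins for the actual Table 1 scheme.

```python
# Illustrative re-classification of raw POI types into general categories.
# The mapping is a hypothetical stand-in for the Table 1 scheme.
import pandas as pd

RECLASS = {
    "chinese restaurant": "catering", "fast food": "catering",
    "primary school": "education", "university": "education",
    "residential quarter": "residence", "dormitory": "residence",
    # ... remaining raw types mapped onto the eleven general categories
}

poi = pd.read_csv("poi_xicheng.csv")               # hypothetical input file
poi["category"] = poi["raw_type"].str.lower().map(RECLASS)
print(poi["category"].value_counts())
```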
Experimental Urban Parcel Datasets and Environment
The two selected experimental parcel datasets illustrated in Figure 2 clearly indicate two different morphologies (i.e., parcel group size and parcel components) in Beijing. Table 2 presents details about the prepared data, including 'number of parcels', 'number of POI', and 'main components'. The two sets of data were mainly used for the analysis of urban infiltration behaviours, as well as for the testing and results analysis of grouping urban parcels. The experiments were conducted on an Intel Core I7-6700 CPU running at 3.4 GHz, with 8 GB of RAM and a 1024 GB solid-state disk. The operating system was Windows 7 (64-bit). The proposed algorithms were implemented in Python.
Our proposed method for processing an urban parcel grouping is composed of three parts (Figure 3). (1) An analysis of the infiltration behaviour of adjacent parcels (refer to Section 3.2). This study implemented three indicators to discover the infiltration behaviours of the components among parcels. (2) Construct the adjacent relationships among parcels and calculate the proximity between two adjacent parcels (refer to Section 3.3). (3) Urban parcel grouping method (refer to Section 3.4). The proposed urban parcel grouping method uses a graph algorithm to form parcel groups, and obtains the optimum grouping result by analysing the compactness among parcels.
Infiltration Behaviours of Components among Urban Parcels
Parcels are land areas with multiple mixed functions. With the enhancement of functional mixing, parcels promote the mixing of users, which promotes the development of urban diversification. Cooperative relationships exist between adjacent parcels; they utilise urban public space, that is, streets, as a transmission medium, to share social resources.
Compact distribution of parcels causes frequent interactions among the urban elements and population; this kind of interaction also causes infiltration of components without distinct boundaries [58]. The mutual infiltration forms the functional connectivity characteristics among parcels, which ensure the stability of shape and structure. This kind of infiltration behaviour consists of two main modes: (1) Functional complementarity among parcels, that is, the adjacent parcels form a multi-functional place, where a combination of functions (living, working, communicating, cultural and sporting activities) provide mutual support for various human activities. (2) Component imitation between adjacent parcels, that is, two adjacent parcels are similar in the combination of land use types. The imitation behaviour satisfies the description of Tobler's first law, i.e., everything is related to everything else, but near things are more related than distant things. These two modes do not exist in isolation. When functional complementary behaviour frequently occurs between two parcels, the imitation behaviour will eventually occur due to long-term infiltration of components.
The potential distribution patterns of the parcel components and urban behaviours within parcels are discussed based on an analysis of the infiltration behaviours of parcels. To effectively explore the infiltration behaviour of components among parcels, we use the parcel's dominant function [1], the mixed land-use index (MLU) [29,59], and the Jaccard similarity coefficient [60] to quantify three indicators that describe the infiltration behaviour.
Definition 1. Dominant function:
Urban function for individual parcels is identified by examining dominant POI types within the parcels. A dominant function within a parcel is defined as the POI type that has accounted for more than 50% of all POIs within the parcel.
Definition 2. Functional complementarity behaviour:
In different parcels, the types of functions and the degree of functional mixing usually differ, which produces a functional requirement between adjacent parcels. A fine-grained mixing of residential, commercial, and recreational land use may enable local residents to walk or bike to desired destinations, which increases the frequency of interaction among parcels. Over time, a functional complementarity behaviour is formed between adjacent parcels.
As a supplemental measurement for the dominant function, we computed a mixed index to denote the degree of MLU. The mixed index (M) of a parcel is the entropy index and can be expressed as the following Equation (1):

M_j = - ( Σ_i P_ij · ln(P_ij) ) / ln(N_j)    (1)

where (1) P_ij = Percent of land use i in parcel j.
(2) N_j = Number of represented land uses in all parcels. According to maximum entropy theory, when all data types in the dataset are evenly distributed, the information entropy of the data set will attain the maximum value, which is known as the principle of 'equal probability maximum entropy'. When the parcels are logically grouped, the types of land use increase in the parcel group and the MLU index for the group increases. If the MLU increases after the parcel grouping, we believe that functional complementarity behaviours arise among parcels.
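A minimal sketch of this calculation, assuming the normalised entropy form reconstructed above for Equation (1) (the category labels in the example are hypothetical):

```python
# Mixed land-use (MLU) index of a parcel or parcel group, assuming the
# normalised entropy form M = -sum(P_ij * ln(P_ij)) / ln(N).
import math
from collections import Counter

def mlu_index(poi_categories, n_categories=11):
    """poi_categories: land-use labels of the POIs in one parcel (or group);
    n_categories: number of land-use categories represented across all parcels."""
    counts = Counter(poi_categories)
    total = sum(counts.values())
    if total == 0 or n_categories <= 1:
        return 0.0
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy / math.log(n_categories)   # 1.0 = perfectly even mix

print(mlu_index(["residence", "catering", "catering", "education"]))
```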
Definition 3. Imitation behaviour:
This kind of behaviour describes the similarity of the components between adjacent parcels. When two parcels are adjacent and both parcels are composed of three parts (residential area, shopping centre, and catering service), then we conclude that there are imitation behaviours between the two parcels. Thus, we can construct a vector to describe the composition of a parcel. Each attribute (dimension) in the vector corresponds to a land use type. The value of each attribute is 1 or 0, where "1" indicates that a certain land use type exists in the current parcel, and "0" indicates that it does not exist (Figure 4). We introduced the Jaccard coefficient to verify the degree of imitation of adjacent parcels (the similarity between two vectors), which is described as follows: The Jaccard index is a statistic that is used to compare the similarity with the diversity of sample sets. The Jaccard coefficient measures the similarity between two finite sample sets and is defined as the size of the intersection divided by the size of the union of the sample sets. The coefficient can be expressed with Equation (2):

J(A, B) = |A ∩ B| / |A ∪ B|    (2)

where A and B are the composition vectors of two parcels. If both A and B are empty, we define J(A, B) = 1, and 0 ≤ J(A, B) ≤ 1.
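This measure reduces to a few lines of code; a minimal sketch with hypothetical land-use labels:

```python
# Jaccard similarity (Equation (2)) between the composition sets of two parcels.
def jaccard(parcel_a_types, parcel_b_types):
    a, b = set(parcel_a_types), set(parcel_b_types)
    if not a and not b:
        return 1.0                       # both empty: defined as 1
    return len(a & b) / len(a | b)

print(jaccard({"residence", "shopping", "catering"},
              {"residence", "shopping", "education"}))   # 2/4 = 0.5
```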
Identification of Adjacent Relationships among Parcels
Adjacent objects can be more spatially dependent or associated, which can be explained by Tobler's first law geographically and Gestalt principles cartographically, respectively. The grouping process is performed gradually to avoid producing extremely small and meaningless groups, as would result from the simultaneous use of all measures. The first step of grouping is to analyse proximity relationships [50]. Constrained Delaunay triangulation (CDT) is often used for the extraction of skeletons from map patches, as it possesses several highly desirable traits, such as adjacency and regional character [48,51,55].
The relationship between two parcels was expressed by the graph-based method. First, we constructed the CDT for the parcels using a point-by-point insertion algorithm. Second, we classified triangles according to the parcel to which the triangle's vertices belonged. The triangle, with vertices belonging to two different parcels, was named a connecting triangle. If two parcels were connected by connecting triangles, we named the two parcels conflicting parcels, which were evaluated based on the condition of 'whether the three vertices of a triangle belong to the same polygon boundary'. When two parcels consisted of conflicting parcels, we considered them to have an adjacent relationship. The CDT was constructed for all parcels by analysing whether an adjacent relationship existed between any two parcels. Considering that a long and thin triangle easily produces an incorrect assessment of the adjacency relationship between two parcels (refer to Figure 5), it was necessary to eliminate the long and thin triangles in connecting triangles.
This paper implements the redundant marking of long and thin triangles based on an area method [61], according to the ratio of the triangle's area to the sum of the squares of its sides, normalised so that an equilateral triangle scores 1. The calculation method is expressed as follows:

R_w = 4√3 · w / (l_1² + l_2² + l_3²)    (3)

where (l_1, l_2, l_3) are the lengths of the three sides, and w is the positive area of the triangle. R_w is regularity, 0 ≤ R_w ≤ 1, and the regularity of the equilateral triangle is 1. When the triangle tends to be long and thin, R_w tends towards zero.
The triangles in the CDT connect parcels that are adjacent (refer to Figure 6). The adjacent relationship between P1 and P2 is obtained from the edges of the connecting triangles, such as triangle ABC (AB). The heights (h) of all connecting triangles between two adjacent parcels are calculated and used to calculate the proximity between two adjacent parcels.
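A compact sketch of these steps (classifying connecting triangles into a parcel adjacency structure, filtering long and thin triangles, and collecting triangle heights) is given below. The CDT itself is assumed to be already computed, the regularity formula takes the form assumed above for Equation (3), the triangle height is measured here against the longest side for illustration, and the threshold of 0.1 is a hypothetical value.

```python
# Sketch: derive parcel adjacency from the connecting triangles of a CDT that is
# assumed to be already computed; long and thin triangles are discarded using
# the regularity measure R_w (assumed form: 4*sqrt(3)*w / (l1^2 + l2^2 + l3^2)).
import math
from collections import defaultdict

def triangle_area(p1, p2, p3):
    # Shoelace formula for the positive area of a triangle.
    return abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0

def regularity(p1, p2, p3):
    l1, l2, l3 = math.dist(p1, p2), math.dist(p2, p3), math.dist(p3, p1)
    return 4 * math.sqrt(3) * triangle_area(p1, p2, p3) / (l1**2 + l2**2 + l3**2)

def connecting_triangle_heights(triangles, coords, vertex_parcel, min_regularity=0.1):
    """triangles: vertex-index triples of the CDT; coords: index -> (x, y);
    vertex_parcel: index -> parcel id. Returns, per adjacent parcel pair, the
    heights of its connecting triangles (measured against the longest side)."""
    heights = defaultdict(list)
    for i, j, k in triangles:
        parcels = {vertex_parcel[i], vertex_parcel[j], vertex_parcel[k]}
        if len(parcels) != 2:
            continue                                 # not a connecting triangle
        p1, p2, p3 = coords[i], coords[j], coords[k]
        if regularity(p1, p2, p3) < min_regularity:
            continue                                 # drop long and thin triangles
        base = max(math.dist(p1, p2), math.dist(p2, p3), math.dist(p3, p1))
        heights[tuple(sorted(parcels))].append(2 * triangle_area(p1, p2, p3) / base)
    return heights
```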
Method for Measuring the Proximity of Parcels
In spatial clustering or pattern recognition, the distance between geographic elements is often calculated using the Euclidean distance, and a classification result is yielded according to the distance between two elements (i.e., adjacent elements are assigned to the same class). Some studies [52,62,63] have attempted to use the centroid to represent a line, a polygon or an elements group, considering that the influence of the size, direction, layout and other factors of the element on the mode structure can be disregarded in the recognition process. For a polygon element, such as a parcel, a major problem with the traditional distance calculation methods (e.g., maximum distance, minimum distance, and centroid distance) is that they only measure the distance between two points or the centroid distance of parcels and cannot adequately describe the similarity of the form between adjacent parcels and the compactness of discrete parcels.
An analysis of the characteristics of a parcel indicates that when two parcels are adjacent and have topological connectivity, their adjacent edges have a similarity of line segments, i.e., the natural extension orientation of adjacent edges is nearly equivalent. The two sets of skeleton points, which comprise the line segments, have the same central tendency and a fuzzy matching relationship; the distance between two line segments is nearly equal everywhere. To express this kind of similarity and further extract the proximity between two parcels, we applied the method of the Hausdorff-like distance (HD) to construct the calculation method of proximity, considering that the Hausdorff distance is the distance between two proper subsets in metric spaces [64,65].
The HD from set X to set Y is a maximin function that is defined as follows:

h(X, Y) = max_{x ∈ X} min_{y ∈ Y} ‖x − y‖    (4)

where x and y are points of set X and set Y, respectively, and ‖·‖ is a certain distance norm between feature points x and y, such as the sum norm, Euclidean distance, and maximal norm. A more general definition of HD is as follows:

H(X, Y) = max{ h(X, Y), h(Y, X) }    (5)

which defines the HD between X and Y, while Equation (4) is applied to the HD from X to Y (also referred to as the directed Hausdorff distance). The two distances h(X, Y) and h(Y, X) are sometimes termed the forward HD and backward HD, respectively, of X to Y. In this paper, the heights of the connecting triangles between two blocks were taken as the HD's distance norm (refer to Equation (6)). Considering that the HD is very sensitive to noise points (that is, sensitive to outliers), to solve this problem, the median of the sequence of heights replaces the maximum of these heights in Equation (5), which not only effectively avoids noise interference but also reflects the central tendency of the set of boundary points in parcels.
Proximity(X, Y) = median( heightList( triangles(X, Y) ) )    (6)

where triangles(X, Y) represents the connecting triangles between two adjacent parcels, and heightList( * ) is storage for the heights of the connecting triangles.
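Under this definition, the proximity of two adjacent parcels reduces to the median of their connecting-triangle heights; a short sketch consistent with the assumed form of Equation (6), reusing the height lists collected in the previous sketch:

```python
# Parcel proximity as the median of connecting-triangle heights (assumed form
# of Equation (6)); heights come from connecting_triangle_heights() above.
from statistics import median

def parcel_proximity(height_list):
    return median(height_list)
```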
Description of Urban Parcel Grouping (UPG) Algorithm
The graph-based grouping method is the most common approach in the grouping of polygons [43,45,[48][49][50]55]. As the parcel proximity calculation method proposed in this paper is a binary operation and satisfies commutative law (i.e., the adjacency relationship is bi-directional), an adjacency matrix of an undirected graph can be established for storing the proximity between parcels. Construction of the parcel adjacency matrix was primarily filled with three types of values (refer to Figure 7a): (1) for truly adjacent parcels, the adjacency matrix was filled according to the proximity between two parcels; (2) when two parcels are non-adjacent, the adjacency matrix was filled with the centroid distance; (3) for the part connected by long and thin triangles, we chose a random constant that was larger than the maximum proximity to fill the adjacency matrix.
The parcel grouping combined compactly distributed parcels. Under the constraint of the distance threshold, the proximity of parcels is transitive and ensures parcel connectivity within a certain range. The parcel grouping process was an adjacent-searching process, as illustrated in Figure 7b. Therefore, this paper builds parcel grouping trees according to the depth-first traversal, forms a grouping forest, and so obtains the result of the parcel grouping.
The proposed algorithm for grouping parcels involved three main steps (refer to Figure 7c): (1) Extract the keys of a hash table, which stores conflicting parcels, and save these keys as a new list named ParcelID. (2) Iterate over the ParcelID. In the loop, we pop the value named idCurrent at the top of the ParcelID and then input the distance threshold, idCurrent, ParcelID, and the parcel's adjacent matrix into the depth-first traversal function to obtain the visited list of parcels. (3) Add the visited list to a hash table named Group_Result and remove it from ParcelID, update the ParcelID and enter the next loop. The algorithm terminates when ParcelID is empty.
With regard to the setting of the distance threshold, a proximity sequence was constructed, that is, the set of proximity values among all conflicting parcels. According to the series of hierarchical models, the entire proximity sequence was divided into several levels, and the minimum value in the current level was selected as a distance threshold. Level-by-level tuning is achieved by hierarchically setting the distance threshold. The numerical series hierarchical models can be calculated as follows: where S represents the proximity list, S_1 to S_i belong to the first level, S_{i+1} to S_{i+j} belong to the second level, and S_{i+1+j} belongs to the third level. By analogy, we obtained a hierarchical proximity sequence. The result of level-by-level tuning is illustrated in Figure 8.
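A minimal sketch of the depth-first grouping step described above, assuming that pairwise proximities have already been computed for the conflicting parcels (function and variable names are hypothetical):

```python
# Depth-first grouping: parcels whose pairwise proximity is at or below the
# distance threshold are connected, and each connected component forms a group.
def group_parcels(proximity, parcel_ids, threshold):
    """proximity: dict mapping (parcel_a, parcel_b) -> proximity value of
    conflicting parcels; parcel_ids: iterable of all parcel ids."""
    neighbours = {p: set() for p in parcel_ids}
    for (a, b), d in proximity.items():
        if d <= threshold:
            neighbours[a].add(b)
            neighbours[b].add(a)

    groups, visited = [], set()
    for start in parcel_ids:
        if start in visited:
            continue
        stack, group = [start], []
        while stack:                       # iterative depth-first traversal
            p = stack.pop()
            if p in visited:
                continue
            visited.add(p)
            group.append(p)
            stack.extend(neighbours[p] - visited)
        groups.append(group)
    return groups
```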
According to the results, we observed that the number of groups easily changed with the levels. For example, at level 1, each parcel constituted a group (see Figure 8a) or, at level 13, all parcels formed only one group (see Figure 8c). According to Gestalt theory, this was not a very good means of grouping, and could not effectively convey the compactness and proximity mutation. Therefore, an optimum number of groups should be set to make the grouping process more automated and obtain a reasonable grouping result.
Method of Obtaining the Optimum Grouping Result
We used cluster validation indices to measure whether a structure identified by the grouping analysis was adequate and how appropriately a parcel was clustered. Among the current indices, the silhouette criterion [66] is the most prevalent index for determining an appropriate value of a cluster number. The silhouette value ranges between −1 and 1. A high value (near 1) indicates that an object is appropriately clustered and is well separated from other clusters.
Using the method of measuring the silhouette value as a reference, a calculation index for the intra-group compactness of the parcels was proposed, based on the parcel grouping method (refer to Section 3.4.1) and the proximity calculation method (refer to Section 3.3.2). We only considered parcels that are directly adjacent to one another (i.e., the conflicting parcels) and used the proximity between the two parcels as an input to the compactness calculation model. Once the distance threshold (DT) is set, for a parcel that conflicts with parcel i, a proximity less than the DT is referred to as the intra-group distance (INDIS), while a proximity greater than the DT is referred to as the inter-group distance (OUTDIS).
For each parcel i, let Avg_INDIS(i) be the average INDIS between parcel i and all other parcels within the same group. Let Avg_OUTDIS(i) be the average OUTDIS of parcel i to all parcels in any other group, of which i is not a member. We define the compactness as follows:

Compactness(i) = ( Avg_OUTDIS(i) − Avg_INDIS(i) ) / max( Avg_OUTDIS(i), Avg_INDIS(i) )    (9)

which can also be written as follows:

Compactness(i) = 1 − Avg_INDIS(i)/Avg_OUTDIS(i), if Avg_INDIS(i) < Avg_OUTDIS(i); 0, if Avg_INDIS(i) = Avg_OUTDIS(i); Avg_OUTDIS(i)/Avg_INDIS(i) − 1, if Avg_INDIS(i) > Avg_OUTDIS(i)    (10)

This definition indicates that Compactness(i) is not consistent with the silhouette criterion. For Compactness(i) to be 1, we require Avg_INDIS(i) to be 0. As Avg_INDIS(i) is a measure of how dissimilar i is to its cluster, a small value indicates that it is well matched. A large Avg_OUTDIS(i) implies that i is poorly matched to its neighbouring group. Thus, when the compactness is equal to 1, all parcels become separate groups (refer to Figures 8a and 9b). The compactness is equal to minus one when Avg_OUTDIS(i) is zero, in which case all parcels are aggregated into one group (refer to Figures 8c and 9a). In addition to the two special cases, Avg_OUTDIS(i) is always greater than Avg_INDIS(i) according to their definitions (refer to Figure 9c). When a parcel is completely surrounded by other parcels in the same group, so that it is not directly adjacent to the parcels of other groups, its Avg_OUTDIS is replaced by the minimum OUTDIS of all other parcels within the same group. The average Compactness(i) for all parcels of a group is a measure of how tightly the parcels in the group are grouped. Thus, the average Compactness(i) for all data of the entire dataset is a measure of how appropriately the data have been grouped. To illustrate the relationship between the proximity of parcels and the compactness of parcel groups, we simulated the parcel grouping process (refer to Figure 10). Figure 10a shows five parcels, and the proximity relationship between these parcels is d1 < d2 < d3 < d4 < d5. To clearly identify the compactness of parcel groups, we assumed that d1 was slightly less than d2, d3 was approximately 3 times d2, d4 was approximately 2 times d3 and d5 was approximately 1.5 times d4. The grouping process started from ungrouped parcels (refer to Figure 10a) and gradually increased the proximity threshold to obtain different grouping results (refer to Figure 10b–f).
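A minimal sketch of the per-parcel calculation, under the silhouette-like form assumed above for Equations (9) and (10); the averaged INDIS and OUTDIS values are assumed to have been computed beforehand:

```python
# Per-parcel compactness, assuming the silhouette-like form of Equation (9):
# Compactness(i) = (Avg_OUTDIS - Avg_INDIS) / max(Avg_OUTDIS, Avg_INDIS).
def compactness(avg_indis, avg_outdis):
    if avg_indis == 0:
        return 1.0     # parcel has no intra-group neighbours below the threshold
    if avg_outdis == 0:
        return -1.0    # all conflicting parcels fall inside the same group
    return (avg_outdis - avg_indis) / max(avg_outdis, avg_indis)
```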
On the basis of Equation (10), we calculated Compactness(Pi) of each parcel (refer to Table 3) and drew the curve to describe the relationship between the average of Compactness(Pi) and the proximity threshold (refer to Figure 10g). The curve exhibited a declining trend as a whole: (1) when the value of the curve was 1 or close to 1, the corresponding grouping result was ungrouped, that is, numerous parcels became separate groups; (2) when the value of the curve was −1, all parcels aggregated into one group. The curve has a mutation value, which corresponds to the grouping result illustrated in Figure 10c. Among all grouping results, Figure 10c showed that the inter-group distance was larger than the intra-group distance, which was consistent with cognitive habit and satisfied the requirements of compactness and continuity in Gestalt theory. We determined that the reason for the mutation value was that Avg_OUTDIS(i) was markedly larger than Avg_INDIS(i). Therefore, to obtain a reasonable parcel grouping result, we should determine the proximity threshold that has a mutation value in the compactness curve. Through the analysis of Table 3, we also determined that the compactness of the internal parcel grouping increased gradually, as in P1. If the number of such parcels in the grouping result also increases gradually, then the value of the curve may increase.
Table 3. Description of relationship between proximity threshold and compactness.
Analysis of Infiltration Behaviour
According to Section 3.2, we used the metrics (Definitions 1 to 3) to analyse the two sets of parcel data (Parcel 1 and Parcel 2, refer to Section 3.1.2) with regard to three aspects: the MLU index, component similarity, and the dominant function.
MLU aims to measure the mixing degree of land use. Due to the complicated land use in the city centre, both datasets showed a higher mixed land use index (refer to Figure 11), which is consistent with the research of Long [29]. We also discovered that in most cases, the single-function regions were adjacent to the mixed function area. Region agglomeration development was observed in mushrooming financial districts, retail malls, and technology centres, which generated more mixed development than planned by the government [67]. The similarity component is a quantitative expression of the imitation behaviours in urban parcels. According to Definition 3 (refer to Section 3.2), in this study, values greater than 0.5 represent a high degree of similarity, which is denoted by the red line in the figures (Figure 12); 0.3 to 0.5 indicates a moderate similarity, which is denoted by the blue line; and less than 0.3 represents a low degree of similarity or dissimilarity, which is denoted by the grey line. According to the results (refer to Figure 12), Parcel 1 has 33 instances of high-degree imitation behaviours, and Parcel 2 has 192 instances of high-degree imitation behaviours. The higher degree of similarity primarily occurred in clusters, and a large proportion occurred in adjacent parcels, from a global perspective.
In addition, a dissimilarity or lower similarity of components existed in parcels that were not directly adjacent. The dominant function was attributed to the joint action between functional complementarity behaviour and imitation behaviour. As the group size of parcels increased, the functional combination mode became more complicated (refer to Figure 13). To some extent, the adjacent parcels had a complementary dominant function, and most cases involved the imitation of the dominant function. This analysis concluded that the infiltration behaviour primarily occurred in adjacent parcels, which is a trend from discrete distribution to aggregated distribution. The combination of adjacent parcels can effectively increase the MLU index and complement the functions. From the perspective of urban form, the grouping of parcels based on the adjacent relationship is the macroscopic manifestation of infiltration behaviour. In addition, imitation behaviour and functional complementarity behaviour do not exist in isolation. Due to the long-term functional complementarity behaviour in parcels, a large number of interactions among personnel and purchase behaviours have emerged. In the process of seeking convenience within parcels, imitation behaviour is gradually formed in neighbourhoods. Therefore, infiltration behaviour motivates parcel grouping.
Furthermore, the quality of grouping results can be evaluated by infiltration behaviour. Acceptable grouping results should ensure that the influence of infiltration behaviours is considered as much as possible. Specifically, in this study, after the grouping, the degree of mixed land use within the group was improved, the same dominant functions were not separated, and fewer high-degree imitation behaviours were ignored.
Parcel Grouping Method Based on the Centroid Proximity
Urban parcels, unlike other urban elements, are usually not distinctly separated. The k-means++ algorithm is capable of clustering this kind of distribution and can ensure a better clustering centre than k-means. Considering that the centroid distance can be measured by the Euclidean distance, this study applied the k-means++ algorithm as a comparative algorithm to implement parcel grouping, and optimised the number of groups using the elbow method based on the sum of squared errors (SSE) and the silhouette coefficient. Comprehensively analysing the SSE curve and the silhouette coefficient curve, we let Parcel 1 have 8 clusters (K = 8) and Parcel 2 have 10 clusters (K = 10). The results are illustrated in Figure 14.
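A minimal sketch of this comparative baseline follows, assuming scikit-learn as the implementation and synthetic parcel centroids; the authors' data and exact settings are not reproduced here.

```python
# Sketch of the k-means++ baseline: cluster parcel centroids, then choose
# k from the SSE "elbow" and the silhouette coefficient, as described above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 1000, size=(200, 2))  # stand-in parcel centroids

sse, silhouettes = {}, {}
for k in range(2, 16):
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0)
    labels = km.fit_predict(centroids)
    sse[k] = km.inertia_                          # sum of squared errors (SSE)
    silhouettes[k] = silhouette_score(centroids, labels)

# Here k is taken at the silhouette maximum; in the paper the SSE elbow and
# silhouette curve were inspected jointly (k = 8 and k = 10 for the two datasets).
best_k = max(silhouettes, key=silhouettes.get)
print(best_k, sse[best_k], silhouettes[best_k])
```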
Compared with the results in Section 4.1, this step evaluated the grouping results with regard to three aspects: (1) the average increase of MLU within groups, (2) whether the same dominant functions were separated, and (3) the ratio of ignored imitation behaviours to all high-degree imitation behaviours. The evaluation results are presented in Table 4. We found that the grouped parcels could increase the MLU index and achieve functional complementarity. However, this grouping mode disregarded the functional connectivity between parcels. Some clustered high-degree imitation behaviours (refer to Figure 15) and some identical dominant functions were separated into different groups.

According to the grouping results, although the parcels in the same group may satisfy centroid adjacency, large gaps were observed within a group (refer to Figures 14 and 15). These gaps not only interrupted the proximity and continuity elaborated in Gestalt theory but also disregarded the potential semantic relativity of urban parcels and further weakened the spatial correlation of parcels. Based on this discussion, as the morphology of parcels is capable of reflecting semantic relativity in urban space, grouping parcels based on the centroid distance alone is not sufficient.
Analysis of the UPG Method
First, we calculated the proximity between two conflicting parcels according to the method proposed in Section 3.3. The proximity sequence and the parcel adjacency matrix were formed by constructing the CDT and calculating proximity. To obtain a better expression of the parcel grouping results, according to Equations (7) and (8) (refer to Section 3.4.1), we built the distance threshold sequence based on the proximity sequence. The values in the distance threshold sequence were fed into the parcel grouping algorithm to obtain grouping results that corresponded to different threshold levels. The compactness that corresponds to each level was also calculated based on Equations (9) and (10) to form a compactness curve. The following figures show the compactness curves of the two datasets.
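Equations (9) and (10) are not restated in this section; the sketch below therefore assumes a silhouette-style definition of Compactness(i) that matches the stated property of approaching 0 when Avg_INDIS(i) and Avg_OUTDIS(i) are close. Function and variable names are ours, not the authors'.

```python
# Hedged sketch of the per-level compactness computation for the curve.
import numpy as np

def compactness(avg_indis, avg_outdis):
    """Per-parcel compactness: 0 when intra- and inter-group proximities
    are equal, approaching 1 for tight, well-separated groups."""
    a, b = float(avg_indis), float(avg_outdis)
    denom = max(a, b)
    return 0.0 if denom == 0 else (b - a) / denom

def level_compactness(groupings, proximity):
    """Average compactness for one threshold level.

    groupings: dict parcel_id -> group_id at this level
    proximity: dict (parcel_id, parcel_id) -> proximity distance
    """
    scores = []
    for p in groupings:
        indis = [d for (u, v), d in proximity.items()
                 if p in (u, v) and groupings[u] == groupings[v]]
        outdis = [d for (u, v), d in proximity.items()
                  if p in (u, v) and groupings[u] != groupings[v]]
        if indis and outdis:  # internal/isolated parcels contribute nothing
            scores.append(compactness(np.mean(indis), np.mean(outdis)))
    return float(np.mean(scores)) if scores else 0.0
```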
As Figure 16 illustrates, curve (a) generally declines, whereas curve (b) declines at first and then rises. Both curves begin with a significant decline, which we named the "unstable region". According to Equation (10), when Avg_INDIS(i) is close to Avg_OUTDIS(i), the compactness of a parcel is close to 0. The significant decline occurs because many parcels are in this state at the early levels: many parcels form a group independently, and only a few parcels are grouped. The grouping results at this stage are inconsistent with Gestalt theory and meaningless to our study, so we ignored the unstable region at the front of the curves. Curve (b) rises because the number of internal parcels increases, and their Compactness(i) (refer to Equation (10)) increases with the proximity level. Because internal parcels have already been grouped, they do not affect the grouping results.
Some sudden changes in compactness (such as at level 16 in both curves) reveal that the compactness of two adjacent levels has changed significantly. Extracting these mutation characteristics of compactness allows a reasonable expression of grouping that satisfies Gestalt theory to be portrayed (refer to Section 3.4.2). When the curves tended to be steady, we chose the point at which the compactness distinctly changed. We discovered that both datasets obtained good visual expressions at level 16 (refer to Figure 17), which effectively conveyed compactness and adjacency. Combined with the results of Section 4.1, the two average MLU indices increased by 0.24 and 2.45 (refer to Table 5), which indicated that the functional configuration within groups was improved. Our method maintained the imitation behaviours of the parcel components (refer to Figure 18), with a distinct decrease in the ratio of ignored imitation behaviours compared with the k-means++ algorithm (refer to Table 4). Based on this analysis, our method not only achieves a good visual expression of parcel grouping but also ensures that the functional connectivity is not destroyed.
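Locating this proximity mutation can be sketched as a simple peak search over adjacent-level differences once the unstable region is skipped. The cutoff below is a hypothetical parameter; the paper identified level 16 by inspecting the curves.

```python
# Sketch: find the level with the sharpest compactness change after the
# unstable region at the front of the curve.
import numpy as np

def proximity_mutation(compactness_curve, skip=5):
    """Return the level (index) after `skip` with the largest absolute
    change in compactness between adjacent levels."""
    c = np.asarray(compactness_curve, dtype=float)
    deltas = np.abs(np.diff(c))          # change between adjacent levels
    return int(np.argmax(deltas[skip:]) + skip + 1)

# Example with a synthetic curve: a steep start, then a jump partway down.
curve = np.concatenate([np.linspace(0.9, 0.5, 10),
                        np.linspace(0.49, 0.45, 5),
                        [0.30],                      # sudden change
                        np.linspace(0.29, 0.28, 6)])
print(proximity_mutation(curve))  # -> index of the sudden change (15 here)
```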
Practical Application of the UPG
In Section 4.3, we further verified the validity of the proposed method using Xicheng District in Beijing. CDT was utilised to construct the spatial proximity relationships; the result is shown in Figure 19b. The compactness curve showed a variation value (refer to Figure 19d), which we referred to as a proximity mutation. As depicted in our algorithm, we used level 22, to which the proximity mutation corresponded, as the input value, and obtained the ideal grouping result (refer to Figure 19c), which consisted of 51 groups. Compared to the MLU index of each parcel before grouping (refer to Figure 19a), the MLU index after grouping increased by 1.68 on average, and the overall growth was 85.71, which indicated that the parcels obtained functional complementarity.
As we discussed previously, narrow roads are more suitable for residents to walk or bike along, which not only improves the mobility of residents but also increases the frequency of parcel interaction. Thus, grouping results should ensure that the roads within a group are narrower and the roads between two groups are wider, which enables parcels within a group to interact frequently and enhances the spatial correlation of parcels in the same group. According to the 'Technical code for urban road engineering (GB51286-2018)', combined with Baidu Map for approximate distance measurement, we calculated that the width of the roads within parcel groups ranged from 8 m to 20 m, in agreement with the width range of branch roads, i.e., the fourth-level roads in the road grade system. The branch road, which has both traffic and service functions, primarily provides suitable living space, parking space, and necessary public space. The grouping result is consistent with the original intention of our design.
The grouping result enables groups to be merged upward into a larger functional region (refer to Figure 20a). The case of a functional region that is partitioned into several groups is named "hard segmentation" in this paper. Hard segmentation disperses the dominant function of a functional region, which may weaken the dominant role of this function after grouping. As illustrated in Figure 20b, a functional region was partitioned, with one part assigned to Group 1 and the other to Group 2. The dominant function (park and plaza) of this region, however, was no longer dominant; that is, the dominant function in Group 1 is residential area and the dominant function in Group 2 is catering. Therefore, hard segmentation should be avoided as much as possible. The number of groups that partition a functional region was named the "number of hard segmentations". We counted the number of hard segmentations and listed these in Table 6.

To understand how well the grouping result matches the urban land use, we measured the hard segmentation by the amount of overlap that exists between the grouping results (refer to Figure 19c and Figure 21a,b) and Beijing's Urban Master Plan (2016-2035) (http://ghzrzyw.beijing.gov.cn/art/2018/1/9/art_5096_544304.html). To ensure the validity of the contrast experiment, we performed the k-means++ method with two different settings of the cluster number: (1) the optimised cluster number (k = 7) and (2) the same cluster number as the group number of the UPG (k = 51).
According to the evaluation results presented in Table 6, we discovered that the hard segmentation ratio of our result is 9.80%, which is considerably lower than that of the other two methods, i.e., 57.14% and 39.22%. The UPG method can effectively organise urban parcel data and ensures that the potential semantics in urban morphology are rarely destroyed when the spatial interaction patterns between parcels are fully considered. When given sufficient semantic information, our method can infer the urban structure and urban function more reasonably than other methods. This evaluation illustrates that our methodology, using a finer division, has the potential to yield reasonable and rational results, identify the functional connectivity in urban space, and support the development of a new method for characterising urban behaviour.
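One plausible reading of the hard segmentation ratio (the number of groups that partition some functional region, divided by the total number of groups, consistent with 5 of 51 groups giving roughly 9.80%) can be sketched as follows; the input mappings are hypothetical stand-ins for the overlay with the master plan.

```python
# Sketch of a hard-segmentation count: a functional region is "hard
# segmented" when its parcels are split across more than one group.
from collections import defaultdict

def hard_segmentation_ratio(parcel_to_group, parcel_to_region):
    groups_per_region = defaultdict(set)
    for parcel, region in parcel_to_region.items():
        groups_per_region[region].add(parcel_to_group[parcel])
    # Groups that partition some functional region ("hard segmentations").
    hard_groups = set()
    for groups in groups_per_region.values():
        if len(groups) > 1:
            hard_groups.update(groups)
    n_groups = len(set(parcel_to_group.values()))
    return len(hard_groups) / n_groups if n_groups else 0.0

# Toy example: the "park" region is split between groups 1 and 2.
p2g = {"a": 1, "b": 1, "c": 2, "d": 3}
p2r = {"a": "park", "b": "park", "c": "park", "d": "catering"}
print(hard_segmentation_ratio(p2g, p2r))  # 2 of 3 groups -> 0.67
```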
Discussion
Some existing urban planning theories, such as functional zoning and the neighbourhood unit, promote the development of parcels. However, these methods continually expand the size of regions.
Fine-grained open parcels are usually more realistic than coarse-grained regions, offering sustainable development capabilities and better city vitality. Jan Gehl [68] contended that street life scenes become dull and desolate when large units replace small, vivid units. Thus, considering the grain size of parcels is imperative in future urban planning. This research contributes a new methodology for constructing the urban spatial structure based on fine-grained parcel data, which addresses the deficiencies of other parcel grouping methods. The proposed parcel grouping method in this study is a method of urban data organisation. Unlike other methods, such as urban function zoning [26,27,37] and block aggregation methods [56,57], this method differs from the traditional top-down planning scheme and does not merge the parcels. Therefore, the zoning process can be realised by grouping parcels and then establishing functional zoning within each group, which can reduce the grain size of regions and obtain better zoning results. The bottom-up parcel grouping method designed in this paper was guided by urban morphology, the Gestalt principle, and the MAUP. On the basis of the CDT and the Hausdorff distance, the adjacency relationships of parcels were constructed, which enabled the grouping results to take both spatial planning and graphical representation into account. To effectively discover the proximity mutation among parcels, we also redefined the calculation of the silhouette coefficient, that is, Compactness, and further optimised the grouping results by analysing the compactness curve. Compared with other research on grouping urban elements [48,50,55], our method has fewer defining parameters and is more automatic in selecting the number of groups, without extensive manual intervention.
A city is a complex system. As the basic unit of a city, a parcel reflects the characteristics of the city and is a microcosm of a larger system. The functional connectivity among parcels is the source of parcel vitality and parcel diversity. This paper described parcel diversity according to the infiltration behaviours of the components between two parcels; these were elaborated and modelled based on three main aspects: dominant function, functional complementarity, and imitation behaviour. Combined with the test results of the infiltration behaviour, our grouping method can not only improve the degree of mixed land use in groups but also ensure that the imitation behaviour is not destroyed. To better test the validity, we compared our method with the k-means++ algorithm based on these indicators, which was further demonstrated in Section 4.3 with the hard segmentation ratio. According to the results, our method can both protect the functional region layout under the government's macro-control and obtain a fine-grained grouping result on a human scale.
Roads are the main space for urban public activities and the dividing lines of an urban space. The width of a street not only embodies the compactness of the parcel distribution but also potentially reflects the resident trip mode (e.g., the narrower a road is, the more likely travel will be by bicycle or on foot). A narrow road is one of the main reasons for the infiltration of components. Narrow roads can reduce car speeds and create a better environment for walking and cycling, which makes the urban area more accessible, creates fewer urban heat islands, lowers the cost of infrastructure development, and so on. Narrow roads increase the density of gaps and passages in a city and provide space for various urban systems, which connect the various elements in the city as a whole to optimise the urban structure. Our method made full allowance for the role of narrow road space. According to the grouping result, the road spaces between the groups were sufficiently wide and primarily consisted of urban trunk roads and expressways. The roads inside each group primarily consisted of branch roads, whose width ranged from 8 m to 20 m.
Conclusions and Further Research
This work has presented a method for urban parcel grouping. The method makes use of the concepts of urban morphology and urban functional connectivity to analyse urban infiltration behaviours and form parcel groups. The two main research questions raised in the related work section were addressed and solutions proposed: (1) the calculation method of proximity proposed in this study effectively measured the adjacency relationship and compactness between parcels, which ensured a good visual expression of the parcel grouping results; (2) the quantified infiltration behaviours were used to evaluate grouping results, which considers urban semantics, in contrast to purely geometric methods. The effectiveness and practicability of the proposed grouping method were validated using actual urban parcel data from Beijing and compared with the k-means++ method. The results show that our method not only achieves fine-grained grouping results but also fits with human cognition. It also takes into account the infiltration behaviours of urban parcels and preserves urban functions more completely than other methods.
Although we have successfully obtained reasonable parcel grouping results, we discovered that our method has the potential to fall into local optima (refer to Figure 19c, where some parcel groups are too small and should be combined with adjacent ones), which is a bottleneck for improved results. Therefore, in future research, we can take advantage of the gravitational search algorithm (GSA) or the particle swarm algorithm to improve the current algorithm and achieve the global optimum. Another important issue for future research is that urban design should leverage human behavioural data and human location data, which provide a shortcut for demonstrating the interaction between residents and cities. Thus, we plan to investigate additional sensor data, e.g., public bicycle-sharing data, heat island data, and other semantic signatures, to further quantify the problems in urban design and make efforts to transform urban design from 'space oriented' to 'human oriented'. We will build a 'city portrayal' by combining the parcel grouping method in this study with a top-down knowledge engineering approach based on urban form, human geography, and urban planning.
Author Contributions: Shuqing Zhang and Peng Wu conceived the original idea for the study, and Shuqing Zhang and Huapeng Li provided the financial support. Peng Wu was responsible for the design of the study, setting up the experiments, and writing the initial draft of the manuscript. Peng Wu, Xiaohui Ding, and Yuanbing Lu conducted the processing and analysis of the data. Patricia Dale polished the language; Shuqing Zhang, Patricia Dale, and Huapeng Li revised the manuscript critically. All authors read and approved the final manuscript. | 2019-07-14T07:01:49.543Z | 2019-06-16T00:00:00.000 | {
"year": 2019,
"sha1": "b8e60a82b04b8e6a3f97f73e765ca8745bc1a501",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2220-9964/8/6/282/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "226907057f3174e5c8ba75be1815586690f0cefd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
132796142 | pes2o/s2orc | v3-fos-license | DNA Sequence variability analysis of the gD and the UL36 genes of Bovine herpesvirus-1 isolated from field cases in Indonesia
ABSTRACT To investigate the genotype of Bovine herpesvirus-1 (BHV-1) isolated in Indonesia (n = 10), the DNA sequences of fragments of two genes, comprising the middle third of the gD gene and the downstream end of UL36, were determined using nested PCR. All the samples were classified as BHV-1.2 according to the deduced amino acid sequence of gD. On the other hand, analysis of the nucleotide sequence of UL36 indicated the presence of insertions or deletions (indels) compared with the reference sequences and with each other; these indels made UL36 more diverse than the gD sequence, and the classification of BHV-1 subtype based on the deduced amino acid sequence of UL36 differed from that obtained using the gD protein sequence. These results suggested that while gD sequence analysis is suited to a rough classification of BHV-1 subtype, the UL36 sequence permits detection of BHV-1 subtype polymorphisms.
Introduction
Since the first outbreaks of infectious bovine rhinotracheitis (IBR) in 1950 in Colorado and in 1953 in California (Schroeder and Moys 1954; Miller 1955; Yates 1982), this disease has continued to spread worldwide. The first clinical case of IBR in Indonesia was reported in the Lampung region in 1981 (Marfiatiningsih 1982), and the disease has persisted in this country since then.
The classification of BHV-1 into various subtypes has changed as studies of the virus have provided further details on the viral genome. BHV-1 subtypes 1.1 and 1.2 are very closely related and are classified based on specific symptoms in infected cattle (Muylkens et al. 2007). Molecular methods, including enzymatic restriction fragment length polymorphism (RFLP) (Metzler, Schudel, et al. 1985; Rocha et al. 1998; Takiuchi et al. 2005; Kamiyoshi et al. 2008), subtype-specific monoclonal antibodies (Metzler, Matile, et al. 1985), restriction endonuclease mapping (Brake and Studdert 1985; Engels et al. 1986), and characterization using single nucleotide polymorphisms (SNPs) (Chase et al. 2017), have been used to differentiate types and subtypes.
The most extensively characterized gene is that encoding glycoprotein D (gD, located in the US region and also known as US6); gD has functions related to viral immunity and antigenicity (Collins et al. 1993). The UL36 gene has also been well studied recently. UL36 is located in the UL region and encodes a tegument protein. The protein encoded by UL36 exhibits strong divergence among the various BHV-1 subtypes. In different subtypes of BHV-1, the upstream region of the UL36 gene harbours insertions/deletions (indels) relative to its counterparts, and these differences are responsible for diversity among BHV-1.2b isolates. Herpesviruses may evolve faster than their hosts (Thiry et al. 2006), with the herpesviral evolutionary rate estimated at 3 × 10⁻⁹ per amino acid site per year (McGeoch et al. 2000).
The research described here had two related aims, both pursued using Indonesian field isolates of BHV-1. The first goal was to characterize the molecular basis of the obvious subtype-specific differences in the sequences of the middle third of the gD gene (a region known to contain SNPs) and the downstream end of the UL36 gene (a region known to contain indels). The second goal was to compare the evolutionary processes based on the sequence variations detected in gD and UL36.
Preparation and amplification of DNA
Samples from nasal swabs were prepared per OIE instructions (OIE 2010). Samples from tracheal organs were prepared as described previously (Hidayati et al. 2018). DNA was purified using a QIAamp DSP DNA Mini Kit (Qiagen, Hilden, Germany); the resulting nucleic acid was recovered in 200 µL elution buffer consisting of 10 mM Tris-Cl and 0.5 mM EDTA, pH 9.0. DNA fragments were amplified by nested PCR (Vilcek et al. 1994). The sequences of the gD primers were gD- The PCR reaction mixtures in each round were generated using KAPA HiFi HotStart ReadyMix (KAPA Biosystems, Wilmington, MA) and consisted of 2.5 mM MgCl2, 0.3 mM each of the dNTPs, 1 U of DNA polymerase, 10 pmol each of the forward and reverse primers, and less than 100 ng template DNA per reaction, with the balance of the volume consisting of PCR-grade water. First-round amplification of the gD fragment was performed as follows: initial denaturation at 95°C for 3 min; 35 cycles of denaturation at 98°C/20 sec, annealing at 55°C/15 sec, and extension at 72°C/15 sec; and a final extension at 72°C/5 min. Second-round amplification of the gD fragment was performed according to the same programme but with annealing at 59°C. First-round amplification of the UL36 fragment was performed as follows: initial denaturation at 95°C for 3 min; 35 cycles of denaturation at 98°C/20 sec, annealing at 60.7°C/15 sec, and extension at 72°C/5 sec; and a final extension at 72°C/5 min. Second-round amplification of the UL36 fragment was performed according to the same programme but with annealing at 57.6°C for 5 sec. PCR reactions were run using a Thermal Cycler Dice Gradient TP600 apparatus (TAKARA, Otsu, Shiga, Japan). The PCR products were examined by agarose gel electrophoresis in Tris acetate-EDTA (TAE) buffer; the resulting gels were stained with ethidium bromide (0.5 µg/mL) and bands were visualized by UV transillumination.
DNA cloning and sequencing
DNA fragments from each second-round PCR were incubated with the enzyme portion of the 10× A-attachment Mix (TOYOBO, Osaka, Japan) to generate overhanging adenines at the 3′ ends. The resulting products were purified using a FastGene Gel/PCR Extraction Kit (Nippon Genetics, Tokyo, Japan), and the DNA fragments were ligated into a plasmid using the pGEM®-T vector system (Promega Corp., Madison, WI); the ligation products were recovered by transformation into E. coli using NEB® Stable high-efficiency cells (New England BioLabs, Ipswich, MA) according to the manufacturer's instructions. Transformants were selected on Luria-Bertani (LB) agar plates containing ampicillin (100 µg/mL).
Nucleotide sequence determination was performed using an Applied Biosystems 3130 Genetic Analyzer (Applied Biosystems, Foster City, CA) according to the manufacturer's instructions. Purified plasmids were used as templates. When sequencing using the native plasmid failed, sequencing was performed using the linearized plasmid as the template. Specifically, plasmids were linearized by digestion with the restriction enzyme ScaI (TAKARA), which cuts at a single site in the plasmid. When sequencing using the plasmid as the template failed, the insert fragment was amplified using the same conditions as the second-round PCR reactions described above, but with the cloned plasmid as the template. The resulting amplicon was purified using a FastGene Gel/PCR Extraction Kit prior to use as the template in the sequencing reaction.
In silico alignment and computational analysis
Alignments were generated using Clustal W (MegAlign, DNAStar Lasergene, version 7; Madison, WI, USA). The prediction of AluI and TaqI restriction sites was performed using SeqBuilder (DNAStar Lasergene, version 7). The prediction of recombination sites was performed using DNA Sequence Polymorphism (DnaSP, version 6; Universitat de Barcelona).
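As an illustration of the restriction-site prediction step, the following sketch uses Biopython's Restriction module in place of SeqBuilder; the input fragment is a made-up sequence, not an actual isolate sequence.

```python
# Sketch: predict AluI and TaqI cut positions in a DNA fragment.
from Bio.Seq import Seq
from Bio.Restriction import AluI, TaqI

fragment = Seq("GCTCGAAGCTCGACGCAGCTGGATCGATT")  # hypothetical gD-like fragment
print("AluI (AG^CT) cut positions:", AluI.search(fragment))
print("TaqI (T^CGA) cut positions:", TaqI.search(fragment))
```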
Amplification of gD and UL36 fragments
The sequences were deposited in DDBJ (DNA Data Bank of Japan), and the accession numbers are LC318523-LC318532 and LC368822-LC368831. The gD primers were designed to yield a 511-bp amplicon; the UL36 primers were designed to yield a 211-, 184-, or 157-bp amplicon (Figure 1).
Sequence analysis of the gD and UL36 fragments
The sequence variability of the gD and UL36 fragments is summarized in Table S1.
The alignment of the gD fragments from BHV-1.1, BHV-1.2, and the study samples revealed strong sequence similarity, with mean ± SD identities of 98.04 ± 0.32% (BHV-1.1 vs samples), 98.13 ± 0.42% (BHV-1.2 vs samples), and 98.34 ± 0.42% (among samples). Specifically, 4 separate point mutations were detected in this region (at nucleotide residues 462, 631, 666, and 912). Two of the SNPs (nt 631 and 666) were predicted to result in changes in the amino acid sequence (aa 211 and 222) of the gD glycoprotein when comparing BHV-1.1 and BHV-1.2 (Figure 2(a)). The reassortment of the SNPs, yielding the reassortment of amino acids at the corresponding positions in the gD protein, suggests that homologous recombination has occurred within the gD sequence. The calculated nonsynonymous to synonymous substitution ratio (dn/ds) was 0.22 (dn/ds < 1). This value indicates negative selection (H0 = dn < ds), meaning that these sequences have been subjected to purifying selection (Traesel et al. 2014). A previous study (Saepulloh and Adjid 2010) using AluI and TaqI restriction enzymes concluded that the Indonesian isolates grouped with BHV-1.1. In the present study, this region contained variations that included AluI and TaqI sites, as well as palindromic and inverted sequences (TCGAAGCTCGACGCAGCT), indicating that the samples should be assigned to subtype BHV-1.2. Notably, other work has revealed that the US region (within which gD is located) exhibits flipping with respect to the direction of replication (Chowdhury and Sharma 2012), although the possible function of this inversion remains unknown (Hammerschmidt et al. 1988). Several laboratories have hypothesized that this inversion plays a role in viral recombination, yielding different isomeric forms of the virus (Sheldrick and Berthelot 1975; Dutch et al. 1992; Schynts et al. 2003).
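The dn/ds ratio reported above can be illustrated with a deliberately crude counting sketch. It handles only codon pairs differing at a single position and uses a rough site normalisation, so it is a didactic approximation rather than the Nei-Gojobori or codeml procedures used in published analyses; the input sequences are invented.

```python
# Crude, illustrative dn/ds sketch for two aligned coding sequences.
from Bio.Seq import Seq

def crude_dn_ds(seq1, seq2):
    nd = sd = 0
    for i in range(0, min(len(seq1), len(seq2)) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        diffs = sum(a != b for a, b in zip(c1, c2))
        if diffs != 1:          # skip identical or multi-hit codons
            continue
        if str(Seq(c1).translate()) == str(Seq(c2).translate()):
            sd += 1             # synonymous difference
        else:
            nd += 1             # nonsynonymous difference
    # Very rough site normalisation: ~1/4 of codon positions synonymous.
    n_codons = min(len(seq1), len(seq2)) // 3
    n_sites, s_sites = 2.25 * n_codons, 0.75 * n_codons
    if sd == 0:
        return float("inf")
    return (nd / n_sites) / (sd / s_sites)

# dn/ds < 1 indicates purifying (negative) selection, as reported (0.22).
print(crude_dn_ds("ATGGCCCTGAAA", "ATGGCTCCGAAA"))  # -> ~0.33
```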
An alignment of the UL36 fragments from the Indonesian samples (n = 10) with the corresponding sequences from the reference genomes (n = 8) suggested a shift in the position of the deleted region without a change in the length of the missing nucleotides. The experimental samples showed mean ± SD identities of 95.28 ± 0.05% (BHV-1.1 vs samples), 95.39 ± 0.09% (BHV-1.2 vs samples), and 93.24 ± 0.12% (among samples). The indel in the Indonesian samples B/1, B/31, B/32, and L/33 corresponded to a loss of 27 nt, with occasional degeneracy due to point mutations (C to T at nt 7610 and 7662 relative to accession no. AJ004801), while samples L/5, L/6, L/9, L/10, P-252, and P-357 presented a 27-nt repeat (with a copy number of 2-5) relative to accession no. AJ004801 (nt 7625-7651) (Figure 1). The 27-nt repeat in the BHV-1.1 samples is predicted to encode the peptide DAYPPAPAH (Figure 2(b)). In the BHV-1.2 samples (B/1, B/31, B/32, and L/33), the first of the aforementioned nucleotide substitutions (at nt 20 of the first repeat) results in an amino acid substitution of Pro to Leu, yielding the peptide DAYPPALAH (where the underlined L indicates the altered aa sequence compared with that encoded by BHV-1.1). In contrast, the second substitution (at nt 18 of the second repeat) converts a GCC codon to GCT; because of the degeneracy of the code, both codons correspond to Ala, and the deduced peptide is unaltered compared with those encoded by the BHV-1.1 strains.
Phylogram variability of gD and UL36 fragments
Phylogenetic trees for the protein sequences are shown in Figure 3. The tree derived from the gD amino acid sequences (predicted from amplified gD fragments) (Figure 3(a)) suggested that all 10 of the novel Indonesian samples formed a clade separate from the BHV-1.1 reference strain. Specifically, the gD peptides encoded by the gD fragments from these samples were most similar to the BHV-1.2 reference strains, i.e. BHV-1.2 strain B589 from Australia (KM258881.1) and BHV-1.2 strain K-22 from the USA (KM258880.1). The gD peptide encoded by sample L/33 was identical to those encoded by BHV-1.2 reference strains SM023 (an American isolate) and SP1777 (a European isolate). Karlin et al. (1994) suggested that divergence at the genomic and protein levels can differ, given that the DNA sequence emphasizes sequence specificity. However, most studies employing phylogenetic tree construction are based on protein sequence comparisons (Karlin et al. 1994).
In contrast, the tree derived from the UL36 amino acid sequences (predicted from the amplified UL36 fragments) (Figure 3(b)) exhibited a branching pattern distinct from that obtained with the gD peptides. Specifically, some samples that were affiliated with the BHV-1.2 group in the gD peptide-based tree (marked in Figure 3(a) with red stars) were instead affiliated with the BHV-1.1 group in the UL36 peptide-derived tree (Figure 3(b)). Notably, the UL36-based tree showed that samples L/5, L/6, L/9, L/10, P-252, and P-375 sorted with BHV-1.1 isolate 216 II (accession no. KY215944; a strain from India) rather than with isolate NVSL, the Cooper strain, and the complete strain (accession nos. JX898220, KU1984801, and AJ004801, respectively; from the USA and Europe). Genetic divergence at gD (US6) and UL36 thus appeared to differ. The indel in UL36 permitted differentiation of the novel isolates into two groups, whereas the SNPs in gD suggested that the new viruses formed a single group (based on protein sequence). These findings are supported by those of Karlin et al. (1994), who highlighted the evolutionary divergence of US compared with that of UL. In particular, the tegument protein (UL36) appears to have evolved more rapidly (as inferred from aa sequences) than proteins encoded by the US region (e.g. gD) (Karlin et al. 1994). In the present study, we observed a mixture of point mutations affecting palindromic and inverted sequences in a fragment of the gD ORF, which is located in the US region. However, the fragment of the UL36 ORF examined in this study contains a transposon (Robinson et al. 2008) that may affect the structure and/or function of the UL36 tegument protein (Möhl et al. 2010), with resulting effects on cellular phenotypes (Casacuberta and González 2013).
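A distance-based tree of the kind shown in Figure 3 can be sketched with Biopython; the file name and alignment are hypothetical stand-ins, and the study's actual trees may have been built with different software and settings.

```python
# Sketch: neighbour-joining tree from aligned peptide sequences.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("ul36_peptides.fasta", "fasta")   # hypothetical aligned FASTA
dm = DistanceCalculator("blosum62").get_distance(aln)
tree = DistanceTreeConstructor().nj(dm)              # neighbour-joining tree
Phylo.draw_ascii(tree)
```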
Conclusions
All 10 novel Indonesian field isolates of BHV-1 were classified as members of the BHV-1.2 subtype based on the deduced amino acid sequence of the protein encoded by an amplified fragment of the gD ORF. In contrast, the sequence of an amplified fragment of the UL36 ORF suggested that the novel strains be grouped differently than indicated by the gD sequence analysis. Given that the variation in UL36 was higher than that in gD, analysis of the UL36 sequence may be suited for detecting diversity among BHV-1 isolates. We conclude that gD sequence analysis is suited to a rough classification of BHV-1 subtype, whereas the UL36 sequence permits detection of BHV-1 subtype polymorphisms.
Disclosure statement
No potential conflict of interest was reported by the authors. | 2019-04-26T13:49:47.247Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "a2f81a3faed48bd5dbdac72b8debd54898d7e26d",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/09712119.2019.1600521?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "34c5c49db077788215f0d0e1edb226eeb1c8fea1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
247597802 | pes2o/s2orc | v3-fos-license | Intra-articular Administration of Triamcinolone Acetonide in a Murine Cartilage Defect Model Reduces Inflammation but Inhibits Endogenous Cartilage Repair
Background: Cartilage defects result in joint inflammation. The presence of proinflammatory factors has been described to negatively affect cartilage formation. Purpose: To evaluate the effect and timing of administration of triamcinolone acetonide (TAA), an anti-inflammatory drug, on cartilage repair using a mouse model. Study Design: Controlled laboratory study. Methods: A full-thickness cartilage defect was created in the trochlear groove of 10-week-old male DBA/1 mice (N = 80). Mice received an intra-articular injection of TAA or saline on day 1 or 7 after induction of the defect. Mice were euthanized on days 10 and 28 for histological evaluation of cartilage defect repair, synovial inflammation, and synovial membrane thickness. Results: Mice injected with TAA had significantly less synovial inflammation at day 10 than saline-injected mice independent of the time of administration. At day 28, the levels of synovitis dropped toward healthy levels; nevertheless, the synovial membrane was thinner in TAA- than in saline-injected mice, reaching statistical significance in animals injected on day 1 (70.1 ± 31.9 µm vs 111.9 ± 30.9 µm, respectively; P = .01) but not in animals injected on day 7 (68.2 ± 21.86 µm vs 90.2 ± 21.29 µm, respectively; P = .26). A thinner synovial membrane was moderately associated with less filling of the defect after 10 and 28 days (r = 0.42, P = .02; r = 0.47, P = .01, respectively). Whereas 10 days after surgery there was no difference in the area of the defect filled and the cell density in the defect area between saline- and TAA-injected knees, filling of the defect at day 28 was lower in TAA- than in saline-injected knees for both injection time points (day 1 injection, P = .04; day 7 injection, P = .01). Moreover, there was less collagen type 2 staining in the filled defect area in TAA- than in saline-injected knees after 28 days, reaching statistical significance in day 1–injected knees (2.6% vs 18.5%, respectively; P = .01) but not in day 7–injected knees (7.4% vs 15.8%, respectively; P = .27). Conclusion: Intra-articular injection of TAA reduced synovial inflammation but negatively affected cartilage repair. This implies that inhibition of inflammation may inhibit cartilage repair or that TAA has a direct negative effect on cartilage formation. Clinical Relevance: Our findings show that TAA can inhibit cartilage defect repair. Therefore, we suggest not using TAA to reduce inflammation in a cartilage repair setting.
and IL-6, are found in joints with cartilage defects. 40,41 The presence of these mediators can negatively affect cartilage formation 12,17 ; however, it is not only the injury itself but also the surgery involved in cartilage repair strategies that will lead to an inflammatory response in the joint. 53 Yet, the role of inflammation in cartilage repair is not completely understood. In wound healing, inflammation is the first step in the repair process, and in fracture repair, the absence of proinflammatory mediators leads to impaired bone healing. 45 On the other hand, prolonged inflammation results in impaired wound healing and increased scar formation. 11 Furthermore, a selective proinflammatory cytokine inhibitor (IL-1Ra) reduced cartilage degeneration and synovitis in an intra-articular fracture model 24 and knee pain and dysfunction in a clinical study in anterior cruciate ligament (ACL) patients. 26 These findings have led to the view that inflammation is initially needed, but then needs to be resolved to achieve optimal tissue repair.
To reduce inflammation, an anti-inflammatory drug can be used. Triamcinolone acetonide (TAA) is a corticosteroid and a potent anti-inflammatory drug. It is often injected intra-articularly to reduce the symptoms of knee osteoarthritis. 32 Also, it is acknowledged that glucocorticoids promote chondrogenic differentiation of human bone marrow-derived stromal/stem cells (MSCs) by enhancing the expression of cartilage extracellular matrix genes. 8 However, there is controversy about its potential catabolic effects on the cartilage, 22 as TAA might increase cartilage loss in knee osteoarthritis, 30 inhibit glycosaminoglycan production, 2,21 and be chondrotoxic to chondrocytes. 9,42 Moreover, Wernecke et al 49 concluded in their systematic review that corticosteroids seem to have a time- and dose-dependent effect on articular cartilage. They suggested that a beneficial effect on cartilage has been described at a low dose and shorter duration, whereas detrimental effects were found with higher doses and longer duration. This highlights the need to understand clearly under what conditions TAA may be beneficial or harmful in endogenous cartilage defect repair.
In the current study, we investigated the effect of the anti-inflammatory drug TAA on endogenous cartilage repair in a murine cartilage defect model. Moreover, we investigated whether the timing of TAA treatment would influence inflammation and cartilage defect repair.
Animals
Male DBA/1OlaHsd mice were purchased from Envigo, part of Jackson Laboratory. After transportation, mice were allowed a 7-day acclimatization period. All mice regained normal behavior within 24 hours after transportation. Mice were housed under specific pathogen-free conditions in groups of 3 or 4 per individually ventilated cage. They were maintained under a 12-hour light-dark cycle at 21°C and fed a standard rodent diet with food and water ad libitum.
The study was carried out following the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Animal protocols and surgical procedures in this study were approved by the Institutional Animal Care and Use Committee (AVD101002016991, AEC 16-691-01; Erasmus MC University Medical Center, the Netherlands).
Experimental Outline
Full-thickness cartilage defects were surgically created as described below on day 0 in the left knee of 10-week-old male DBA/1 mice (N = 80) with a mean weight of 21.3 ± 1.8 g. Young mice were chosen because of their capability for consistent healing of the articular cartilage, whereas cartilage is not repaired in older mice. 10 Cartilage repair was histologically evaluated after 4 weeks. Injections with TAA or saline (control) were administered either 1 day or 7 days postoperatively to investigate the effect of the timing of anti-inflammatory therapy, resulting in 4 experimental groups (Figure 1). To obtain further insight into the repair process, cellularity and early tissue formation were assessed after 10 days. All animals received an intra-articular injection on both days 1 and 7: in the experimental groups, one of these injections was TAA; in the control group, both injections were saline. This setup gave a 50% reduction in the number of animals in the control group. Mice were randomly allocated to experimental groups. The number of animals per group was determined using a power calculation based on previous results. 47 The time points for injection at 1 day and 7 days postoperatively were chosen based on previous research. Synovitis levels were found to peak 7 to 14 days after induction by an inflammatory stimulus. 20 The chosen time points therefore represent 2 phases of inflammation: a very early stage of inflammation (1 day after defect creation) and around the peak (7 days after defect creation).
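As an illustration of the a priori sample size estimation mentioned above, the following sketch uses statsmodels with a placeholder effect size; the authors' actual calculation was based on their previous results and is not reproduced here.

```python
# Sketch: sample size for a two-sample comparison at alpha = 0.05, power = 0.8.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.2,  # assumed Cohen's d
                                   alpha=0.05, power=0.8,
                                   alternative="two-sided")
print(f"animals needed per group: {n_per_group:.1f}")
```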
Surgical Procedure
The procedure was performed during the daytime in the animal facility operating theater. One hour before surgery, all mice received a subcutaneous injection of buprenorphine hydrochloride 0.05 mg/kg (Temgesic; Indivior Europe Ltd) for analgesia. Mice were anesthetized with an isoflurane/O2 mixture and positioned on a warmed plate, and a full-thickness cartilage defect was surgically created, as previously described, by 1 experienced surgeon (W.W.) who was blinded to the treatment group. 10,47 In short, a full-thickness cartilage defect of 250 µm in length and 150 µm in width was made in the intercondylar notch along the patellar groove using a 25-gauge hypodermic needle (BD Bioscience). The procedure was regarded as successful when evident bleeding of the subchondral bone was observed. The joint capsule was closed with 2 resorbable 6-0 Vicryl sutures (Ethicon; Johnson & Johnson) and checked for patellar stability during flexion and extension. Full weightbearing was allowed after recovery from anesthesia. All mice recovered quickly from the anesthesia.
Intra-articular Injection
Intra-articular injections of 25 µg TAA suspended in sterile 0.9% saline (Kenacort; Bristol Myers Squibb) or sterile 0.9% saline alone (as vehicle control) were administered into the operated knee 1 day and/or 7 days after surgery. The concentration was based on previous rodent studies. 28,35,38 Under isoflurane/O2 anesthesia with the mouse in a supine position, the knee was fully extended and kept in place with surgical forceps. A small skin cut was made to expose the patellar tendon and joint capsule sutures. A 30-µL precision glass syringe (Hamilton Company) with a 30-gauge needle (BD Bioscience) was used to inject 6 µL into the joint space. In each experiment, all injections were performed by 1 person who was blinded to the treatment group (S.C. for the day 28 experiment and M.W. for the day 10 experiment). A second person (N.K.) assisted with the injections by preparing all mice and injection fluids for the author performing the injections, to ensure blinding of the injecting person.
Tissue Processing
At the experimental endpoint, animals were euthanized under anesthesia by cervical dislocation. Directly after euthanasia and removal of the skin, both hind legs were dissected below the hip and above the ankle. After removal of the subcutaneous fat and muscles, the knees were fixed in 4% formaldehyde for 7 days, followed by 2 weeks of decalcification in 10% ethylenediaminetetraacetic acid (EDTA), pH 7.4. Samples were further processed by dehydration and infiltration with paraffin. The femoral axis was adjusted to be upright against the embedding surface according to the method of Eltawil et al. 10 Subsequently, 6-µm sections were cut using a microtome (Leica RM-2135; Leica Microsystems). Per knee, 6 sections at a distance of 100 µm were used. As a landmark for the section level, a cross section of the growth plate in 4 points was used. Each section was stained with hematoxylin and eosin (Sigma-Aldrich) to assess cell morphology and reconstitution of the osteochondral junction. Additionally, thionine staining (0.04%; Sigma-Aldrich) was used to analyze the filling of the defect and matrix staining intensity, representative of the amount and distribution of glycosaminoglycan. Sections from the day 28 endpoint were immunostained with a primary antibody against the cartilage matrix protein collagen type 2 (mouse anti-human, 1:100 dilution, II-II6B3; Developmental Studies Hybridoma Bank). In short, antigen retrieval was performed using pronase 1 mg/mL (Sigma-Aldrich) in phosphate-buffered saline (PBS; Sigma-Aldrich), followed by hyaluronidase 10 mg/mL in PBS (Sigma-Aldrich). To prevent cross-reaction with mouse antigens, the primary collagen type 2 antibody was preincubated overnight with a biotin-SP F(ab′)2-labeled goat anti-mouse antibody (No. 115-066-062; Jackson ImmunoResearch Europe). After incubation, an alkaline phosphatase-avidin-labeled antibody was used (Biogenex Laboratories), which, combined with the Neu Fuchsine substrate, resulted in pink staining. An isotype immunoglobulin G1 monoclonal antibody was used as a negative control. All sections were stained in 1 batch to reduce staining variation between samples. Slides were imaged using a 40× objective on a slide scanner (NanoZoomer C9600-12; Hamamatsu Photonics) and processed using NanoZoomer Digital Pathology Image software (Hamamatsu Photonics). Knees with a patellar dislocation were excluded from further histological analyses.
Evaluation of Cartilage Repair
All knees were scored using the validated semiquantitative histological scoring system for articular cartilage repair developed by Pineda et al. 33 The Pineda score contains the following subdomains: filling of the defect, reconstitution of the osteochondral junction, matrix staining, and cell morphology. A score ranging from 0 to 4 is given to the subdomains filling of the defect, matrix staining, and cell morphology. Reconstitution of the osteochondral junction is scored between 0 and 2, resulting in a total histological score ranging from 0 to 14, where 0 is the best repair and 14 is the worst. Scoring was performed independently by 2 authors (M.A.W., N.K.) blinded to the experimental group. Per knee, 3 different representative sections were scored, resulting in an average score per knee. The average of the 2 observer scores was used per animal. Interobserver reliability for the cartilage repair scores based on absolute agreement was excellent (intraclass correlation coefficient [ICC], 0.96; 95% CI, 0.92-0.98). The filling of the original defect area was measured using NanoZoomer Digital Pathology Image software. Collagen type 2 deposition was measured as the area stained positive for collagen type 2 divided by the total filled area of the defect. The percentages were averaged as previously described.
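The interobserver ICC described above can be illustrated with the pingouin package; the data frame below is a hypothetical toy example. Pingouin reports all six ICC variants, from which the absolute-agreement, single-measures row would be the relevant one for the design described in the statistical analysis.

```python
# Sketch: interobserver intraclass correlation for two raters' scores.
import pandas as pd
import pingouin as pg

scores = pd.DataFrame({
    "knee":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],       # scored knees
    "rater": ["A", "B"] * 6,                              # two blinded observers
    "score": [6.0, 6.5, 10.0, 9.5, 3.0, 3.5,
              12.0, 11.5, 7.0, 7.5, 5.0, 5.5],            # Pineda-style scores
})
icc = pg.intraclass_corr(data=scores, targets="knee",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```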
Evaluation of Joint Inflammation
Joint inflammation was measured by the synovial membrane thickness and Krenn scores 27 on the lateral side of the patellofemoral joint using NanoZoomer Digital Pathology Image software. The lateral side was chosen to be most representative because of the arthrotomy and sutures on the medial side. Per knee, the synovial thickness was measured in 3 different sections at 3 different positions, and the average of these 9 measurements was calculated. Per mouse, the average of 2 observers was used for further statistical analyses.
Statistical Analysis
Pineda scores were considered nonparametric data. Differences between groups were assessed using a Kruskal-Wallis test with post hoc Dunn nonparametric comparison analysis. Filling of the defect and synovial thickness data were tested for normality using a Shapiro-Wilk test. All data were normally distributed, described as mean ± SD, and assessed using a 1-way analysis of variance, followed by post hoc Tukey honestly significant difference analysis. Correlation values were calculated using the Pearson test in case of normal data distribution. A Spearman rho test was used for nonparametric data. Interobserver reliability of all measurements is expressed as ICC. A 2-way mixed model based on absolute agreement for single measures was used. An ICC >0.75 is regarded as excellent. 25 All statistical tests were 2-tailed. A P value <.05 was considered statistically significant. SPSS Statistics package for Mac Version 24.0 (IBM Corp) was used for all analyses.
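A minimal sketch of the same statistical workflow in Python with SciPy is given below, assuming three illustrative groups; the numbers are made up, and the post hoc Dunn/Tukey tests and ICC calculation are only indicated in comments rather than implemented.

```python
import numpy as np
from scipy import stats

# Illustrative group data (not the study's measurements): synovial thickness per group (μm).
saline = np.array([111.9, 105.0, 118.2, 98.7, 120.3])
taa_d1 = np.array([70.1, 65.4, 80.2, 72.8, 68.9])
taa_d7 = np.array([68.2, 75.1, 60.3, 71.9, 66.4])

# Normality check per group (Shapiro-Wilk), as described above.
for group in (saline, taa_d1, taa_d7):
    stat, p = stats.shapiro(group)
    print("Shapiro-Wilk p =", p)

# Parametric route: one-way ANOVA; nonparametric route (e.g., for Pineda scores): Kruskal-Wallis.
print(stats.f_oneway(saline, taa_d1, taa_d7))
print(stats.kruskal(saline, taa_d1, taa_d7))

# Post hoc tests would follow (e.g., Tukey HSD via statsmodels, Dunn via scikit-posthocs),
# and interobserver reliability could be expressed as an ICC (e.g., pingouin.intraclass_corr).
```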
RESULTS
Of 80 animals, 79 reached the endpoint without signs of infection or abnormal behavior. One animal was lethargic and nonresponsive and had ruffled fur 1 day after surgery. Based on these signs and in consultation with the animal board, the animal was euthanized. In some of the operated mice, a limping gait pattern was observed in the first 3 days that resolved in all mice within 7 days. Some mice had a patellar dislocation (determined by histological examination) and were excluded from further histological analyses. Interestingly, the dislocations were not equally distributed over the groups. For the 28-day time point, 6 of 11 mice injected with TAA at day 1 had a patellar dislocation, compared with 1 of 11 mice in the saline control group (P = .02). Also, 6 of 11 mice injected with TAA at day 7 had a patellar dislocation compared with 2 of 11 mice in the saline control group. At the 10-day time point, only 1 mouse (injected with TAA 1 day after creating the defect) had a patellar dislocation, indicating that dislocations most likely developed after this time point.
Intra-articular TAA Injection Reduced Synovial Inflammation and Synovial Membrane Thickening
To evaluate the effect of TAA on local inflammation in the joint, the thickness of the synovial membrane and the Krenn scores were evaluated at days 10 and 28 after surgery. The synovium was thicker in knees that underwent full-thickness cartilage defect induction surgery than in nonoperated knees (Figure 2). Mice injected with TAA 1 day after surgery had a significantly thinner synovial membrane than control saline-injected mice at day 28 (70.1 ± 31.9 μm vs 111.9 ± 30.9 μm, respectively; P = .01) (Figure 2), but not at day 10 (124.7 ± 20.0 μm vs 131.3 ± 22.0 μm, respectively; P = .80) (Figure 2). The Krenn scores of the mice injected 1 day after surgery were lower in mice injected with TAA than in saline-injected control mice at day 10 (P = .017), but not at 28 days (P = .67). Mice injected with TAA 7 days after surgery had a thinner synovial membrane than saline-injected mice at day 10 (103.1 ± 25.2 μm vs 131.3 ± 22.0 μm, respectively; P = .02) (Figure 2A) and at day 28 (68.2 ± 21.86 μm vs 90.2 ± 21.29 μm, respectively; P = .26) (Figure 2B), although the day 28 difference did not reach statistical significance. The Krenn scores in the mice injected 7 days after surgery were significantly lower in the TAA-injected than in the saline-injected mice at day 10 (P < .0001) and at day 28 (P = .005).
Cell Ingrowth Was Not Inhibited by TAA Administration
At day 10, the defect was filled with undifferentiated cells with a spindle-shaped morphology ( Figure 3A). There was no difference in the area of the defect filled between the saline and TAA groups at day 10 ( Figure 3B). Also, the timing of TAA injection did not affect the filling of the defect ( Figure 3B). In addition, the cell density in the defect area was similar in all groups ( Figure 3C). This suggests that TAA did not affect migration and/or proliferation of cells in the defect.
Cartilage Repair Was Inhibited by TAA Treatment
The effect of TAA injection on cartilage repair was histologically evaluated. A total of 28 days after generation of the full-thickness defect, the tissue filling the defect consisted of chondrocyte-like cells surrounded by thionine-stained matrix (Figure 4A). The median Pineda cartilage repair score was similar in mice treated after 1 day with TAA and in control mice that received a saline injection (8.9 [IQR, 5.9-8.9] vs 8.3 [IQR, 5.0-8.7], respectively; P > .99) (Figure 4B). There were no differences between mice injected with saline, TAA at day 1, or TAA at day 7 on subdomains of the Pineda cartilage repair score: restoration of the osteochondral junction, matrix staining intensity, or cell morphology (see Appendix Figure A1, available in the online version of this article). However, a significantly lower percentage of the defect area was filled in mice treated with TAA after 1 day than in control mice injected with saline (76.4% ± 7.0% vs 90.4% ± 7.4%, respectively; Figure 4C). Moreover, a significantly lower percentage of the filled defect area stained positive for collagen type 2 in mice treated with TAA after 1 day than in control mice injected with saline (2.6% ± 1.5% vs 18.5% ± 9.6%, respectively; P = .01) (Figure 4D and Figure 5, A and B). In mice treated with TAA after 7 days, median cartilage repair scores at day 28 were similar to those of control mice injected with saline (8.3 [IQR, 6.5-11.3] vs 7.0 [IQR, 6-8.4], respectively; P = .44) (Figure 4E). The percentage of the defect area that was filled was also significantly lower in TAA-injected than in saline-injected control mice (67.8% ± 17.6% vs 85.55% ± 1.5%, respectively; P = .01) (Figure 4F and Figure 5, A and C). Also, a seemingly lower percentage of the filled defect area stained positive for collagen type 2 in mice treated with TAA after 7 days than in control mice injected with saline (7.4% ± 12.5% vs 15.8% ± 10.1%, respectively; P = .27) (Figure 4G), although this was nonsignificant because of 1 outlier.
A Moderate Correlation Was Observed Between Synovial Membrane Thickness and Defect Filling
To investigate the relationship between inflammation and cartilage repair, we correlated synovium thickness with defect filling (Figure 6). The thickness of the synovial membrane was moderately associated with the defect filling at day 10 (r = 0.42; P = .02) and day 28 (r = 0.47; P = .01). The thickness of the synovial membrane was not associated with cell density in the filled defect at day 10 (r = −0.08; P = .69). Also, the thickness of the synovial membrane and cartilage repair scores were not associated at day 28 (r = −0.18; P = .35) and day 10 (r = 0.006; P = .97). No associations were found between Krenn score and defect filling, probably because of the limited range and variation of Krenn scores (0-8).
DISCUSSION
Previous in vitro studies have shown that pro-inflammatory factors inhibit cartilage formation. However, there have only been a few studies exploring the role of anti-inflammatory treatment in cartilage defect repair in vivo. Therefore, the purpose of this study was to evaluate the effect of TAA, an anti-inflammatory drug, on endogenous cartilage defect repair. Our main finding was that intra-articular injection with TAA in a murine model for endogenous cartilage repair reduced synovial inflammation but also inhibited cartilage repair. After 28 days, we observed less filling of the defect in knees injected with TAA. After 10 days, defect filling was not significantly affected by TAA treatment. This observation might suggest that TAA does not affect the influx or proliferation of cells but negatively influences tissue formation in the defect.
To our knowledge, this is the first in vivo study evaluating the role of TAA in a model of endogenous cartilage defect repair. We observed less filling of the defect and less collagen type 2 deposition after 28 days in knees injected with TAA. Although there are no studies on the role of TAA in cartilage defect repair, TAA has been used to study the effect on cartilage in both healthy and osteoarthritic joints, showing conflicting results. One study found dose-dependent degenerative changes in the cartilage of healthy rabbit knees after 2 to 6 weeks of weekly 3-mg TAA injections. 31 Another study found more cartilage degeneration indicated by higher Mankin scores after injection of the extended-release form of TAA in rat knees that had surgical ACL transection and destabilization of the meniscus. 36 These findings can be considered in line with the reduced deposition of collagen type 2 and the reduced filling of the defect we observed. On the other hand, in a collagenase-induced osteoarthritis rat model, injection of TAA as bolus or as an extended-release formulation had an effect on cartilage degeneration. 34 When Frisbie and colleagues 13 administered 2 intra-articular doses of 12 mg TAA 13 and 27 days after surgical induction of osteochondral fractures in equine carpal bones, less cartilage degeneration, indicated by lower Mankin scores, was found. Interestingly, the effects were most pronounced if TAA was injected in the contralateral (uninjured) joint rather than the diseased joint, indicating that beneficial effects were most pronounced at very low concentrations. The extended presence of TAA in the joint and higher concentrations of TAA in the joint can be detrimental for the cartilage, as seen in the study of Rudnik-Jansen et al, 36 although this may depend on the type of pathology present in the joint.
In our study, mice injected with TAA had less synovial inflammation than saline-injected mice. This is in line with what is known in the literature about rodent arthritis models. 29,34 Furthermore, in some cases TAA injection even reduced the inflammation to the healthy nonoperated situation (as demonstrated by a Krenn score of 1), which is in line with other studies that reported less mononuclear cell infiltration and intimal hyperplasia in response to TAA. 13,27,39 We found a thicker synovial membrane to be associated with more filling of the defect. This association suggests that inflammation has a positive effect on the filling of the defect. In fracture repair research, knockout studies in mice have shown that the absence of inflammation-related proinflammatory molecules, such as TNF-a and cyclooxygenase 2, leads to a delay in bone healing. 15,52 This implies that inflammation is needed to initiate healing. On the contrary, excessive inflammation was shown to inhibit in vitro MSC chondrogenesis in a model with inflammatory factors present from osteoarthritic joints and from traumatically injured joints. 17,48 Reducing the inflammation by inhibition of IL-1a, oxozeaenol, or tofacitinib could partially restore this inhibitory effect on cartilage formation. 17,44 However, cartilage formation was shown to be inhibited when anti-inflammatory compounds such as a protein kinase inhibitor or TNF-a inhibitor were used at an early stage of chondrogenesis. 23,44 This might explain why we observed a decrease in collagen type 2 deposition, as we administered our anti-inflammatory compound at an early stage of new cartilage formation in the defect site. These findings highlight the challenging balance between inflammation and cartilage defect repair. We acknowledge, though, that inflammation is more comprehensive than synovial thickness and mononuclear cell infiltration only. Serological measurements of the synovial fluid and the composition of inflammatory cells inside the synovial membrane over time could help to better evaluate the role of joint inflammation in cartilage repair.
Cartilage repair scores and filling of the defect were slightly worse when the defect was treated after 7 days, albeit nonsignificantly. It is known that the presence of chronic proinflammatory factors can impair chondrogenesis and stimulate degeneration of newly formed cartilage. 12, 46 Saris et al 37 showed in an in vivo study in goats that cartilage repair scores were worse in surgical defects treated late compared with defects treated immediately after induction. After 7 days, the proinflammatory factors may have dropped to a lower level, reaching a more chronic inflammatory phase. 19 However, it is debatable whether TAA is the right tool to inhibit acute inflammation at these time points. Inflammation is a dynamic process that occurs not only between days 1 and 7, but throughout time. A study in horses that were intra-articularly injected with TAA showed that TAA remained present in the synovial fluid up to 14 days, whereas it was undetectable in serum after 48 hours. 5 This means that TAA administered at day 1 might inhibit inflammation up to 14 days and thereby not only the acute inflammation phase but also the chronic phase. 19 Also, the day 7 injection inhibits this phase, resulting in less inflammation after this time point. Therefore, it remains uncertain what the effect is if only the acute phase would be inhibited. Future studies should explore the role of selectively inhibiting the acute inflammation phase and the effect on cartilage repair. Based on our results, we can conclude that TAA inhibited cartilage repair regardless of the time of administration, and neither early nor late administration is advised.
TAA is known to inhibit both tissue outgrowth from ligaments and collagen synthesis in tenocytes, indicating that TAA can impair wound healing in many respects. 43,51 TAA injection in a rat destabilization-induced osteoarthritis model (ACL transection and partial meniscectomy) resulted in increased joint instability and subluxations in the longer term. 36 Although patellar dislocation occurs relatively frequently after surgical procedures in mice, 47 its frequency was significantly higher in TAA-injected animals (P = .004). Our results indicate that TAA might have a negative effect on wound healing of the arthrotomy site, thus resulting in patellar dislocation. Interestingly, on day 10 only 1 of 24 mice injected with TAA had developed a patellar dislocation, implying that patellar dislocations develop after this time point. For mice in the 10-day group, the endpoint was only 9 or 3 days after TAA injection, and most likely the arthrotomy sutures would still provide some support. Hence, this time point might have been too early to observe effects on wound healing. Another explanation might be that TAA reduced pain in the inflamed joint 1 to 4 weeks after administration. 16 This pain reduction might allow mice in the TAA group to move more freely throughout the study period, causing more patellar dislocations than in the saline-injected group.
For translation of these results to clinical practice, some limitations of our study need to be considered. First, with the model chosen we could only study the effect of reducing inflammation on cartilage repair that relies on the body's endogenous repair capacity. It is unclear what the effect will be on procedures such as articular chondrocyte implantation (ACI) and/or transplantation strategies, although based on our study it seems that TAA mainly affects the production of extracellular matrix. This indicates that TAA could also negatively influence repair using approaches such as ACI. Moreover, for this study we specifically selected DBA mice at 10 weeks of age, as this strain at this age has been shown to be most optimal to study modulation of the endogenous cartilage repair capacity. It is unclear how the results of these young adult mice will translate to a young or middle-aged patient population with a cartilage defect. Additionally, although the dose regimen we tested was consistent and translated from clinical guidelines, we cannot exclude that another dose, timing, and/or frequency would have generated a different effect. The model furthermore differs from the clinical rehabilitation, as the animals were allowed to move freely after surgery. This could potentially affect the stability of the early repair tissue in the defect. In a clinical situation, the patient would be nonweightbearing for the first few weeks after a cartilage repair procedure. Finally, our focus was mainly on cartilage repair and inflammation of the joint, and we have not measured pain. It would, however, be interesting to see if TAA could play a role in the postoperative care to reduce pain after injuring the articular cartilage.
CONCLUSION
Intra-articular injection of TAA reduced synovial inflammation but inhibited cartilage repair. This implies that inhibition of inflammation with TAA might not be the right direction to move cartilage repair forward. The thin line between the level of inflammation required and optimal cartilage defect repair underscores the importance of incorporating rational control of inflammation into cartilage repair strategies. The use of TAA to reduce inflammation after cartilage defect treatment to improve cartilage repair is therefore not warranted. | 2022-03-23T06:19:07.676Z | 2022-03-22T00:00:00.000 | {
"year": 2022,
"sha1": "4ad62247808be643ff7fe0c4960dc247905fe50c",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/03635465221083693",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "bfe2467cd3997d4eff548bd60b483cc1aa6fd05e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256437736 | pes2o/s2orc | v3-fos-license | Computational study of the interaction of the psychoactive amphetamine with 1,2-indanedione and 1,8-diazafluoren-9-one as fingerprinting reagents
In this study, we used computational methods to investigate the interaction of amphetamine (AMP) with 1,2-indanedione (IND) and 1,8-diazafluoren-9-one (DFO) so as to understand whether AMP can be detected in latent fingerprints using either of these reagents. The results show that the binding energies of AMP with IND and DFO were enhanced by the presence of amino acid from −9.29 to −12.35 kcal mol−1 and −7.98 to −10.65 kcal mol−1, respectively. The physical origins of these interactions could be better understood by symmetry-adapted perturbation theory. The excited state properties of the binding structures with IND demonstrate distinguishable absorption peaks in the UV-vis spectra but zero fluorescence. Furthermore, the UV-vis spectra of the possible reaction products between AMP and the reagents reveal absorption peaks in the visible spectrum. Therefore, we could predict that reaction of AMP with IND would be observable by a reddish colour while with DFO, a colour change to violet is expected. To conclude, the reagents IND and DFO may be used to detect AMP by UV-vis spectroscopy and if their reactions are allowed, the reagents may then act as a potentially rapid, affordable and easy colorimetric test for AMP in latent fingerprints without destruction of the fingerprint sample.
Introduction
Psychoactive substances, more commonly known as drugs of abuse, are substances that affect mental health. The major drugs of abuse classified by the United Nations Office on Drugs and Crime (UNODC) include cocaine, cannabis, opioids, and amphetamines (AMPs). AMPs are the second most common drugs of abuse 1 with a rise of 309% for seizures from the year 2018 to the year 2019 as per the world drug report 2021. 2 This increase in drug availability presents a serious threat to human health, family harmony, and social stability. 1 Hence, in accordance with the sustainable development goals (SDGs), detection of drugs is essential to protect the society from the drug scourge. 3 Various detection methods and sensors for detecting AMP in matrices such as oral fluids, blood, and urine are commercially available. 4 Several researchers have investigated the possibility of detection of drugs in latent fingerprints [5][6][7][8][9][10][11] as the latter is unique to an individual; hence, the presence of a drug in the fingermark can be considered as strong evidence in drug cases. 12 Another advantage of the latent fingerprint is that it is non-invasive as compared to matrices such as blood. It is thus a preferred sampling matrix in point of care analysis 13,14 and has been proposed for roadside testing of drugs. 11 AMP can be detected in fingermarks using fluorescent probes, 15,16 time-of-flight secondary ion mass spectrometry, 17 lateral flow detection 15 and infrared spectral imaging. 18 These methods are less effective in the presence of powder mixtures in the sample or in the presence of enough fluorescent nanoparticles. In addition, the mass spectrometry method is destructive and requires different sample preparation for each target analyte. 18 In order to overcome these limitations, there is a need for novel drug detection methods to improve portability, affordability, detection times and limits as well as methods which do not destroy the latent fingerprint samples.
Novel drug detection methods exist; for instance, fluorescent films made of an o-carborane derivative of perylene bisimide (PBI-CB) are capable of detecting the drugs AMPs, magu (a combination of AMP and caffeine), 19 caffeine, phenobarbital and ketamine. 20 To support the experimental results, density functional theory (DFT) using the B3LYP/6-31G(d) method revealed that the drugs are suitable quenchers, as the energies of their HOMO orbitals are higher than that of PBI-CB. These differences ensure that the hole of the fluorophore after excitation can accept an electron from the drug molecule, resulting in the observed fluorescence quenching. In addition, DFT studies enabled the prediction of many potential sensors for AMP from the low binding energies, which prove that the sensors are effective for detection of the drug. [21][22][23][24][25][26] Excited state properties of the drugs and sensors obtained by time-dependent DFT (TD-DFT) help to explain the fluorescence possibility and provide UV-vis spectra which can be used as guidance for future experimental works on the specific sensors. [26][27][28] In this study, we used computational methods to analyse the possible use of fingerprinting reagents (FRs) to detect AMP. The FRs, namely 1,2-indanedione (IND) and 1,8-diazafluoren-9-one (DFO), as illustrated in Fig. 1, enable the detection of latent fingerprints on porous surfaces such as paper. 29,30 These FRs react with amino acids in latent fingerprints to form a product which fluoresces under light at specific wavelengths, thereby enabling efficient visualisation of the fingerprint-ridge details. However, the presence of drugs in latent fingerprints and their influence on IND and DFO have not been analysed to date. These reagents may thus be studied as a potential sensor for AMP since they have the benefit of being affordable and not destroying the latent fingerprint samples compared to mass spectrometry. 18 Hence, the aim of this study was to use computational methods to investigate the binding effect of AMP on IND and DFO and to identify their subsequent possible detection by UV-vis spectroscopy and fluorescence. In addition, besides AMP, amino acids are also present in the latent fingerprint sample and they may interfere with the detection of the drug. To account for interferences by amino acids, we explored the binding of alanine (ALA) with AMP, IND and DFO as well as their resulting UV-vis and fluorescence spectra. ALA was chosen since it is among the most abundant amino acids in latent fingerprints 31 and it was also considered in previous studies. 30,[32][33][34] At a temperature of 100°C or 160°C, ALA reacts with IND or DFO to form a product which is responsible for fluorescent detection of latent fingerprints. 29 AMP present in the sample may also react with IND or DFO under similar conditions. Thus, we also studied the possible reaction products (RPs) of AMP and the FRs as well as their effect on UV-vis and fluorescence.
Computational methodologies
The overall procedure followed throughout this article is illustrated in Scheme 1. Firstly, the ground state structures of the FRs, ALA and AMP were optimised. The most stable conformer of AMP was identified by a conformational search performed by varying the dihedral angles along the a and b axes independently, 35,36 by 10 steps with an increment of 20°, using the B3LYP/6-31G(d,p) method. 36 The a axis represents the C-C bond while the b axis represents the C-N bond. We then examined the binding possibilities of AMP with the FRs and ALA, following which we used the symmetry-adapted perturbation theory (SAPT) approach to check which of the principal physical contributions mainly influence the interaction energies. SAPT also helps to understand the nature of the interactions in a way that is not possible experimentally. 27 In addition, the possible reaction products (RPs) between AMP and the FRs were investigated according to the known RPs between ALA and the FRs. 33,34 The resulting RPs and binding complexes were subjected to UV-vis spectra simulations in order to verify whether the interactions between AMP and IND or DFO provide significant changes that could be practically employed during the detection of AMP. Furthermore, the UV-vis and fluorescence spectra of RPs of ALA and AMP with FRs were analysed.
All the geometry optimisations and binding energy evaluations were performed in the gas phase by the B3LYP-D3/6-31G(d,p) 37,38 method. Grimme's D3 dispersion correction was shown to increase the accuracy of non-covalent interactions. 27,[38][39][40][41] The optimised structures were thereafter confirmed to be real minima by performing frequency calculations using the same method. All structures showed positive force constants for all the normal modes of vibration. The frequencies were then used to evaluate the thermal (T = 298 K) vibrational corrections to the enthalpies and Gibbs free energies within the harmonic oscillator approximation and the zero-point vibrational energy (ZPE). The ZPE was included with the total energy in calculations of the binding energy (E_b), which was evaluated by

E_b = E(AMP + FR) − E(AMP) − E(FR) + E(BSSE),

where E(AMP + FR) is the total energy of the complex formed between AMP and the FR; E(AMP) is the total energy of AMP; E(FR) is the total energy of the same FR; and E(BSSE) corresponds to the basis set superposition error (BSSE) correction as introduced by Boys and Bernardi 42 in the counterpoise approach.
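As an illustration of how such a counterpoise-corrected binding energy is evaluated from the individual single-point energies, the short Python sketch below uses placeholder energies in hartree; the values are invented, and the sign convention of the BSSE term follows the reconstruction of the equation given above.

```python
# Counterpoise-corrected binding energy from DFT single-point energies.
HARTREE_TO_KCAL = 627.5094740631  # conversion factor, hartree -> kcal/mol

E_complex = -766.412345   # E(AMP + FR), hartree (assumed placeholder)
E_amp     = -366.198765   # E(AMP), hartree (assumed placeholder)
E_fr      = -400.198000   # E(FR), hartree (assumed placeholder)
E_bsse    = 0.000800      # counterpoise correction, hartree (assumed placeholder)

# E_b = E(AMP+FR) - E(AMP) - E(FR) + E(BSSE); ZPE corrections would be folded
# into the complex and monomer energies as described in the text.
E_b = (E_complex - E_amp - E_fr + E_bsse) * HARTREE_TO_KCAL
print(f"Binding energy: {E_b:.2f} kcal/mol")
```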
In addition, to better understand the interaction between AMP and the FR, we analysed the decomposition of the interaction energy by single-point computations using SAPT. SAPT is a method to calculate non-covalent interactions and the computations were run on the PSI4 program. 43,44 The SAPT0 45,46 approach was considered along with the jun-cc-pVDZ 47 basis set since Tomić and coworkers 27 confirmed that this method is accurate to model the AMP precursor ephedrine binding with a fullerene sensor. SAPT0 is the simplest and most inexpensive SAPT method that essentially treats the monomers at the Hartree-Fock level and appends explicit dispersion terms obtained from second-order perturbation theory to the electrostatic, exchange, and induction terms from the HF dimer treatment. 48 SAPT0 does not consider intramolecular correlation but treats the intermolecular interaction to second order, which is accurate enough to describe the non-covalent interaction in a dimer.
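A minimal Psi4 input for a SAPT0/jun-cc-pVDZ calculation of this kind might look as follows; the dimer geometry is a generic placeholder (a water dimer) rather than the actual AMP-FR complex, and the SAPT variable names printed at the end are those exposed by recent Psi4 releases, which should be checked against the version actually used.

```python
import psi4

# Two fragments of the dimer are separated by "--"; coordinates are placeholders only.
dimer = psi4.geometry("""
0 1
O  0.000  0.000  0.000
H  0.757  0.586  0.000
H -0.757  0.586  0.000
--
0 1
O  0.000  0.000  3.000
H  0.757  0.586  3.000
H -0.757  0.586  3.000
units angstrom
""")

psi4.set_options({"scf_type": "df", "freeze_core": True})
psi4.energy("sapt0/jun-cc-pvdz")

# Decomposed components (names as in recent Psi4 releases; an assumption to verify):
for key in ("SAPT ELST ENERGY", "SAPT EXCH ENERGY",
            "SAPT IND ENERGY", "SAPT DISP ENERGY", "SAPT TOTAL ENERGY"):
    print(key, psi4.variable(key))
```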
We performed non-covalent interaction (NCI) analysis of the binding structures, using the Multiwfn program, [49][50][51] to characterise the presence of hydrogen bonds. The NCI plots, which are shown in the ESI in Fig. S9, † enable visualisation of non-covalent interactions between molecular fragments as real-space surfaces. The λ2 symbol represents the interaction type within the NCI framework. For attractive interactions, λ2 is negative, van der Waals interactions have their λ2 values close to zero, while steric repulsions have positive λ2 values. The molecules were visualised in 3D using the Visual Molecular Dynamics program. 52 The Gnuplot 4.2 53 program and Ghostscript 54 interpreter were used to generate the 2D plots.
Subsequently, all the species were subjected to TD-DFT computations so as to obtain excited state properties as well as the UV-vis and fluorescence spectra. The long range-corrected CAM-B3LYP 55 functional was utilised as more reliable results are obtained compared to the pure B3LYP functional. The 6-31+G(d,p) basis set was employed as the diffuse function enhances the results of binding molecules. 27 The TD-DFT as well as DFT computations were performed using the Gaussian16 56 program running on SEAGrid. 57-60 The GaussView 6 61 program was used to visualise the results and draw the structures. The figures in this article were generated using CYLview 62 while the spectra were simulated using Multiwfn. 51

3 Results and discussion
Optimised geometries
Given that AMP is a relatively large molecule with several rotatable bonds, a conformational search was performed in order to obtain the most stable conformer. The results of the conformational search of AMP are presented in the ESI (Fig. S1 †). The relative stabilities of each conformer are in agreement with those presented by Brause et al. 35 and Bruni et al. 36 The most stable conformer of AMP is in agreement with the experimental solid-state structure 36 and hence was used for further computations, as in similar studies whereby the authors investigated the interaction of AMP with potential sensors. 22,24,26,63,64 Subsequently, we used the optimised geometries of AMP, ALA, IND and DFO to generate their respective molecular electrostatic potential surface (MEP) maps. These maps, as shown in Fig. 2, depict the variably charged areas and the electron-deficient and electron-rich sites of each molecule. The red regions indicate the minimum electrostatic potential (rich electron density) with a negative electrostatic potential, while the blue regions show the highest electrostatic potential. 65 These data can be used as a guide to determine how the molecules interact with one another and hence are useful to analyse binding interactions.
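The dihedral-driving idea behind the conformational search described above can be sketched as below with RDKit and the MMFF force field; this is only a cheap stand-in for the B3LYP/6-31G(d,p) scan actually performed, and the atom indices defining the scanned bond are placeholders that would need to be checked on the embedded structure.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolTransforms

# Amphetamine from SMILES; explicit hydrogens are needed for a meaningful geometry.
mol = Chem.AddHs(Chem.MolFromSmiles("CC(N)Cc1ccccc1"))
AllChem.EmbedMolecule(mol, randomSeed=1)
conf = mol.GetConformer()

dihedral_atoms = (0, 1, 3, 4)   # placeholder indices for one rotatable bond (assumption)
energies = []
for step in range(10):
    angle = -180.0 + step * 20.0
    rdMolTransforms.SetDihedralDeg(conf, *dihedral_atoms, angle)
    props = AllChem.MMFFGetMoleculeProperties(mol)
    ff = AllChem.MMFFGetMoleculeForceField(mol, props)
    energies.append((angle, ff.CalcEnergy()))

# Lowest-energy grid point as a starting guess for a subsequent DFT optimisation.
print(min(energies, key=lambda t: t[1]))
```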
Binding structures. We analysed the binding interactions of AMP with the FRs and ALA. The various binding orientations explored according to the MEP maps are presented in the ESI. † Nevertheless, only the optimised structures with the highest binding energies (more negative values) are shown in Fig. 3 with their respective thermodynamic parameters in Table 1.
Therefore, it can be predicted that the ALA present in a fingerprint sample will not interact with the FRs, while a sample of AMP will bind with the FRs.
We also monitored the possible interactions between AMP and ALA since a fingerprint sample containing AMP will also contain ALA. The binding structure is given in Fig. 3, referred to as "AMP-ALA". AMP-ALA has a higher binding energy (−14.94 kcal mol −1) as well as stronger hydrogen bonds, with its minimum bond distance being 1.66 Å, compared to AMP-FRs and ALA-FRs as discussed above. Subsequently, we investigated the interactions between this AMP-ALA binding structure with the FRs. The resulting highest binding structures are AMP-ALA-IND and AMP-ALA-DFO as depicted in Fig. 3. In general, the hydrogen bonds between AMP and ALA are observed to have decreased upon binding with the FRs, especially for the N-H bond, which shows a decrease in bond length from 1.66 Å in AMP-ALA to 1.52 Å and 1.61 Å in AMP-ALA-IND and AMP-ALA-DFO respectively. The bond distances of intermolecular bonds between AMP and the FRs also show a decrease when bound to ALA. The minimum bond distance between AMP and IND decreases from 2.88 Å to 2.32 Å in AMP-ALA-IND, while with DFO a decrease from 2.35 Å to 2.22 Å is noted.
In addition, it can be observed that ALA binding to AMP has lowered the energy required to interact with the FRs. That is, ALA-AMP interacts more strongly with the FRs than ALA or AMP alone. The E_b of AMP with IND decreased from −9.29 to −12.35 kcal mol −1 when bound to ALA, while the E_b of AMP with DFO decreased from −7.98 to −10.65 kcal mol −1. These binding energies as well as the ΔG and ΔH values are in agreement with literature on potential sensors to detect AMP. 22,24,26,64,66
SAPT
To gain better insights into the interactions between AMP and the FRs, SAPT energies have been computed. SAPT provides a decomposition of the interaction energy into physically meaningful components. The individual components are plotted in Fig. 4 and summarised in Table S3 in the ESI. † The results reveal that the interaction energy typically has the most significant contributions from the electrostatic (El) attraction and exchange (Ex) repulsion terms, with the repulsive Ex term outweighing the attractive El term. Notably, the exceptionally high binding affinity of AMP-ALA can be attributed to its strong El component in addition to the Ex contribution. The total SAPT interaction energies in Table S3 † are comparable to the binding energies, E_b, in Table 1, with a correlation coefficient R 2 value of 0.9.
Optical properties
3.3.1 Absorption properties. The UV-vis spectra of IND, DFO and their binding structures with AMP and ALA were simulated using the TD-DFT/CAM-B3LYP/6-31+G(d,p) method. The spectra involving IND and DFO are illustrated in Fig. 5(a) and S10(a) † respectively. The absorption peaks can be explained by the values of the maximum absorption wavelength (λmax), excitation energy (E_x), oscillator strength (f), and significant molecular orbital (MO) assignments given in Table S3. † Moreover, frontier molecular orbital (FMO) analysis cannot be considered for analysing the character of the electronic states owing to the simultaneous non-negligible contributions of multiple MO pairs, as can be seen in Table S3. † This difficulty can be eliminated by calculating natural transition orbitals (NTOs), which separately perform unitary transformations of the occupied and virtual MOs to find a compact orbital representation for the electronic transition density matrix, so that only one or a very few orbital pairs have dominant contributions. Therefore, assigning the dominant configurations of the electronic states will depend on NTO analysis as visualised in Fig. 5(b) and S10(b). † 68 In addition, shifts of absorption peaks greater than 5 nm will be considered experimentally detectable based on the report by Tomić et al., 27 in which they predicted the detection of the AMP precursor ephedrine by its potential sensor fullerene.
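The continuous spectra discussed in this section are obtained by broadening the computed stick transitions (λmax, f); a minimal Python sketch of such a broadening is given below. The peak positions, strengths and the 0.5 eV Gaussian width are assumed placeholder values, and Multiwfn's exact broadening convention may differ.

```python
import numpy as np

# Hypothetical TD-DFT output: excitation wavelengths (nm) and oscillator strengths.
peaks_nm = np.array([172.0, 259.0])
osc_str  = np.array([0.43, 0.19])

lam = np.linspace(150, 650, 1000)            # wavelength grid, nm
fwhm_ev = 0.5                                 # assumed Gaussian broadening (FWHM, eV)
sigma_ev = fwhm_ev / (2 * np.sqrt(2 * np.log(2)))

# Broaden each stick as a Gaussian in the energy domain, then sum; 1239.84 eV*nm converts units.
energies_ev = 1239.84 / peaks_nm
grid_ev = 1239.84 / lam
spectrum = sum(f * np.exp(-((grid_ev - e) / sigma_ev) ** 2 / 2)
               for f, e in zip(osc_str, energies_ev))

print(lam[np.argmax(spectrum)])  # wavelength of the simulated absorption maximum
```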
The UV-vis spectrum of IND (Fig. 5(a)) exhibits two distinct absorption peaks at λmax of 172 and 259 nm with f values of 0.43 and 0.19, respectively. The peaks are theoretically assigned to NTOs 38 and 39 having mainly π → π* character. In the presence of AMP, the heights of the absorption peaks are lowered to f values of 0.15 and 0.14 and a notable shift in λmax is observed. The first absorption peak of AMP-IND is located at a λmax of 187 nm, which represents a red-shift of 15 nm. However, the second absorption peak, located at a λmax of 249 nm, shows a blue-shift of 10 nm. The computational results indicate that shifts of the absorption peaks can be easily noticed and employed for the detection of AMP. Similarly, the spectra for DFO compared to the AMP and ALA binding structures with DFO are presented in Fig. S10(a). † The UV-vis spectrum of DFO exhibits two distinct absorption peaks at λmax of 218 and 267 nm as well as a shoulder peak at 189 nm, in agreement with literature. 69 This shoulder peak is absent in the binding structures. AMP-DFO displays absorption peaks at 219 and 268 nm, which are red-shifts of only 1 nm compared to the peaks of DFO (Table 2).
The FRs and the binding structures, namely AMP-IND, AMP-ALA-IND, AMP-DFO, ALA-DFO and AMP-ALA-DFO, have their values of oscillator strength, f, equal to zero. This implies that the mentioned structures will not fluoresce. In addition, the negligible f value of 0.0001 for ALA-IND predicts that no emission exists. However, AMP on its own fluoresces at a wavelength of 240 nm. Despite this, since AMP is predicted to bind with the FRs, we conclude that AMP cannot be detected using IND or DFO by fluorescence.
Reaction products
ALA present in latent fingerprints reacts with IND or DFO at a temperature of 100°C or 160°C. It is the reaction product formed which fluoresces to reveal the fingerprint details. 29 ALA reacts with IND to form a pink product, which we will refer to as P ALA-IND, while the reaction between ALA and DFO yields a reddish product (P ALA-DFO) under normal light. 29 AMP present in the latent fingerprint sample may also react with IND and DFO at the same temperatures. We hereby study the possible reaction products of AMP-IND (P AMP-IND) and AMP-DFO (P AMP-DFO) and their effect on the UV-vis spectra as well as on fluorescence.
We investigated the possible products which may form due to the reaction of the FRs with ALA according to the reaction between ninhydrin and an amine. 70 In the scheme presented by Sudalaimani et al., 70 only the end product of the reaction between the amines and ninhydrin was studied. Ninhydrin reacts with ALA through a similar reaction pathway as IND and DFO to form end-products which are responsible for the fluorescent detection of fingerprints. Hence, we studied the possible end-products of the reaction between AMP and IND (P AMP-IND) and AMP and DFO (P AMP-DFO) according to the reaction in Fig. 6. The optimised geometries of these RPs are presented in the ESI (Fig. S11 and S12). † The most stable optimised structures were considered to analyse their optical properties.
Optical properties
Absorption properties. The UV-vis spectra of IND, DFO and their reaction products with AMP and ALA were simulated using the TD-DFT/CAM-B3LYP/6-31+G(d,p) method. The spectra involving IND and DFO are illustrated in Fig. 7(a) and S16(a) † respectively. The absorption peaks can be explained by the significant MO assignments and, as observed in Table S4, † only the absorption peaks in the visible spectrum, that is, having λmax higher than 380 nm, occur due to the electron transition from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO). For the other absorption peaks in the UV spectra, the character of the electronic states results from simultaneous non-negligible contributions of multiple MO pairs, which is why NTOs were calculated, as visualised in Fig. 7(b) and S16(b). † The UV-vis spectrum of P ALA-IND, illustrated in Fig. 7(a), has three distinct absorption peaks with heights higher than the two peaks of IND. The first peak is located at a λmax of 188 nm, which is a red-shift of 16 nm compared to the first peak of IND, while the second peak at a λmax of 251 nm is a blue-shift of 8 nm. The third absorption peak at 469 nm is in agreement with experimental literature in which the results showed that P ALA-IND is well excited in the visible spectrum of 420 to 510 nm. 34,71 The experimentally observed pink colour can also be confirmed by the peak at λmax of 469 nm absorbing in the blue region of the visible spectrum, which occurs due to the transition from the HOMO to the LUMO orbitals. Conversely, the UV-vis spectrum of P AMP-IND reveals two distinct absorption peaks. The heights of the peaks are lower and show red-shifts when compared to the peaks of P ALA-IND. The first peak of P AMP-IND, located at a λmax of 253 nm with f = 0.20, is a shift of 2 nm, while the second peak, located at 511 nm with f = 0.31, is a significant shift of 42 nm. In fact, the latter peak of P AMP-IND absorbs light in the green region of the visible spectrum; hence, P AMP-IND is theoretically predicted to be perceived as a reddish colour under normal light. The notable colour change and red-shifts of the absorption peaks of P ALA-IND and P AMP-IND compared to IND can be explained by the major electron transition from the HOMO to the LUMO characterised by π → π* excitation. These computational results indicate that, in addition to the noticeable shifts of the absorption peaks and the predicted colour change, IND can be employed for the detection of AMP without interference by ALA.
Likewise for the second reagent DFO, the UV-vis spectra of its reaction products with ALA and AMP were analysed. The UV-vis spectrum of P ALA-DFO, illustrated in Fig. S13(a), † has two distinct absorption peaks and a shoulder peak, all red-shifted compared to the absorption peaks of DFO. The shoulder peak is located at a λmax of 233 nm, which is a shift of 44 nm. The first peak at 257 nm is a shift of 39 nm, while the second peak at a λmax of 472 nm is a significant shift of 205 nm. The absorption peaks as well as the experimentally observed reddish colour are in agreement with literature. 72 The reddish colour is confirmed by the π → π* excitation from the HOMO to the LUMO orbitals of the peak at λmax of 472 nm, which corresponds to absorption of the green wavelengths in the visible spectrum.
The UV-vis spectrum of P AMP-DFO, presented in Fig. S13(a), † reveals two distinct absorption peaks and a shoulder peak. The heights of the peaks are lower, with f of 0.45, and show red-shifts when compared to the peaks of P ALA-DFO (f = 0.90). The first peak, located at a wavelength of 260 nm, is a shift of 3 nm, while the second peak, located at 568 nm, is a significant shift of 42 nm. The shoulder peak at 319 nm is a shift of 86 nm. The peak at λmax of 568 nm corresponds to absorption in the yellow region of the visible spectrum, occurring due to π → π* excitation from the HOMO to the LUMO orbitals. P AMP-DFO is therefore predicted to be perceived as a violet colour under normal light. These computational results indicate that shifts of the absorption peaks could be easily noticed in addition to the predicted colour; hence, DFO can be employed for colorimetric or UV-vis detection of AMP without interference by ALA.
Fluorescence properties of reaction products. The fluorescence spectra obtained for both P ALA-IND and P ALA-DFO, given in the ESI, † are in agreement with literature. 29,[72][73][74] However, P AMP-IND as well as P AMP-DFO reveal an absence of fluorescence. This can be explained by comparing the structural features of the ground and excited states of the reaction products (ESI †). For P ALA-IND and P ALA-DFO, the torsion angles increase from Φ = 22° and −23° to Φ = 30° and −33° upon excitation, respectively, while for P AMP-IND (Φ = −93° to −37°) and P AMP-DFO (Φ = −32° to 0.5°), the ground state geometries tend to planarity in the excited state. Hence, the fluorescence of P ALA-IND and P ALA-DFO can be attributed to the increase in torsion angle upon excitation, while the absence of fluorescence for P AMP-IND and P AMP-DFO is due to the reduction of the torsion angle. We therefore conclude that the presence of AMP in a fingerprint does not affect its fluorescence detection by IND and DFO.
Conclusions
The interaction and excited state properties of AMP with IND and DFO were investigated to predict the detection of AMP by UV-vis and fluorescence when using either of these reagents. We used the DFT B3LYP-D3/6-31G(d,p) method to compute the binding interactions between AMP and the FRs as well as with ALA, which is present in latent fingerprints. The results showed that AMP has low binding energies of −9.29 kcal mol −1 with IND and −7.98 kcal mol −1 with DFO. These binding energies were enhanced to −12.35 and −10.65 kcal mol −1, respectively, in the presence of ALA. These binding energies are similar to those of AMP binding to other potential sensors in literature. 22,64 Subsequently, we computed the excited state properties of these binding structures using the TD-DFT method. The simulated UV-vis spectra show that the absorption peaks of AMP binding to IND show shifts of more than 5 nm. Therefore, it can be predicted that IND can be used to detect AMP by UV-vis spectroscopy. However, detection of AMP using DFO may not be possible since the shifts in absorption peaks compared to DFO were less than 5 nm and hence may not be distinguishable experimentally. The fluorescence results nevertheless indicate zero fluorescence for the binding of both reagents with AMP.
Fluorescence detection of fingerprints is due to the product of the reaction between ALA and the FRs. Hence, we also investigated the possible reaction products between AMP and the FRs. The simulated UV-vis spectra reveal large shifts in the absorption peaks as well as peaks in the visible spectrum. The colours of the products were thus predicted according to the simulated spectra and the experimentally observable colour changes of P ALA-IND and P ALA-DFO. The colours of P AMP-IND and P AMP-DFO would potentially be reddish and violet, respectively. Therefore, IND and DFO may act as a colorimetric test to detect AMP if the reaction is enabled.
Our findings are helpful for experimentalists and forensic analysts to enable the detection of AMP in fingerprints by using FRs. The FRs IND and DFO do not destroy the sample, as they are widely used for fingerprint detection, and identifying drugs in a fingerprint using affordable and available reagents would be financially beneficial as well as provide strong proof in drug cases.
As future work, the formation of the coloured reaction products between AMP and the FRs needs to be investigated. In addition, experimental work will provide information on the analysis time and accuracy of this method which is a step towards better detection of drugs and hence, deterring the rise in drug abuse and trafficking.
Conflicts of interest
There are no conflicts to declare. | 2023-02-01T16:27:20.261Z | 2023-01-24T00:00:00.000 | {
"year": 2023,
"sha1": "d6a82cf5fae896c65b94fdd3478b33b19ed18e03",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1039/d2ra07044h",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b3e2ac91be403e4f52ed4c57e23b7dafee29a223",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54646178 | pes2o/s2orc | v3-fos-license | Memory and Asset Pricing Models with Heterogeneous Beliefs
The paper discusses the role of memory in asset pricing models with heterogeneous beliefs. In particular, we were interested in how memory in the fitness measure affects stability of evolutionary adaptive systems and survival of technical trading. In order to obtain an insight into this matter two cases were analyzed; a two-type case of fundamentalists versus contrarians and a three-type case of fundamentalists versus opposite biases. It has been established that increasing memory strength has a stabilizing effect on dynamics, though it is not able to eliminate speculative traders’ short-run profit seeking behaviour from the market. Furthermore, opposite biases do not seem to lead to chaotic dynamics, even when there are no costs for fundamentalists. Apparently some (strong) trend extrapolator beliefs are needed in order to trigger chaotic asset price fluctuations.
Introduction
Heterogeneous agent models are present in various fields of economic analysis, such as market maker models, exchange rate models, monetary policy models, overlapping generations models and models of socio-economic behaviour. Yet the field with the most systematic and perhaps most promising nonlinear dynamic approach seems to be asset price modelling. Contributions by Brock and Hommes (1998), LeBaron (2000), Hommes et al. (2002), Chiarella and He (2002), Chiarella et al. (2003), Gaunersdorfer et al. (2003), Brock et al. (2005), Hommes et al. (2005), and Hommes (2006) thoroughly demonstrate how a simple standard pricing model is able to lead to complex dynamics that makes it extremely hard to predict the evolution of prices in asset markets. The main framework of analysis of such asset pricing models constitutes a financial market application for the evolutionary selection of expectation rules, introduced by Brock and Hommes (1997a) and is called the adaptive belief system. As a model in which different agents have the ability to switch beliefs, the adaptive belief system in a standard discounted value asset pricing set-up is derived from mean-variance maximization and extended to the case of heterogeneous beliefs (Hommes, 2006, p. 47). It can be formulated in terms of deviations from a benchmark fundamental and therefore used in experimental and empirical testing of deviations from the rational expectations benchmark. Agents are boundedly rational, act independently of each other and select a forecasting or investment strategy based upon its recent relative performance. The key feature of such systems, which often incorporate active learning and adaptation, is endogenous heterogeneity (cf. LeBaron, 2002), which means that markets can move through periods that support a diverse population of beliefs, and others in which these beliefs and strategies might collapse down to a very small set.
The mixture of different trader types leads to diverse dynamics exhibiting some stylized, qualitative features observed in practice on financial markets (cf. Campbell et al., 1997;Johnson et al., 2003), e.g. persistence in asset prices, unpredictability of returns at daily horizon, mean reversion at long horizons, excess volatility, clustered volatility, and leptokurtosis of asset returns. An important finding so far was that irregular and chaotic behaviour is caused by rational choice of prediction strategies in the bounded-rationality framework, and that this also exhibits quantitative features of asset price fluctuations, observed in financial markets. Namely, due to differences in beliefs these models generate a high and persistent trading volume, which is in sharp contrast to no trade theorems in rational expectations models. Fractions of different trading strategies fluctuate over time and simple technical trading rules can survive evolutionary competition. On average, technical analysts may even earn profits comparable to the profits earned by fundamentalists or value traders.
While recent literature on asset price modelling focuses mainly on impacts of heterogeneity of beliefs in the standard adaptive belief system as set up by Brock and Hommes (1997a) on market dynamics and stability on one hand, and the possibility of the survival of such 'irrational' and speculative traders in the market on the other, several crucial issues regarding the foundations of asset price modelling and its underlying theoretical findings remain open and indeterminate. One of those issues is related to heterogeneity in investors' time horizon; both their planning and their evaluation perspective. Namely, it has been scarcely addressed so far how memory in the fitness measure, i.e. the share of past information that boundedly rational economic agents take into account as decision makers, affects stability of evolutionary adaptive systems and survival of technical trading. LeBaron (2002) was using simulated agent-based financial markets of individuals following relatively simple behavioural rules that are updated over time. Actually, time was an essential and critical feature of the model. It has been argued that someone believing that the world is stationary should use all available information in forming his or her beliefs, while if one views the world as constantly in a state of change, then it will be better to use time series reaching a shorter length into the past. The dilemma is thus seen as an evolutionary challenge where long-memory agents, using lots of past data, are pitted against shortmemory agents to see who takes over the market. Agents with a short-term perspective appear to both influence the market in terms of increasing volatility and create an evolutionary space where they are able to prosper. Changing the population to more longmemory types has led to a reliable convergence in strategies. Memory or perhaps the lack of it therefore appeared to be an important aspect of the market that is likely to keep it from converging and prevent the elimination of 'irrational', speculative strategies from the market. Honkapohja and Mitra (2003) provided basic analytical results for dynamics of adaptive learning when the learning rule had finite memory and the presence of random shocks precluded exact convergence to the rational expectations equilibrium. The authors focused on the case of learning a stochastic steady state. Even though their work is not done in the heterogeneous agent setting, the results they obtained are interesting for our analysis. Their fundamental outcome was that the expectational stability principle, which plays a central role in situations of complete learning, as discussed e.g. in Evans and Honkapohja (2001), retains its importance in the analysis of incomplete learning, though it takes a new form. In the models that were analyzed, expectational stability guaranteed stationary dynamics in the learning economy and unbiased forecasts. Chiarella et al. (2006) proposed a dynamic financial market model in which demand for traded assets had both a fundamentalist and a chartist component in the boundedly rational framework. The chartist demand was governed by the difference between current price and a (long-run) moving average. By examining the price dynamics of the moving average rule they found out that an increase of the window length of the moving average rule can destabilize an otherwise stable system, leading to more complicated, even chaotic behaviour. 
The analysis of the corresponding stochastic model was able to explain various market price phenomena, including temporary bubbles, sudden market crashes, price resistance and price switching between different levels.
The objective of this chapter is to lay the foundations for a competent and critical theoretical analysis setting the memory assumption in a simple, analytically tractable asset pricing model with heterogeneous beliefs. We shall thus analyze the effects of additional memory in the fitness measure on evolutionary adaptive systems and the nature of consequences for survival of technical trading. In order to examine our research hypothesis adequately, both analytical and numerical analysis will have to be employed and complemented. Therefore, we shall first expand the asset pricing model to include more memory, and then solve it both analytically and numerically. Two cases are going to be analyzed, hopefully sufficiently general to cover some main aspects of financial markets; (1) a two-type case of fundamentalists versus contrarians and (2) a three-type case of fundamentalists versus opposite biased beliefs. Complementing the stability analysis with local bifurcation theory (cf. Awrejcewicz, 1991;Palis and Takens, 1993;Kuznetsov, 1995;Awrejcewicz and Lamarque, 2003), we will also be able to analyze numerically the effects of adding different amounts of additional memory to fitness measure on stability of the standard asset pricing model and survival of technical trading. Thus the analysis of both local and global stability can be performed for different combinations of trader types in the market.
The heterogeneous agents model
The adaptive belief system employs a mechanism dealing with interaction between fractions of market traders of different types, and the distance between the fundamental and the actual price. Financial markets are thus viewed as an evolutionary system, where price fluctuations are driven by an evolutionary dynamics between different expectation schemes. Pioneering work in this field has been done by Brock and Hommes (1997a), who attempted to conciliate the two main perspectives concerning economic fluctuations, i.e. the new classical and the Keynesian view (cf. Hommes, 2006, pp. 1-5), and the underlying rules relating to the formation of expectations. In order to get some insight into possible ways of theoretical analysis to follow, we shall describe a simple, analytically tractable version of the asset pricing model as constructed by Brock and Hommes (1998). The model can be viewed as composed of two simultaneous parts; present value asset pricing and the evolutionary selection of strategies, resulting in equilibrium pricing equation and fractions of belief types equation. We shall also make an indication of where memory in the fitness measure (and in expectation rules) enters the model and how it might affect the analysis.
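Before setting out the equations, the flavour of such an evolutionary system can be conveyed by a minimal numerical sketch: a two-type market of fundamentalists versus contrarians in which fractions switch according to past realized profits and a memory parameter in the fitness measure. All parameter values below are illustrative, and the code is a simplified stand-in for the model developed in the following subsections, not the calibrations analysed in this chapter.

```python
import numpy as np

# Two-type adaptive belief system in deviations x_t = p_t - p*:
# type 1: fundamentalists (forecast 0); type 2: contrarians (forecast g * x_{t-1}, g < 0).
R, g, beta, a_sigma2, w = 1.1, -1.5, 3.0, 1.0, 0.5   # w = memory weight in the fitness measure

T = 200
x = np.zeros(T); x[0], x[1] = 0.10, 0.12
U = np.zeros(2)                        # accumulated fitness of the two types
n = np.array([0.5, 0.5])               # initial fractions

for t in range(2, T):
    f = np.array([0.0, g * x[t - 1]])           # forecasts of the next deviation
    x[t] = n @ f / R                             # equilibrium pricing in deviations
    excess = x[t] - R * x[t - 1]                 # realized excess return
    f_prev = np.array([0.0, g * x[t - 2]])       # forecasts that set demand at t-1
    profit = excess * (f_prev - R * x[t - 1]) / a_sigma2
    U = profit + w * U                           # fitness measure with memory
    expU = np.exp(beta * (U - U.max()))          # numerically safe discrete-choice weights
    n = expU / expU.sum()

print(x[-5:], n)   # inspect the tail of the deviation path and the final fractions
```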
Present value asset pricing
The model incorporates one risky asset and one risk free asset. The latter is perfectly elastically supplied at given gross return R, where R = 1 + r. Investors of different types h have different beliefs about the conditional expectation and the conditional variance of modelling variables based on a publicly available information set consisting of past prices and dividends. The present value asset pricing part of the adaptive belief system is used to model each investor type as a myopic mean variance maximizer of expected wealth, with wealth evolving according to

W_{t+1} = R W_t + (p_{t+1} + y_{t+1} - R p_t) z_{h,t},

where p_t is the price (ex dividend) at time t per share of the risky asset, y_t is an IID dividend process at time t of the risky asset, and z_{h,t} is the number of shares purchased at date t by an agent of type h. In order to perform myopic mean variance maximization of expected wealth for an investor of type h, we seek the z_{h,t} that solves

max_{z_{h,t}} { E_{h,t}[W_{t+1}] - (a/2) V_{h,t}[W_{t+1}] },

and thus

z_{h,t} = E_{h,t}[p_{t+1} + y_{t+1} - R p_t] / (a V_{h,t}[p_{t+1} + y_{t+1} - R p_t]) = E_{h,t}[p_{t+1} + y_{t+1} - R p_t] / (a \sigma^2),

where E_{h,t}[W_{t+1}] and V_{h,t}[W_{t+1}] denote the beliefs of a trader of type h about the expected value and the variance of wealth at time t + 1, conditional on all publicly available information at time t, and the conditional variance is assumed constant and equal across types, V_{h,t} = \sigma^2. All traders are assumed to be equally risk averse with a given risk aversion parameter a, which is constant over time. 1
Solving this optimization problem produces the quantities of shares purchased by agents of the different types, which enables us to seek the equilibrium between the constant supply of the risky asset per trader, z^s, and the sum of demands:

Σ_h n_{h,t} E_{h,t}[p_{t+1} + y_{t+1} - R p_t] / (a σ²) = z^s,   (4)

where the fraction of traders of type h out of altogether H types at time t is denoted by n_{h,t}, with Σ_h n_{h,t} = 1. The price of the risky asset is determined by market clearing, which can be seen by rewriting expression (4) in the form

R p_t = Σ_h n_{h,t} E_{h,t}[p_{t+1} + y_{t+1}] - a σ² z^s,

where a σ² z^s is the risk premium. The latter is an extra amount of money that traders get for holding the risky asset. Traders will only purchase the risky asset if its expected value is equal to or higher than the expected value of the risk-free asset. Since the outcome of the risky asset is uncertain, a risk premium is associated with it.
In the simplest case of IID dividends with mean ȳ and with traders having correct beliefs about dividends, i.e. E_{h,t}[y_{t+1}] = ȳ for all types h, the market price of the risky asset p_t at time t is determined by

R p_t = Σ_h n_{h,t} E_{h,t}[p_{t+1}] + ȳ - a σ² z^s + ε_t,   (5)

where a noise term ε_t is included, which represents random fluctuations in the supply of risky shares. Considering a special case with a constant zero supply of outside shares, i.e. z^s = 0, we obtain

R p_t = Σ_h n_{h,t} E_{h,t}[p_{t+1}] + ȳ + ε_t.

If we instead consider for a moment the case of homogeneous beliefs with no noise and all traders being rational, the pricing equation simplifies to

R p_t = E_t[p_{t+1} + y_{t+1}].   (7)

In equilibrium the expectations of the price will be the same and equal to the fundamental price. The constant fundamental value of the price of the risky asset, p*, in the case of homogeneous beliefs is derived from the expression

R p*_t = E_t[p*_{t+1} + y_{t+1}].   (8)

By imposing a transversality condition on expression (7), which has infinitely many solutions, we exclude bubble solutions (cf. Cuthbertson, 1996) and expression (8) now has only one solution. We are thus able to derive the fundamental price as the discounted sum of expected future dividends:

p*_t = Σ_{k=1}^{∞} E_t[y_{t+k}] / R^k.

By simplifying the fundamental price equation for the case of the IID dividend process with constant conditional expectation we thus obtain the standard benchmark notion of the 'fundamental', i.e. p* = ȳ/r, to be used in the model hereinafter.
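To make the last simplification explicit, the geometric-series step behind the benchmark fundamental can be written out as follows; this merely restates the derivation above under the stated IID-dividend assumption and introduces nothing new.

```latex
p^{*}_{t} \;=\; \sum_{k=1}^{\infty}\frac{E_t[y_{t+k}]}{R^{k}}
         \;=\; \sum_{k=1}^{\infty}\frac{\bar{y}}{R^{k}}
         \;=\; \frac{\bar{y}}{R-1}
         \;=\; \frac{\bar{y}}{r}.
```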
Taking into account the appropriate form of heterogeneous beliefs about future prices, i.e. including some deterministic function f_{h,t}, which can differ across trader types:

E_{h,t}[p_{t+1} + y_{t+1}] = E_t[p*_{t+1} + y_{t+1}] + f_{h,t},

we restrict beliefs about the next deviation of the actual from the fundamental price, x_t, to deterministic functions of past deviations from the fundamental:

f_{h,t} = f_h(x_{t-1}, ..., x_{t-L}),

where L is the number of lags of past information taken into account. Since the deterministic function in the expectation rule depends on preceding price deviations, it can also be seen as including memory. However, due to rapidly increasing analytical complexity (including more preceding price deviations rapidly increases the dimension of the system), this issue has so far mainly been neglected. In this chapter we are focusing on the memory in the fitness measure and will thus include only one lag in the expectation rule, i.e. f_{h,t} = f_h(x_{t-1}).
Taking into account that p* = ȳ/r, the equilibrium pricing equation (5) can thus finally be rewritten in terms of deviations from the fundamental price, x_t = p_t - p*:

R x_t = Σ_h n_{h,t} f_{h,t} + ε_t.

The particular form of the deterministic function in the forecasting or expectation rule is thus what determines the different types of heterogeneous agents in an adaptive belief system. In general, we distinguish between two typical investor types: fundamentalists and 'noise traders' or technical analysts. Fundamentalists believe that the price of an asset is determined solely by its efficient market hypothesis fundamental value (Fama, 1991), whereas technical analysts base their forecasts on regularities in past prices, such as trends or cycles.
Evolutionary selection of strategies
In order to be able to understand the dynamics of the fractions of different trader types, we consider the appropriate formulation of the realized excess return R_t = p_t + y_t - R p_{t-1} from expression (1) and of the realized net profits of type h,

π_{h,t} = R_t z_{h,t-1} - C_h,

where C_h represents the costs traders have to pay to use strategy h. Albeit introducing additional analytical complexity, we usually take into account the costs for the predictor of a particular trader type, since more information-intense predictors are evidently more costly. It is of course convenient to rewrite the profits of the different types of traders in terms of deviations from the benchmark fundamental. The fitness function or performance measure of each trader type can now be defined in terms of its realized profits. In fact, it can be expressed as the weighted sum of realized profits, i.e. as the sum of current realized profits and a share of past fitness, which is in turn defined in terms of past realized profits:

U_{h,t} = (1 - w) π_{h,t} + w U_{h,t-1},   (14)

where current realized profits in deviations from the fundamental are defined in the following final form:

π_{h,t} = (x_t - R x_{t-1})(f_{h,t-1} - R x_{t-1}) / (a σ²) - C_h.

The fitness function can, for U_{h,0} = 0, also be rewritten in the following expanded form with exponentially declining weights:

U_{h,t} = (1 - w) Σ_{k=0}^{t-1} w^k π_{h,t-k}.

In the case of the equilibrium pricing equation, herein formulated as the sum over trader types of the products of the fraction of a particular trader type and its deterministic function, the fitnesses enter the adaptive belief system before the equilibrium price is observed. This is suitable for analyzing the asset pricing model as an explicit nonlinear difference equation. Even though nonlinear asset pricing dynamics can be modelled either as a deterministic or a stochastic process, only the latter enables investigation of the effects of noise upon the asset pricing dynamics.
The share of past fitness in the performance measure is expressed by the parameter w, 0 ≤ w ≤ 1, called the memory strength. When the value of this parameter is zero (w = 0), the fitness is given by the most recent net realized profit. Due to analytical tractability this is at present, for the most part, the case in the existing literature on asset pricing models with heterogeneous agents, though not in this chapter. The main contribution of this chapter is that it analyzes the case of nonzero memory in the fitness measure. When the memory strength parameter takes a positive value, some share of current realized profits in any given period is taken into account when calculating the performance measure in the next time period. If the value of the memory strength parameter amounts to one, then of course the entire accumulated wealth is taken into account.
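A minimal sketch of this recursion may help fix ideas; it assumes the (1 - w)/w weighting of expression (14) and uses an arbitrary, hypothetical profit stream, so it illustrates the role of w rather than any particular calibration.

```python
def update_fitness(U_prev, profit, w):
    """One step of the fitness recursion U_t = (1 - w) * pi_t + w * U_{t-1}.

    w = 0 reproduces the memoryless case (fitness = most recent net realized profit),
    while w close to 1 lets past performance dominate the measure.
    """
    return (1.0 - w) * profit + w * U_prev

# The same hypothetical profit stream evaluated with different memory strengths.
profits = [1.0, -0.5, 0.8, 0.2, -1.0]
for w in (0.0, 0.5, 0.9):
    U = 0.0
    for pi in profits:
        U = update_fitness(U, pi, w)
    print(f"w = {w}: final fitness = {U:+.3f}")
```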
The expression (14) for the fitness function is somewhat different from the one used in Brock and Hommes (1998), where the coefficient of the current realized profits was fixed at 1.
Namely, if we rewrite the memory strength parameter in terms of a specific number of time periods T, we obtain an expression for the fitness function which is equivalent to taking the last T observations into account with equal weight (as a benchmark). When T approaches infinity, the memory parameter approaches 1 and the entire accumulated wealth is taken into account. We thus believe expression (14) to be a more suitable formulation of the fitness measure than the one used in Brock and Hommes (1998), and in several other contributions.
Finally, we can express the fractions of belief types, n_{h,t}, which are updated in each period, as discrete choice probabilities given by a multinomial logit model:

n_{h,t} = exp(β U_{h,t-1}) / Σ_{j=1}^{H} exp(β U_{j,t-1}),

by using the parameter β, which determines the intensity of choice. The latter measures how fast economic agents switch between different prediction strategies; if the value of the intensity of choice is zero, then all trader types have equal weight and the mass of traders distributes itself evenly across the set of available strategies, while on the other hand the entire mass of traders tends to use the best predictor, i.e. the strategy with the highest fitness, when the intensity of choice approaches infinity (the neoclassical limit).
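The switching mechanism is easy to experiment with numerically; the short sketch below evaluates the logit fractions for a hypothetical vector of fitnesses and two values of the intensity of choice, so the limiting behaviour described above can be checked directly.

```python
import numpy as np

def fractions(U, beta):
    """Multinomial logit (discrete choice) fractions n_h for fitnesses U_h."""
    z = beta * (np.asarray(U, dtype=float) - np.max(U))  # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

U = [0.2, -0.1, 0.0]                # hypothetical fitnesses of three strategies
print(fractions(U, beta=0.0))       # beta = 0: mass spread evenly, [1/3, 1/3, 1/3]
print(fractions(U, beta=50.0))      # large beta: mass concentrates on the fittest strategy
```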
Trader fractions are therefore determined by fitness and intensity of choice. Rationality in the asset pricing model is evidently bounded, since fractions are ranked according to fitness, but not all agents choose the best predictor. To ensure that fractions of belief types depend only upon observable deviations from the fundamental at any given time period, fitness function in the fractions of belief types equation may only depend on past fitness and past return. This indeed ensures that past realized profits are observable quantities that can be used in predictor selection.
One might wonder whether the traders' myopic mean-variance maximization is a reasonable assumption, especially when we allow for traders with a longer memory span. This assumption is widely used in modelling in economics and finance, though it would certainly be interesting to let traders plan longer ahead, even with an infinite planning horizon, as in the Lucas (1978) asset pricing model. However, in this kind of model one usually assumes perfect rationality to keep the analysis tractable. So far very little work has been done on infinite horizon models with bounded rationality and heterogeneous beliefs. Furthermore, one can also discuss whether individuals are really able to plan over a long horizon, or whether they might use simple heuristics over a short horizon and occasionally adapt them. After all, memory in the fitness measure is not equivalent to the planning horizon, but rather an "evaluation horizon" used to decide whether or not to switch strategies. There is empirical and experimental evidence that humans give more weight to the recent past than the far distant past, and this is formalized in our model.
Fundamentalists versus Contrarians
The first case we are going to examine is a two-type heterogeneous agents model with fundamentalists and contrarians as market participants. Fundamentalists exhibit a deterministic function of the form f_{1,t} = 0 and have some positive information gathering costs C, i.e. C > 0. Contrarians exhibit a deterministic function f_{2,t} = g x_{t-1} with g < 0, and zero information gathering costs. It is thus a case of fundamentalists versus pure contrarians. We have the following fractions of belief types equation:

n_{h,t} = exp(β U_{h,t-1}) / (exp(β U_{1,t-1}) + exp(β U_{2,t-1})),   h = 1, 2.

For convenience we shall also introduce the difference in fractions m_t:

m_t = n_{1,t} - n_{2,t}.

Finally, we have the fitness measure equation of each type:

U_{1,t} = (1 - w) [ (x_t - R x_{t-1})(0 - R x_{t-1}) / (a σ²) - C ] + w U_{1,t-1},   (22)
U_{2,t} = (1 - w) [ (x_t - R x_{t-1})(g x_{t-2} - R x_{t-1}) / (a σ²) ] + w U_{2,t-1}.   (23)

In order to analyze memory in our heterogeneous asset pricing model, we shall first determine the position and stability of the steady state and of the period two-cycle in relation to the memory strength parameter. We will also examine possible qualitative changes in the dynamics. Then we will perform some numerical simulations to combine global stability analysis with local stability analysis.
Position of the steady state
In our two-type heterogeneous agents model of fundamentalists versus contrarians the equilibrium pricing equation has the following form:

R x_t = n_{1,t} · 0 + n_{2,t} g x_{t-1} = n_{2,t} g x_{t-1}.

A steady state price deviation x^eq is a fixed point of the system, i.e. it satisfies x = f(x) for the mapping f(x). In our two-type heterogeneous agents model of fundamentalists versus contrarians we thus have R x^eq = n_2^eq g x^eq, where either x^eq = 0 or n_2^eq = R/g. Since g < 0, R > 0 and n_2^eq must lie between 0 and 1, only the former case is possible, and we get the fundamental steady state, where the price is equal to its fundamental value and the difference in fractions is

m^eq = tanh( β (U_1^eq - U_2^eq) / 2 ).   (26)

Since it follows from expressions (22) and (23) that U_1^eq = -C and U_2^eq = 0 when w ≠ 1, the steady state difference in fractions simplifies to

m^eq = -tanh( β C / 2 ).   (27)

Therefore we can state the following lemma: the fundamental steady state x^eq = 0 is the unique steady state of the system, and since the memory strength parameter w appears neither in expression (27) nor in expression (26), memory does not affect the position of this steady state.
Stability of the steady state
In order to analyze the stability of the steady state we shall rewrite our system as a difference equation in the vector of new variables X_t = (x_t, x_{t-1}, x_{t-2}, U_{1,t}, U_{2,t}). We therefore obtain a 5-dimensional first-order difference equation, the system (31)-(35). The local stability of a steady state is determined by the eigenvalues of the Jacobian matrix of this 5-dimensional map, which we do not present here due to spatial limitations. At the fundamental steady state X^eq = (0, 0, 0, -C, 0) a straightforward computation shows that the characteristic equation is in our case given by:
Bifurcations and the Period Two-cycle
A bifurcation is a qualitative change of the dynamical behaviour that occurs when parameters are varied (Brock and Hommes, 1998). A specific type of bifurcation that occurs when one parameter is varied is called a co-dimension one bifurcation. There are several types of such bifurcations, viz. period doubling, saddle-node and Hopf bifurcations. The first type has eigenvalue -1 of the Jacobian matrix, the second type has eigenvalue 1 and the third type has complex eigenvalues on the unit circle.
If we take a look at the eigenvalue that is of interest in our case, we can observe that a saddle-node bifurcation can never occur. Namely, the expression

R = g n_2^eq

can never hold, since the left-hand side is a positive constant and the right-hand side is always negative for g < 0, R > 0 and n_2^eq > 0. On the other hand, the expression

-R = g n_2^eq

may be satisfied for n_2^eq > 0, since both sides of the expression are then negative. Thus a (primary) period doubling bifurcation may occur in our model for the following β-value:

β* = (1/C) ln( -R / (R + g) ).

Therefore we can state the following lemma. As in the paper of Brock and Hommes (1998), very strong contrarians with g < -2R may lead to the existence of a period two-cycle, even when there are no costs for fundamentalists (C = 0). When the fundamentalists' costs are positive (C > 0), strong contrarians with -2R < g < -R may lead to a period two-cycle. As the intensity of choice increases to β = β*, a period doubling bifurcation occurs in which the fundamental steady state becomes unstable and a (stable) period two-cycle is created, with one point above and the other one below the fundamental.
When the intensity of choice further increases, we are likely to find a value β = β**, for which the period two-cycle becomes unstable and a Hopf bifurcation of this period two-cycle occurs, as in Brock and Hommes (1998). The model would then have an attractor consisting of two invariant circles around each of the two (unstable) period two-points, one lying above and the other one below the fundamental. Immediately after such a Hopf bifurcation, the price dynamics is either periodic or quasi-periodic, jumping back and forth between the two circles. The proof of this phenomenon is not straightforward due to the non-zero period points, although the 5-dimensional system (31)-(35) is still symmetric with respect to the origin. We shall thus demonstrate the occurrence of the Hopf bifurcation and the emergence of the attractor numerically in the next section.
Numerical analysis
Our numerical analysis in the case of fundamentalists and contrarians will be conducted for fixed values of the parameters R = 1.1, k = 1.0, C = 1.0 and g = -1.5. We shall thus vary the intensity of choice parameter β and of course the memory strength parameter w. Four analytical tools will be used:2 bifurcation diagrams, largest Lyapunov characteristic exponent (LCE) plots, phase plots, and time series plots.
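Before turning to the figures, a compact Python sketch of the two-type dynamics may be useful for readers who wish to reproduce this kind of experiment. It assumes the standard Brock-Hommes timing and the (1 - w)/w fitness weighting introduced above; the normalization a σ² = 1 (here `a_sigma2`) is an assumption made for illustration and does not reproduce the chapter's parameter k exactly.

```python
import numpy as np

# Chapter-style parameters for the contrarian case; a_sigma2 = a * sigma^2 is an assumed normalization.
R, g, C, w, a_sigma2 = 1.1, -1.5, 1.0, 0.3, 1.0

def simulate(beta, T=2000, x0=0.01):
    x = [x0, x0]                 # price deviations x_{t-1}, x_{t-2}
    U = np.zeros(2)              # fitnesses of fundamentalists and contrarians
    path = []
    for _ in range(T):
        f = np.array([0.0, g * x[-1]])                  # forecasts of the next deviation
        n = np.exp(beta * (U - U.max())); n /= n.sum()  # multinomial logit fractions
        x_new = (n @ f) / R                             # equilibrium pricing in deviations
        f_prev = np.array([0.0, g * x[-2]])             # forecasts made one period earlier
        pi = (x_new - R * x[-1]) * (f_prev - R * x[-1]) / a_sigma2 - np.array([C, 0.0])
        U = (1.0 - w) * pi + w * U                      # fitness with memory
        x.append(x_new)
        path.append(x_new)
    return np.array(path)

# Crude bifurcation scan: report the spread of long-run deviations over a grid of beta values.
for beta in (1.0, 2.5, 4.0, 5.5):
    tail = simulate(beta)[-200:]
    print(f"beta = {beta}: min = {tail.min():+.3f}, max = {tail.max():+.3f}")
```

Scanning β in this way gives a rough, text-only analogue of a bifurcation diagram: a single repeated value indicates a stable steady state, a two-valued spread a period two-cycle, and a widening range increasingly complicated dynamics.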
The dynamic behaviour of the system can first and foremost be determined by investigating bifurcation diagrams. In Figure 1 the bifurcation diagrams for two different values of the memory strength parameter are presented. We can observe that for low values of β we have a stable steady state, i.e. the fundamental steady state. As has been proven in Lemma 1, the position of this steady state, i.e. x^eq = 0, is independent of the memory, which is clearly demonstrated by the simulations. For increasing β a (primary) period doubling bifurcation occurs at β = β*; the steady state becomes unstable and a stable period two-cycle appears, as proven in Lemma 3. As can be seen from the simulations, this bifurcation value is also independent of the memory. The stability of the steady state is thus unaffected by the memory, as proven in Lemma 2.
If β increases further, indeed a (secondary) Hopf bifurcation occurs at β = β**, as has been claimed in Section 3.3; the period two-cycle becomes unstable and an attractor appears consisting of two invariant circles around each of the two (unstable) period two-points, one lying above and the other one below the fundamental. It is a supercritical Hopf bifurcation, where the steady state gradually changes either into an unstable equilibrium or into an attractor (cf. Guckenheimer and Holmes, 1983; Frøyland, 1992; Kuznetsov, 1995). The position of the period two-cycle is independent of the memory, but it is not independent of the intensity of choice, as can be seen from expression (40). Numerical simulations suggest that the secondary bifurcation value also does not vary with a changing memory strength parameter w. For β > β** chaotic dynamic behaviour appears, which is interspersed with many (mostly higher order) stable cycles. Such a bifurcation route to chaos has also been called the rational route to randomness (Brock and Hommes, 1997a), while the last part of it has been referred to as the breaking of an invariant circle.
Notes: Horizontal axis represents the intensity of choice (β). Vertical axis represents deviations of the price from the fundamental value (x) in the upper two diagrams and the value of the largest LCE in the lower two diagrams, respectively. The diagrams differ with respect to the memory strength parameter w; the left one corresponds to w = 0.3, while the right one corresponds to w = 0.9.
Figure 1. Bifurcation diagrams and largest LCE plots over β in case of fundamentalists versus contrarians
By examining the largest Lyapunov characteristic exponent (LCE) plots over β we arrive at the same conclusions about the dynamic behaviour of the system. It can be seen from Figure 1 that the largest LCE is smaller than 0 and the system is thus stable until the primary bifurcation, which is independent of memory. At the bifurcation value a qualitative change in dynamics occurs, i.e. a period doubling bifurcation, and we obtain a stable period two-cycle. The largest LCE is again smaller than 0 and the system is thus stable until the secondary bifurcation. At this bifurcation value again a qualitative change in dynamics occurs, i.e. a Hopf bifurcation, but the dynamics is more complicated.
For lower values of w the largest LCE after β** is non-positive, but close to 0, which implies quasi-periodic dynamics. After some transient period the largest LCE becomes mainly positive, with exceptions, which implies chaotic dynamics interspersed with stable cycles. In fact, the largest LCE plot has a fractal structure (cf. Hommes, 1998, p. 1258). In the case of w = 0.9 the global dynamics after β** immediately becomes chaotic. Memory thus certainly affects the dynamics after the secondary bifurcation. Since the primary bifurcation is a period doubling bifurcation, we are talking about a period doubling route to chaos.
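For readers who want to check such statements independently, the largest LCE can be approximated directly from the simulated map. The sketch below uses the Benettin two-trajectory method and reuses the two-type step from the earlier sketch; the state vector, the normalization a σ² = 1 and the value β = 4.0 are illustrative assumptions rather than values read off the figures.

```python
import numpy as np

R, g, C, w, beta, a_sigma2 = 1.1, -1.5, 1.0, 0.3, 4.0, 1.0

def step(s):
    """One iteration of the two-type map; state s = (x_{t-1}, x_{t-2}, U_1, U_2)."""
    x1, x2, U1, U2 = s
    U = np.array([U1, U2])
    n = np.exp(beta * (U - U.max())); n /= n.sum()
    x_new = (n @ np.array([0.0, g * x1])) / R
    f_prev = np.array([0.0, g * x2])
    pi = (x_new - R * x1) * (f_prev - R * x1) / a_sigma2 - np.array([C, 0.0])
    U_new = (1.0 - w) * pi + w * U
    return np.array([x_new, x1, U_new[0], U_new[1]])

def largest_lce(step_fn, state0, T=5000, d0=1e-8, discard=500):
    """Benettin-style estimate: iterate two nearby trajectories, renormalizing their separation."""
    s1 = np.asarray(state0, dtype=float)
    s2 = s1 + d0 * np.ones_like(s1)
    total, count = 0.0, 0
    for t in range(T):
        s1, s2 = step_fn(s1), step_fn(s2)
        d = np.linalg.norm(s2 - s1) or d0      # avoid log(0) if the trajectories merge
        if t >= discard:
            total += np.log(d / d0); count += 1
        s2 = s1 + (d0 / d) * (s2 - s1)         # renormalize the separation
    return total / count

print(largest_lce(step, [0.01, 0.011, 0.0, 0.0]))  # > 0 suggests chaos, <= 0 (quasi-)periodicity
```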
Next, we shall examine plots of the attractors in the (xt, xt-1) plane and in the (xt, n1,t) plane,3 without noise and with IID noise added to the supply of risky shares. In the upper left plot of each of the four parts of Figures 2 and 3 we can first observe the appearance of an attractor for the intensity of choice beyond the secondary bifurcation value. The orbits converge on such an attractor consisting of two invariant 'circles' around each of the two (unstable) period two-points,4 one lying above and the other one below the fundamental value. As the intensity of choice increases, the circles 'move' closer to each other. In the upper right and lower left plot of each of the four parts of Figures 2 and 3 we can observe that the system already seems to be close to having a homoclinic orbit. The stable manifold of the fundamental steady state, W^s(0, m^eq), contains the vertical segment x = 0, whereas the unstable manifold, W^u(0, m^eq), has two branches, one moving to the right and one to the left. Both of them are then 'folding back' close to the stable manifold.
For β = ∞, Brock and Hommes (1998, p. 1254) have proven for the asset pricing model without additional memory that, with strong contrarians, g < -R, the unstable manifold W^u(0, -1) is bounded and all orbits converge on the saddle point (0, -1).
In particular, all points of the unstable manifold converge on (0, -1) and are thus also on the stable manifold. Consequently, the system has homoclinic orbits for infinite intensity of choice. In the case of strong contrarians and high intensity of choice it is therefore reasonable to expect that we will obtain a system close to having a homoclinic intersection between the stable and unstable manifolds of the fundamental steady state. This is indeed what can be observed from the lower left plot of each of the two parts of Figures 2 and 3 and it suggests the occurrence of chaos for high intensity of choice. As can be seen from the lower right plot of each of the two parts of Figures 2 and 3, the addition of small dynamic noise to the system does not alter our findings.
Notes: Horizontal axis represents deviations of the price from the fundamental value (xt). Vertical axis represents lagged deviations of the price from the fundamental value (xt-1). The groups of four diagrams differ with respect to the memory strength parameter w; the left group corresponds to w = 0.3, while the right group corresponds to w = 0.9.
Figure 2. Phase plots of (xt, xt-1) in case of fundamentalists versus contrarians
Notes: Horizontal axis represents deviations of the price from the fundamental value (xt). Vertical axis represents the fraction of fundamentalists (n1,t). The groups of four diagrams differ with respect to the memory strength parameter w; the left group corresponds to w = 0.3, while the right group corresponds to w = 0.9.

Figure 3. Phase plots of (xt, n1,t) in case of fundamentalists versus contrarians

Again, we can observe that memory has an impact on the global dynamics of the system. That is, both the convergence of the system on an attractor consisting of two invariant 'circles' around each of the two unstable period two-points and the 'moving' of the circles closer to each other seem to happen faster (at lower intensity of choice) when more memory is present in the model. Moreover, at the same intensity of choice we seem to be closer to obtaining a system that has a homoclinic intersection between the stable and unstable manifolds of the fundamental steady state when the memory strength is higher.
Finally, we shall examine time series plots of deviations of the price from the fundamental value and of the fraction of fundamentalists 5 . Figure 4 shows some time series corresponding to the attractors in Figures 2 and 3, with and without noise added to the supply of risky shares. Similarly to the findings of Brock and Hommes (1998), we can observe that the asset prices are characterized by an irregular switching between a stable phase with prices close to their (unstable) fundamental value and an unstable phase of up and down price fluctuations with increasing amplitude.
This irregular switching is of course reflected in the fractions of fundamentalists and contrarians in the market. Namely, when the oscillations of the price around the unstable steady state gain sufficient momentum, it becomes profitable for a trader to follow the efficient market hypothesis fundamental value despite the costs that are involved in this strategy. The fraction of fundamentalists approaches unity and the asset price stabilizes. But then the nonzero costs of the fundamentalists bring them into a position where they are unable to compete in the market; the fraction of fundamentalists rapidly decreases to zero, while the fraction of contrarians with no costs approaches unity with equal speed. The higher the intensity of choice, ceteris paribus, the faster this transition is completed; when β approaches the neoclassical limit, the entire mass of traders tends to use the best predictor with respect to costs, i.e. the strategy with the highest fitness.

Additional memory does not change the pattern of asset prices per se, but it does affect its period. Namely, at the same intensity of choice and higher memory strength the period of this irregular cycle appears to be elongated on average, in such a way that the stable phase with prices close to their fundamental value lasts longer, while the duration of the unstable phase of up and down price fluctuations does not change significantly. The effect of including more memory thus mainly appears to be stabilizing with regard to asset prices. With regard to the fractions of different trader types we could say that including additional memory affects the transition from the short period of fundamentalists' dominance to the longer period of contrarians' dominance in the market. This transition takes more time to complete at the same intensity of choice. More memory thus causes the traders to stick longer to the strategy that has been profitable in the past, but might not be so profitable in the recent periods.
Notes: Horizontal axis represents the time (t). Vertical axis in each pair of time series plots first represents deviations of the price from the fundamental value (xt), and then the fraction of fundamentalists (n1,t). The plots on the left-hand side and the right-hand side of the figure differ with respect to the memory strength parameter w; the ones on the left correspond to w = 0.3, while the ones on the right to w = 0.9.

Figure 4. Time series plots in case of fundamentalists versus contrarians

Fundamentalists versus Opposite Biased Beliefs

The second case we examine is a three-type heterogeneous agents model with fundamentalists (f1,t = 0), optimistically biased traders (f2,t = b2 > 0) and pessimistically biased traders (f3,t = b3 = -b2 < 0), all with zero information gathering costs. Rewriting the system in terms of a vector of new variables in the same way as before, we again obtain a 5-dimensional first-order difference equation. Our three-type heterogeneous agents model of fundamentalists versus biased beliefs can in general have several steady state price deviations; here we obtain the fundamental steady state, x^eq = 0. This is implied by U1^eq = U2^eq = U3^eq = 0 when w ≠ 1 and consequently by n1^eq = n2^eq = n3^eq = 1/3, originating from the rewritten expression (44).
By performing a generalization we can state the following lemma (Lemma 4): the fundamental steady state x^eq = 0 is the unique steady state of the system, and memory does not affect its position.
Proof of Lemma 4:
We will prove a more general result for the case with h = 1, …, H purely biased types b_h (including fundamentalists with b_1 = 0). Proceeding from the non-transformed variables of the system, the steady states of expressions (55) and (57), or of expression (58), are determined by a condition in which r = R - 1. Since a steady state has to satisfy expression (60), following Brock and Hommes (1998, p. 1271), a straightforward computation shows that this condition can only be met at the fundamental value, where the inequality used in the computation follows from the fact that the term between square brackets can be interpreted as the variance of a stochastic process in which each b_h is drawn with probability n_h. Therefore the fundamental steady state is the only steady state, and its position does not depend on the memory strength.
Stability of the steady state and bifurcations
The local stability of a steady state is again determined by the eigenvalues of the Jacobian matrix. At the fundamental steady state X^eq = (0, 0, 0, 0, 0) the Jacobian matrix exhibits the characteristic equation (62), which has three solutions, two of them being double roots. The fundamental steady state is stable when all eigenvalues are smaller than 1 in modulus, which in our case reduces to the product of the eigenvalues λ4,5 being smaller than one. Thus we can state the following lemma: memory affects the stability of this steady state by restricting stability to a given interval of the parameter value.
Proof of Lemma 5:
From the characteristic equation (62) we can observe five eigenvalues. The first three eigenvalues always assure stability, while the last two eigenvalues limit stability. Given k > 0, b > 0, β ≥ 0, R > 1 and 0 ≤ w < 1, the condition for stability in terms of β implies an upper bound on the intensity of choice, and this bound increases with the memory strength; similarly, the condition for stability in terms of w indicates a lower bound on the memory strength for a given intensity of choice. Memory therefore affects the stability of the steady state as shown.
If we now take a look at the eigenvalues λ4,5 of the characteristic equation (62), which are of interest in our case, we can observe that a saddle-node bifurcation can never occur: the corresponding condition can never hold, since β ≥ 0 and the left-hand side is always non-negative, while R > 1 and the right-hand side is always negative. On the other hand, a period doubling bifurcation cannot occur either: the corresponding condition can never hold, since β ≥ 0 and the left-hand side is again always non-negative, while 0 ≤ w < 1 and the right-hand side is either negative or not defined.
The remaining qualitative change of the three discussed in Section 4.3 is the Hopf bifurcation. For this to occur, a complex conjugate pair of eigenvalues has to cross the unit circle. The eigenvalues λ4,5 are complex for an interval of parameter values, and requiring them to cross the unit circle produces the critical value β*. We therefore state the following lemma: at β = β* the system exhibits a Hopf bifurcation. Memory affects the emergence of this bifurcation, viz. with more memory the bifurcation occurs later.
As we have just established, in the case of fundamentalists versus opposite biased beliefs increasing intensity of choice to switch predictors destabilizes the fundamental steady state. This happens through a Hopf bifurcation. We can thus conclude, as did Brock and Hommes (1998) for the simpler version of the model, that in the presence of biased agents the first step towards complicated price fluctuations is different from that in the presence of contrarians. This fact does not change when we take memory into account.
Proof of Lemma 6:
When β increases, the terms containing β in the expressions for the eigenvalues λ4,5 increase as well, and one of the eigenvalues has to cross the unit circle at some critical β*. Since the memory strength parameter is present in the expression for β*, memory affects the emergence of this bifurcation; the higher the value of this parameter, the higher the bifurcation value.
Numerical analysis
Our numerical analysis in the case of fundamentalists and opposite biased beliefs will be conducted for fixed values of the parameters R = 1.1, k = 1.0, b2 = 0.2 and b3 = -0.2. We shall thus vary the memory strength parameter w and the intensity of choice parameter β. The same four analytical tools will be used as in Section 3.4.
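The simulation sketch from the contrarian case carries over with only the forecasting rules changed. The variant below assumes zero information gathering costs and the purely biased forecasts f2 = +b and f3 = -b, with the same illustrative normalization a σ² = 1 as before; the value β = 80 is arbitrary and chosen only to illustrate post-bifurcation behaviour.

```python
import numpy as np

R, b, w, a_sigma2 = 1.1, 0.2, 0.3, 1.0

def simulate_biases(beta, T=3000, x0=0.01):
    x = [x0]
    U = np.zeros(3)                                   # fitnesses of the three types
    for _ in range(T):
        f = np.array([0.0, b, -b])                    # constant forecasts of the next deviation
        n = np.exp(beta * (U - U.max())); n /= n.sum()
        x_new = (n @ f) / R
        pi = (x_new - R * x[-1]) * (f - R * x[-1]) / a_sigma2   # constant rules: previous forecasts equal f
        U = (1.0 - w) * pi + w * U
        x.append(x_new)
    return np.array(x)

tail = simulate_biases(beta=80.0)[-200:]
print(f"min = {tail.min():+.4f}, max = {tail.max():+.4f}")   # bounded oscillations around the fundamental
```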
Dynamic behaviour of the system can again first and foremost be determined by investigating bifurcation diagrams. From Figure 5 we can observe that for low values of β we have a stable steady state, i.e. the fundamental steady state. As has been proven in Lemma 4, the position of this steady state, i.e. x^eq = 0, is independent of the memory, which is clearly demonstrated by the simulations. For increasing β a bifurcation occurs at β = β*, which is a Hopf bifurcation; the steady state becomes unstable and an attractor appears, consisting of an invariant circle around the (unstable) steady state. It is again a supercritical Hopf bifurcation, where the steady state gradually changes either into an unstable equilibrium or into an attractor.
The bifurcation value varies with the changing memory strength parameter, as given by the expression in Lemma 6. As can also be seen from Figure 5, at higher memory strength the bifurcation occurs later. For β > β* complex dynamical behaviour appears, which is interspersed with stable cycles. As we have already discovered in Section 4.2, irrespective of the amount of additional memory that is taken into account, such a (bifurcation) route to complicated dynamics is different from that in the presence of contrarians, where we observed a period doubling route to chaos (the rational route to randomness).
By examining the largest Lyapunov characteristic exponent (LCE) plots over β we arrive at more precise conclusions about the dynamic behaviour of the system. It can be seen from Figure 5 that the largest LCE is smaller than 0 and the system is thus stable until the bifurcation. At the bifurcation value a qualitative change in dynamics occurs, i.e. a Hopf bifurcation. The dynamics is somewhat more complicated. Namely, we can observe that the largest LCE after β = β* is non-positive, but mainly close to 0, which implies periodic and quasi-periodic dynamics, i.e. for high values of the intensity of choice only regular (quasi-)periodic fluctuations around the unstable fundamental steady state occur. An important finding is that the predominating quasi-periodic dynamics does not seem to evolve into chaotic dynamics, and the route to complex dynamics is indeed different from the routes examined so far.
Notes: Horizontal axis represents the intensity of choice (β). Vertical axis represents deviations of the price from the fundamental value (x) in the upper two diagrams and the value of the largest LCE in the lower two diagrams, respectively. The diagrams differ with respect to the memory strength parameter w; the left one corresponds to w = 0.3, while the right one corresponds to w = 0.9.

Figure 5. Bifurcation diagrams and largest LCE plots over β in case of fundamentalists versus opposite biases

Next, we shall examine plots of the attractors in the planes determined by (xt, xt-1) and (xt, n1,t).
In the upper left plot of each of the two parts of Figure 6 we can first observe the appearance of an attractor for the intensity of choice beyond the bifurcation value. The orbits converge to such an attractor consisting of an invariant 'circle' around the (unstable) fundamental steady state. The attractor obtained in the (xt, n1,t) plane is somewhat different. Namely, the unstable steady state dissipates into numerous points and evolves into a 'loop' shape, as shown in Figure 7.
As the intensity of choice increases, the dynamics remains periodic or quasi-periodic; in case of past deviations of prices from the fundamental value and fractions of biased beliefs the invariant circle slowly changes its shape into a '(full) square' (see Figure 6), while in case of fractions of fundamentalists the loop slowly changes into a 'three-sided square' (see Figure 7).
For high values of intensity of choice we seem to obtain (stable) higher period cycles; in the case of past deviations of prices from the fundamental value and fractions of biased beliefs we seem to attain a stable period four-cycle, while in the case of fractions of fundamentalists it is difficult to obtain any solid indications based solely on numerical simulations due to convergence problems for very high values of intensity of choice. In the latter case we can observe stable period four- and six-cycles, however (see lower right plot of each of the two parts of Figure 7). Indeed, Brock and Hommes (1998) proved for the case of exactly opposite biased beliefs and infinite intensity of choice in their simpler version of the model without additional memory that the system has a stable four-cycle attracting all orbits, except for hairline cases converging on the unstable fundamental steady state. Additionally, they discovered that for all three trader types average profits along the four-cycle equal b².

Notes: Horizontal axis represents deviations of the price from the fundamental value (xt). Vertical axis represents lagged deviations of the price from the fundamental value (xt-1). The groups of four diagrams differ with respect to the memory strength parameter w; the left group corresponds to w = 0.3, while the right group corresponds to w = 0.9.

Figure 6. Phase plots of (xt, xt-1) in case of fundamentalists versus opposite biases

Notes: Horizontal axis represents deviations of the price from the fundamental value (xt). Vertical axis represents the fraction of fundamentalists (n1,t). The groups of four diagrams differ with respect to the memory strength parameter w; the left group corresponds to w = 0.3, while the right group corresponds to w = 0.9.

Figure 7. Phase plots of (xt, n1,t) in case of fundamentalists versus opposite biases
Again, we can observe that the memory has an impact on the dynamics of the system. Namely, both the convergence of the system on an attractor and the further development of such an attractor seem to be dependent on the value of the memory strength parameter. The precise impact of memory is somewhat more difficult to establish due to the dependence of the bifurcation value on memory strength and the subsequent need to choose higher intensities of choice with higher memory strength in order to demonstrate different nature of attractors of the system. However, we can still establish that at the same intensity of choice (after the bifurcation value) the system apparently needs less additional memory in order to develop a specific stage of an attractor or even a (stable) higher period cycle.
Finally, we shall examine time series plots of deviations of the price from the fundamental value and of the fractions of all three types of traders. Figure 8 shows some time series corresponding to the attractors in Figures 6 and 7. We can observe that opposite biases may cause perpetual oscillations around the fundamental, even when there are no costs for fundamentalists, but can not lead to chaotic movements. Furthermore, as has already been indicated by the appearance of stable higher period cycles for high intensities of choice, in a three-type world, even when there are no costs and memory is infinite, fundamentalist beliefs can not drive out opposite purely biased beliefs, when the intensity of choice to switch strategies is high.
Hence, according to the argumentation of Hommes (1998, p. 1260), the market can protect a biased trader from his own folly if he is part of a group of traders whose biases are 'balanced' in the sense that they average out to zero over the set of types. Centralized market institutions can make it difficult for unbiased traders to prey on a set of biased traders provided they remain 'balanced' at zero. On the other hand, in a pit trading situation unbiased traders could learn which types are balanced and simply take the opposite side of the trade. In such situations biased traders would be eliminated, whereas a centralized trading institution could 'protect' them.
Additional memory does not change the pattern of asset prices and trader fractions per se, but it does affect its period. Namely, at the same intensity of choice and higher memory strength the period of these cycles appears to be elongated on average, in such a way that both the negative and the positive deviation of the price from the fundamental value last longer. The same is valid for the fractions of biased traders, while in the case of the fractions of fundamentalists the prolongation of the period of the irregular cycle appears in the form of less frequent 'spikes', which is understandable, since more persistent deviations of prices from the fundamental imply more space for biased traders and less chance for the appearance of the fundamentalists. More memory causes the traders to stick longer to the strategy that has been profitable in the past, but might not be so profitable in the recent periods; therefore the system approaches purely quasi-periodic dynamics when the memory strength increases at a given intensity of choice.
Notes: Horizontal axis represents the time (t). Vertical axis in each set of time series plots represents deviations of the price from the fundamental value (xt), and the fractions of fundamentalists (n1,t), optimistic biased beliefs (n2,t) and pessimistic biased beliefs (n3,t). The plots on the left-hand side and the right-hand side of the figure differ with respect to the memory strength parameter w; the ones on the left correspond to w = 0.3, while the ones on the right to w = 0.9.
Concluding remarks
In a market with fundamentalists and contrarians the fundamental steady state is the unique steady state of the system, which arises for low values of intensity of choice. Memory affects neither the position of this steady state nor its stability. For increasing intensity of choice a primary bifurcation, i.e. a period doubling bifurcation occurs; the steady state becomes unstable and a stable period two-cycle appears. Both the primary bifurcation value and the position of the period two-cycle are independent of the memory. For further increasing intensity of choice a secondary bifurcation, i.e. a supercritical Hopf bifurcation, occurs; the period two-cycle becomes unstable and an attractor appears consisting of two invariant circles around each of the two (unstable) period two-points, one lying above and the other one below the fundamental. For high intensity of choice chaotic asset price dynamics occurs, interspersed with many stable period cycles. Such a bifurcation route to chaos is often called the rational route to randomness.
In case of strong contrarians and high intensity of choice it is reasonable to expect that we will obtain a system that is close to having a homoclinic intersection between the stable and unstable manifolds of the fundamental steady state, which indicates the occurrence of chaos. There exists a certain limited interval of memory strength values, for which at a given intensity of choice we are more likely to obtain such a system with more additional memory in the model. A rational choice between fundamentalists' and contrarians' beliefs triggers situations that do not reach fruition due to practical considerations and are thus unattainable, 'castles in the air', as Brock and Hommes (1998, p. 1258) would put it. As a consequence we obtain market instability, characterized by irregular up and down oscillations around the unstable efficient market hypothesis fundamental price. Additional memory lengthens on average the period of this irregular cycle and mainly appears to be stabilizing with regard to asset prices.
In a market with fundamentalists and opposite biases the fundamental steady state is also the unique steady state of the system, arising for low values of intensity of choice. Memory does not affect the position of this steady state, but does affect its stability. For increasing intensity of choice a supercritical Hopf bifurcation occurs; the steady state becomes unstable and an attractor appears. Memory affects the emergence of this bifurcation; the higher the memory strength, the higher the bifurcation value. More memory thus has a stabilizing effect on dynamics. For high intensity of choice the dynamic behaviour is more complex. However, irrespective of the amount of additional memory such a route to complicated dynamics is different from that in the presence of contrarians, for after the bifurcation value only regular (quasi-)periodic fluctuations around the unstable fundamental steady state occur. Consequently, an important finding is that the predominating quasi-periodic dynamics does not seem to evolve to chaotic dynamics.
After the incidence of the bifurcation the higher value of the memory strength parameter causes the dynamics to be less periodic and more quasi-periodic; the dynamics therefore converges on purely quasi-periodic behaviour with increasing memory strength. Opposite biases may cause perpetual oscillations around the fundamental, even without costs for fundamentalists, but can not lead to chaotic movements. Furthermore, in a three-type world, even when there are no costs and memory is infinite, fundamentalist beliefs can not drive out opposite purely biased beliefs, when the intensity of choice to switch strategies is high. Hence, following the argumentation of Hommes (1998, p. 1260), the market can protect a biased trader from his own folly if he is part of a group of traders whose biases are balanced.
In conclusion, both our analytical work and our numerical simulations suggest that biases alone do not trigger chaotic asset price fluctuations. Sensitivity to initial states and irregular switching between different phases seem to be triggered by trend extrapolators; in our case by contrarians. Apparently, some (strong) trend extrapolator beliefs are needed, such as strong trend followers or strong contrarians, in order to trigger chaotic asset price fluctuations. A key feature of our heterogeneous beliefs model is that the irregular fluctuations in asset prices are triggered by a rational choice in prediction strategies, based upon realized profits, viz. the observed deviations from the fundamentals are driven by short-run profit seeking. We can also talk about rational animal spirits that, according to Brock and Hommes (1997b), exhibit some qualitative features of asset price fluctuations in the actual financial markets, such as the autocorrelation structure of prices and returns.
Author details
Miroslav | 2018-12-07T17:30:01.004Z | 2006-08-15T00:00:00.000 | {
"year": 2012,
"sha1": "77baf6310f6e81cfdf1b0c35be0c1d83203eaa86",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/40423",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "1ae970bb6ec46c9f21fff99786da04805bf777ba",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
567467 | pes2o/s2orc | v3-fos-license | Effect of soy flour on nutritional, physicochemical, and sensory characteristics of gluten‐free bread
Abstract The aim of this study was to assess the effect of soy flour on nutritional, physicochemical, and sensory characteristics of gluten‐free (GF) bread. In this study, corn flour was replaced with soy flour at different levels (5%, 10%, and 15%) to produce a more nutritionally balanced GF bread. Physical and chemical properties, sensory evaluation and crust and crumb color were measured in bread samples. The results of the evaluations showed that the protein content of soy flour‐supplemented GF bread significantly increased from 9.8% to 12.9% as compared to control, along with an increase in fat (3.3%–4.1%), fiber (0.29%–0.38%), and ash (1.7%–2.2%) content. Moisture (27.9%–26.5%) and carbohydrate (58.3–52.3) content decreased with the incremental addition of soybean flour. The highest total score of sensory evaluation was for the bread sample containing 15% soybean flour. The evaluation of crust and crumb showed that bread samples with 15% soy flour were significantly darker than the other bread samples. In conclusion, adding higher levels of soybean flour to GF bread can improve bread quality, sensory characteristics, and nutritional properties of the bread. Nutritional status in patients with celiac disease (CD) can be improved through producing GF bread in this way.
Many GF products are not enriched and are often prepared with refined GF flour or starch (Thompson, 1999). GF foodstuffs which have not been fortified are poor sources of fiber, iron, folate, thiamine, riboflavin, niacin, and protein (Thompson, 1999). Enriched or fortified GF products improve the quality of the GF diet (Jideani & Onwubali, 2009). Soybean could be an essential part of functional foods, and it could also be used to enhance product quality (Ahmad et al., 2014). Soybean contains up to 45% protein (Islam, Chowdhury, Islam, & Islam, 2007) with a digestibility value of 91.41% (Zhao et al., 2014); as a good source of vitamins and minerals it supplies adequate amounts of the different amino acids required for repairing damaged body tissues. Soy consumption is associated with a decrease in certain diseases including diabetes, atherosclerosis, and cancer (Ahmad et al., 2014; Mohammadi Sartang, Mazloomi, Tanideh, & Rezaian Zadeh, 2015).
Soybean proteins include all the essential amino acids that are important for health. The protein content of soybean is about four times that of wheat and six times that of rice grain, and soybean is also rich in Ca, P and vitamins A, B, C, and D (Islam et al., 2007; Serrem, Kock, & Taylor, 2011). Cereal fortified with soy protein, especially when mixed in the proper ratio, is one of the best sources of protein (Wadud, Abid, Ara, Kosar, & Shah, 2004). Soybean flour has been used to improve the protein quality and shelf life of bread (Mohamed, Rayas-Duarte, Shogren, & Sessa, 2006; Sanchez et al., 2004). Also, some studies have shown that adding soy flour (0.5%) to GF flour improves the quality of the bread (Sanchez, Osella, & Mdl, 2002). On the other hand, the Iranian diet depends mostly on bread as the major energy source (Rostami, Malekzadeh, Shahbazkhani, Akbari, & Catassi, 2004). The percentages of carbohydrate and fat in this type of diet have been estimated at 66% and 22% of total energy, respectively (Posner, Quatromoni, & Franz, 1994). Therefore, the main challenge for food scientists regarding GF products is the production of high-quality GF bread (Rostami et al., 2004). So, this study aimed to determine the effect of adding different percentages of soy flour on the nutritional, physicochemical and sensory characteristics of GF bread, in order to produce bread with optimum characteristics.
| Loaf specific volume
Loaf specific volume (SLV) is considered one of the most important criteria in evaluating bread quality, since it provides a quantitative measurement of baking performance (Boye, Zare, & Pletch, 2010; Tronsmo, Faergestad, Schofield, & Magnus, 2003). SLV was expressed as the volume/mass ratio of the bread. The weight (g) of the bread was determined after cooling for 60 min according to the methods described in AACC (2000). Bread volume was determined using the millet seed displacement method 1 hr after removal from the oven.
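As a small worked illustration of the ratio just described, the snippet below uses hypothetical weight and displaced-seed volume values, not measurements from this study.

```python
# Specific loaf volume: SLV (cc/g) = loaf volume (cc) / loaf weight (g).
loaf_weight_g = 250.0      # assumed weight after 60 min of cooling
loaf_volume_cc = 575.0     # assumed volume by millet seed displacement
slv = loaf_volume_cc / loaf_weight_g
print(f"SLV = {slv:.2f} cc/g")   # 2.30 cc/g, within the 1.6-2.7 cc/g range reported later
```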
| Moisture content
Moisture content was determined after storage for 24 h at room temperature (25 ± 2°C) according to the method described in AOAC (2000).
| Ash content
The ash was determined by burning the known weights of the samples in a muffle furnace as recommended by the AACC (2000).
| Crude protein
The percentage of protein was determined by the Kjeldahl method as recommended by the AOAC (1995). The conversion factor of nitrogen to protein was 6.25.
| Crude fat
The crude fat was determined by extracting a known weight of sample.
| Crude fiber
Crude fiber was determined as recommended by the AACC (2000).
| Carbohydrate content
The available carbohydrate was calculated by the difference method, i.e. as the percent dry matter minus the percentages of crude protein, fat, fiber, and ash (FAO, 2003).
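As a quick check of the arithmetic, the snippet below applies the difference method to the control-bread values reported later in the Results; it only illustrates the formula and is not part of the study's analysis.

```python
# Control-bread proximate values from the Results: moisture 27.9%, protein 9.8%,
# fat 3.3%, fiber 0.29%, ash 1.7%.
moisture, protein, fat, fiber, ash = 27.9, 9.8, 3.3, 0.29, 1.7
dry_matter = 100.0 - moisture
carbohydrate = dry_matter - (protein + fat + fiber + ash)   # carbohydrate by difference
print(f"Available carbohydrate = {carbohydrate:.1f}%")      # ~57%, close to the reported 58.3%
```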
| Sensory evaluation
Bread samples prepared with different levels of soy flour were evaluated by a 30-member taste-testing panel comprising bakery workers with more than 10 years of experience, as well as teachers, scientific officers and students of the School of Nutrition and Food Sciences affiliated with Shiraz University of Medical Sciences. The bread samples were served at the same time, as slices including the center points. The panelists were requested to evaluate the bread on the basis of the acceptance of its color, texture, taste, and overall quality on a 5-point hedonic scale which ranged from 1 (dislike extremely) to 5 (like extremely) for each organoleptic characteristic.
| Statistical analysis
The experiments were performed in a randomized design and carried out at least in triplicate. Analysis of variance (ANOVA) and the Kruskal-Wallis test were used to study the differences between samples. Duncan's multiple range test (p < .05) was used to determine the significance of differences within treatments. Statistical analysis of the data was performed using the SPSS software (SPSS, Inc., USA).
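The group comparisons described above can be reproduced in outline with SciPy; the sketch below uses made-up replicate values rather than data from the study, and notes where Duncan's test would require an additional package.

```python
from scipy import stats

# `groups` would hold replicate measurements of one response variable (e.g., protein %)
# for the 0, 5, 10 and 15% soy-flour breads; these numbers are placeholders.
groups = [[9.7, 9.8, 9.9], [10.8, 10.9, 11.0], [11.9, 12.0, 12.1], [12.8, 12.9, 13.0]]

f_stat, p_anova = stats.f_oneway(*groups)    # one-way ANOVA across the four formulations
h_stat, p_kw = stats.kruskal(*groups)        # Kruskal-Wallis test (non-parametric alternative)
print(f"ANOVA p = {p_anova:.4f}; Kruskal-Wallis p = {p_kw:.4f}")
# Duncan's multiple range test is not available in SciPy; pairwise post hoc comparisons
# could be approximated with, for example, statsmodels' Tukey HSD.
```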
| Physicochemical properties of gluten-free bread with different levels of soy flour
Bread samples were prepared with 0, 5, 10, and 15% soy flour and subsequently compositions of the bread were determined and the results were presented in Table 1.
| Moisture and ash content
Although there was no significant difference in moisture content, the highest moisture content was observed in the control bread (27.9%), which is in agreement with other studies (Banureka & Mahendran, 2011; Farzana & Mohajan, 2015; Olatidoye & Sobowale, 2011). The moisture content decreased gradually with the incremental addition of soy flour (27.9%–26.5%). This might be due to the fact that soy flour contains a higher amount of solid matter with high emulsifying properties compared to corn flour. These findings show that the fortification of GF bread with soy flour could produce a more shelf-stable bread due to its lower moisture content (Jimoh & Olatidoye, 2009).
It was seen that the highest ash content was in the sample containing 15% soy flour (2.2%) and the lowest in control bread (1.7%).
| SLV
In accordance with other studies (Islam et al., 2007; Sanchez et al., 2002), there was a reduction in SLV caused by the addition of soybean flour (Table 1). This difference was significant between the bread samples with 15% soy flour and the other bread samples. SLV gradually decreased with increasing level of soy flour in the bread formulation (the results varied from 1.6 to 2.7 cc/g). A higher specific volume could be due to large bubbles that destroy the crumb structure. Soy protein, as a water-binding factor with a stabilizing property that is unaffected during the baking process, may moderate this effect by preventing the merger of bubbles in the crumb (Sanchez et al., 2002).
| Protein content
In line with other studies (Abioye et al., 2011; Ayo, Ayo, Popoola, Omosebi, & Joseph, 2014; Islam et al., 2007; Olaoye, Onilude, & Idowu, 2006), the protein content of the different bread samples gradually increased, from 9.8% to 12.9%, with increasing level of soy flour, as shown in Table 1. Also, some studies have shown that food proteins can influence the quality and functional properties of food products, so soy protein may improve GF bread quality (Gerrard, 2002).
| Fat content
It was found that the fat content of the soy bread samples increased with the increase in soybean flour from 0% to 15% (Table 1). This is due to the fact that the fat content of soy flour is higher in comparison to corn flour (Abioye et al., 2011; Akpapunam et al., 1997; Islam et al., 2007). Soybean is an edible oil source with about 20%–24% fat content (Reddy, 2004). Like other vegetable oils, it is rich in unsaturated fat (61% polyunsaturated fat and 24% monounsaturated fat). Also, soybean is rich in polyunsaturated fatty acids such as linoleic and linolenic acid, which are necessary for human health (Hegstad, 2008).
| Crude fiber
Consistent with other studies (Ayo et al., 2014; Farzana & Mohajan, 2015; Ndife, Abdulraheem, & Zakari, 2011), the crude fiber content improved from 0.29% to 0.38% as the soy flour content rose from 0% to 15%. Crude fiber includes the cellulose components; soy flour may contain a higher amount of this type of fiber than corn flour.
| Sensory characteristics of GF bread with different levels of soy flour
The effect of soy flour on sensory characteristics of soy bread samples (color, taste, flavor, texture, and overall acceptability) were measured by the panel judges and the results are presented in Table 2.
In this study, with regard to taste, texture, color, and overall acceptability, the sensory characteristic scores of the bread containing 15% soy flour were found to be the highest compared with those of the breads containing 0%, 5%, and 10% soy flour.
Taste is the most important factor affecting the acceptability of an edible product (Banureka & Mahendran, 2011; Farzana & Mohajan, 2015). Although not significant, there was an increase in the score for taste from 4.15 to 4.35 with increasing soy flour percentage.
The score for color of GF bread samples changed from 4.25 to 4.55. The highest score (4.55) was found for bread containing 15% soy flour. The score for color increased with the increase in the level of soy flour which was not significant. The color of the GF bread samples improved from creamy to brown. The darker color of GF bread samples with soy flour may be due to the presence of yellow pigment in the soybean flour and Maillard reaction during processing (Banureka & Mahendran, 2011;Olatidoye & Sobowale, 2011).
With the increase in substitution of soy flour in the GF bread samples, the crust texture score increased from 3.45 to 4.1. The bread containing 15% soy flour had the highest score (4.1) and the bread containing 5% soy flour had the lowest score (3.3). The score for crust texture improved with the increase in the level of soy flour, which was statistically significant (p = .001). It has been shown that the appearance of bread is an important sensory parameter (Hoseney, 1994). With regard to flavor, the GF bread containing 5% soy flour had the highest score (4.05) and the GF bread containing 15% soy flour had the lowest score (3.95). This may be due to the beany flavor of soy flour (Akubor & Ukwuru, 2003). Overall acceptability is one of the important factors in sensory evaluation (Banureka & Mahendran, 2011; Farzana & Mohajan, 2015). The bread containing 5% soy flour had the lowest overall acceptability (3.95 ± 0.82) and the highest overall acceptability was calculated for the bread containing 15% soy flour (4.45 ± 0.68). This difference was not statistically significant.
At the 15% level of soy flour substitution, the bread had higher scores for all the sensory characteristics except flavor. Some studies have shown that addition of 10% or 15% soy flour to other flour produce acceptable products (bread or biscuit) (Awasthi et al., 2012;Banureka & Mahendran, 2011;Farzana & Mohajan, 2015;Jimoh & Olatidoye, 2009). Thus, incorporation of soy flour more than 15% did not produce acceptable products.
Color, together with texture and aroma, contributes to consumer preference. It is influenced by the physicochemical parameters of the dough (Ahmad et al., 2014). GF breads often have low quality, undesirable taste and flavor, and poor crust and crumb characteristics (Thompson, 1999). Zarkadas et al. (2006) reported that, despite the increasing availability of GF foods in recent years, most CD patients have difficulty finding good-quality GF foods. Soybean flour has been used in bread in previous studies (Abioye et al., 2011; Akpapunam et al., 1997; Islam et al., 2007; Sanchez et al., 2004). Some authors have found that soy could improve the crumb, bread volume, and absorption properties of the bread (Moore, Schober, Dockery, & Arendt, 2004; Sanchez et al., 2004). A study published in 2013 found that visual color is directly related to acceptance and taste. GF breads usually tend to have a light crust color, so the darkening of the crust color due to soy addition is appropriate (Gujral & Rosell, 2004).
CONCLUSION
Adding soy flour can improve the quality and nutritional properties of wheat bread (Islam et al., 2007). The results of this study showed that adding 15% soy flour to the GF bread formulation improved the quality, sensory characteristics, and nutritional properties of the bread. Therefore, in order to prevent major CD complications, such as growth failure and weight loss, through a healthy diet, consumption of GF bread containing 15% soy flour could be beneficial. | 2018-04-03T03:08:09.106Z | 2016-08-01T00:00:00.000 | {
"year": 2016,
"sha1": "e1426a7de3726ff95a0d6419d708c8c4ab5ad92d",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/fsn3.411",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1426a7de3726ff95a0d6419d708c8c4ab5ad92d",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
268609906 | pes2o/s2orc | v3-fos-license | Impact on life expectancy was the most important information to clients when considering whether to take action for an overweight or obese dog
OBJECTIVE To determine dog owner preferences for information communicated during veterinarian-client obesity-related conversations within companion animal practice. SAMPLE Dog owners recruited using snowball sampling. METHODS A cross-sectional online questionnaire was distributed to dog owners. A discrete choice experiment was used to determine the relative importance, to participating dog owners, of information about selected weight-related attributes that would encourage them to pursue weight management for a dog when diagnosed as overweight by a veterinarian. RESULTS A total of 1,108 surveys were analyzed, with most participating dog owners residing in Canada. The most important weight-related attribute was life expectancy (relative importance, 28.56%), followed by the timeline for developing arthritis (19.24%), future quality of life (18.91%), change to cost of food (18.90%), and future mobility (14.34%). CLINICAL RELEVANCE Results suggest that dog owners may consider information relating to an extension of their dog’s life as the most important aspect of an obesity-related veterinary recommendation. By integrating dog owner preferences into discussions between clients and veterinary professionals about obesity, there is the potential to encourage more clients to engage in weight management efforts for their overweight or obese dog.
Given the implications for pet health and well-being, client communication regarding obesity is a professional responsibility for veterinary professionals.8 However, broaching the subject of obesity has been perceived as a sensitive and challenging task for veterinary professionals when engaging with clients who have pets with excess weight.9 A study10 exploring veterinary professionals' perceptions of pet weight-related communication in practice found that many did indicate that they may avoid an obesity-related conversation with some clients, and the majority acknowledged the difficulty of treating obesity in pets. It has been suggested that the perceived delicate nature of discussing pet weight may lead to hesitancy and challenges for veterinarians when attempting to address the issue, due in part to concerns about offending or alienating clients and potentially straining the client-veterinarian relationship.9 Participating veterinarians in
OBJECTIVE
To determine dog owner preferences for information communicated during veterinarian-client obesity-related conversations within companion animal practice.
SAMPLE
Dog owners recruited using snowball sampling.
METHODS
A cross-sectional online questionnaire was distributed to dog owners. A discrete choice experiment was used to determine the relative importance, to participating dog owners, of information about selected weight-related attributes that would encourage them to pursue weight management for a dog when diagnosed as overweight by a veterinarian.
a focus group study11 discussed a prevailing belief that many pet owners are reluctant to engage in weight conversations when their pet is overweight. Observational research in veterinary practices suggests that obesity-related conversations between veterinary professionals and clients are often brief, with limited weight management recommendations provided to clients.12 Furthermore, obesity advocates have suggested that clients often struggle to accept or act upon weight-related recommendations for their pets, resulting in a potential disconnect between veterinary recommendations and client adherence.9 Assessing clients' readiness to change in relation to managing their pet's weight has been proposed as a potential way to better understand and more effectively engage clients when veterinary professionals are addressing pet obesity and weight management.13 The stages of change include precontemplation, contemplation, preparation, action, and maintenance, and these stages are a conceptual framework for understanding clients' process when intentionally modifying established behaviors.14 Preliminary research suggests that a notable number of pet owners with an overweight or obese pet may be in the early stages of precontemplation or contemplation, where readiness to change is low.15 A recognition of clients' current stage of change related to managing their pet's obesity may provide veterinary professionals with greater insight into the most effective method for tailoring communication to their client in a way that encourages a client's progress toward the action and maintenance stages. Identifying what information from the veterinary team about a pet's overweight or obese condition may encourage clients to transition from an earlier stage of change into action is potentially a significant factor in clients' uptake of weight management recommendations.
Previous research in human medicine has demonstrated the positive impact of clinicians' understanding of patient preferences on patient outcomes.16 As a result, marketing tools like discrete choice methods (DCMs) that elucidate consumer preferences have gained traction in medical research and have recently been applied in veterinary research.17,18 Discrete choice methods are a collection of methods that can be used to quantify the underlying preferences that drive the choices respondents make. Discrete choice methods go beyond simple methods, such as ranking or rating methods, by forcing respondents to make insightful trade-off decisions, as they would in the real world, to uncover their true preferences more accurately. These quantified preferences can then be used to predict future choices for product or service options that are described but not yet available, which may assist in the development and design of new products or services.19 In the human medical field, DCMs have been used to identify preferences for treatment and weight management plans among individuals with severe and complicated obesity, informing the development of more appropriate strategies for effectively managing this condition.20 Similarly, in veterinary medicine, DCMs have been successfully employed to identify the key preferences that influence dog owners' choices when selecting antiparasitic products, thereby assisting veterinarians in making treatment recommendations that consider general preferences of pet owners.18 Additionally, DCMs have proved instrumental in evaluating the impact of factors such as cost, ease of administration, and drug importance on dog owners' relative preferences for antimicrobial treatments.17 Discrete choice methods offer other opportunities to explore client preferences in the delivery of veterinary care, including preferences for information that would assist a client that has an overweight or obese pet to take action.
This study aimed to apply DCMs to identify information from the veterinary team that is most important to dog owners in relation to an overweight or obese dog that would encourage them to take action in pursuing weight management.
Methods
A cross-sectional, questionnaire-based study of pet owners was conducted between February 9 and April 18, 2023, via an online survey platform (Sawtooth Software). The study protocol was reviewed and approved by the University of Guelph Research Ethics Board (REB#22-09-019). The CHERRIES21 and ESTIMATE22 checklists were utilized to ensure proper survey and discrete choice reporting, respectively.
Questionnaire design
A pet owner questionnaire was developed in Lighthouse Studio, version 9.15.9 (Sawtooth Software). The anonymous questionnaire was offered in both English and French, and participants provided consent by clicking "Agree" to participate. Participants who consented to participate were randomly assigned to 1 of 3 independent studies being conducted through the same recruitment process. Only participants who indicated ownership of at least 1 dog were randomized into the dog obesity study reported in this paper.
The questionnaire for the dog obesity study was divided into 3 sections. The first section gathered demographic information about participants, the second section was a discrete choice experiment (DCE) to identify the most important information for participating dog owners, and the final section elicited participants' prior experience with receiving an obesity diagnosis for a pet from a veterinarian.
Demographic information collected included preferred language (English, French), gender, and other demographic characteristics (Table 2). The DCE was developed to identify the information that would encourage a participant to pursue weight management for their dog when diagnosed as overweight by a veterinarian. Participants were presented with the following hypothetical scenario: "Imagine that you have taken your dog to the veterinarian for their annual wellness exam. The veterinarian mentions your dog is overweight and makes a recommendation to have your pet lose weight. You will be presented with 3 sets of information your veterinarian could include. Given the information presented by the veterinarian, what set of information would encourage you to act on your veterinarian's recommendation?" Each task block that followed the hypothetical scenario presented participants with 3 different sets of information (Figure 1). Each set was described using the same 5 dog weight-related attributes (types of weight-related information). Each weight-related attribute was further described by different levels (characteristics of attributes; Table 1).
Figure 1-Example of a discrete choice task. Participants were presented with 12 such tasks. Attributes are listed on the left and remained constant across tasks, while the levels presented in columns within each of the 3 sets varied across tasks to create diverse sets of information for participants to select from. The "none" option was available across all 12 tasks.
Table 1-Summary of attributes and levels included in the choice tasks in the present study, using a hypothetical scenario, to identify the information that would encourage a participant to pursue weight management for their dog when diagnosed as overweight by a veterinarian.
Following the first task block, the process was repeated 11 times, where the 3 sets presented to participants were systematically varied to create different combinations of information participants could consider. Participants then selected, from each of the 12 task blocks, the set of information that would most encourage them to pursue weight management for their dog.
Participants could also select a fourth "none" option if none of the 3 sets presented would encourage the participant to pursue weight management.
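To make this task structure concrete, the sketch below represents one choice task as three alternatives described by the same five attributes plus a "none" option. It is only a schematic illustration: the level labels for all attributes except change to cost of food are assumptions (the published levels are in Table 1), and the random assembly stands in for the efficient experimental design actually generated by Lighthouse Studio.

```python
# Simplified sketch of one DCE task: 3 alternatives described by the same 5 attributes,
# plus a "none" opt-out. Level labels other than the cost levels are ASSUMED for illustration;
# the real design was generated by Lighthouse Studio, not by random sampling.
import random

ATTRIBUTES = {
    "impact_on_life_expectancy": ["no change", "longer by 6 months", "longer by 2 years"],  # levels assumed
    "timeline_for_developing_arthritis": ["sooner", "no change", "delayed"],                # levels assumed
    "future_quality_of_life": ["no change", "very good", "excellent"],                      # levels assumed
    "future_mobility": ["reduced", "no change", "improved"],                                # levels assumed
    "change_to_cost_of_food": ["no change", "+$0.19/day", "+$1.60/day"],                    # from the paper
}

def build_task(n_alternatives=3, seed=None):
    rng = random.Random(seed)
    alternatives = [
        {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}
        for _ in range(n_alternatives)
    ]
    alternatives.append("none")  # opt-out option shown in every task
    return alternatives

for i, alternative in enumerate(build_task(seed=1), start=1):
    print(f"Set {i}:", alternative)
```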
The third section of the survey was a dichotomous question that asked participants whether any veterinarian had previously expressed concerns about their pet(s) being overweight (yes or no).
Lighthouse Studio was utilized to generate 300 versions of the discrete choice exercises. This approach aimed to achieve a high level of experimental design efficiency (Supplementary Material S1), allowing for the least number of questions to be presented to respondents while gaining information regarding choice.
Selection of attributes and levels for the DCE tasks
Five attributes related to dog weight management were identified, and appropriate levels were assigned to each attribute based on existing literature. In accordance with the criteria established by Norman et al,23 the development of attributes and levels followed a specific framework. Our primary objective was to ensure the plausibility of all levels and their combinations.23 Furthermore, we considered the appropriate range of variation within an attribute, ensuring that respondents had the opportunity to make informed trade-offs between the different levels while also avoiding an overly extensive spread that would result in participants disregarding options at the lower levels of specific attributes.23 Previous research has highlighted the correlation between weight and life span in dogs,6,24 providing a basis for the suggested levels relating to the impact on life expectancy attribute.
The inclusion of QOL stems from evidence that shows its decline in obese dogs and subsequent improvement after successful weight loss interventions.5 The attribute of future mobility was incorporated to highlight the adverse effects of excess body weight on a dog's ability to move freely.25 Arthritis was chosen as a relevant chronic condition to complement mobility measures and due to the well-established association between obesity and pet body condition as risk factors for arthritis, specifically in dogs.6 Change to cost of food was included as an attribute to reflect pet owners' willingness to pay for weight management. Further, cost per day was incorporated to compare different presentations of price. The price differences were determined using a commercial pet food finder tool.26 A commercially available weight management diet (Beneful Healthy Weight with Real Chicken Dry Dog Food; Purina) and veterinary therapeutic weight loss diet (Pro Plan Veterinary Diets Dry Food OM Select Blend Overweight Management Dry Canine Formula; Purina) were selected to determine appropriate levels for the change to cost of food (ie, + $0.19/d and + $1.60/d, respectively) based on a month's supply for a medium-sized dog (40 lb; body condition score, 7/9) on a weight loss plan, relative to a selected base dog food available at grocery stores (Alpo Cookout Classics Adult Dry Dog Food; Purina).
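As a quick arithmetic illustration of how these daily increments translate into a feeding budget, a 30-day month is assumed below; the figures are illustrative only and are not reported in the paper.

```python
# Illustrative arithmetic: monthly cost impact of the two diet-change levels from the paper
# (+$0.19/day and +$1.60/day), assuming a 30-day month.
daily_increments = {"weight management diet": 0.19, "veterinary therapeutic diet": 1.60}
for diet, per_day in daily_increments.items():
    print(f"{diet}: +${per_day:.2f}/day -> +${per_day * 30:.2f} per 30-day month")
# -> roughly +$5.70/month vs. +$48.00/month relative to the base grocery-store diet
```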
Participant recruitment
Snowball sampling via social media platforms (Facebook, Instagram, Twitter, LinkedIn) was used to recruit pet owner participants. A post including the link to the survey and a statement encouraging sharing of the link was posted to 4 personal social media pages belonging to research team members and colleagues. Thirty-six public and private pet-oriented organizations, including humane societies, veterinary clinics, veterinary colleges, pet stores, a commercial pet-food company, pet-oriented social media influencers and pet-related newsletters, blogs, and clubs, were also contacted for permission to distribute the survey link, and 15 participated in survey distribution.
Throughout recruitment, the study was presented as an investigation into veterinary client preferences for information during veterinarian-client-patient interactions. Individuals eligible to participate were required to be at least 18 years of age, own a pet, and possess English and/or French language proficiency.
The opportunity to win a CA$50 Amazon Gift Card was offered as an incentive (odds of winning, 1:100). At the end of the survey, participants were forwarded to a second online survey, housed in Qualtrics, to enter the random draw for the incentive.
Statistical analysis
Respondents that did not report seeing a veterinarian in the past year, provided nonsensical responses, or selected "none" for all options were excluded from analysis. To assess data quality, a time cutoff of 40% of the median total time (3.38 minutes) taken by overall respondents to complete the survey was determined. Respondents who fell below this time cutoff were excluded from analysis.27 In addition, an individual-level root likelihood (RLH), measuring the internal consistency among respondents, was used to assess respondents who appeared to provide random or less thoughtful responses.28 Subsequently, a minimum RLH threshold (0.35) was determined by identifying responses that fell below the 95th percentile cutoff based on RLH scores derived from randomly generated data. Descriptive statistics were calculated for demographic data in Stata, version 16 (StataCorp). Mean, median, SD, and range were calculated for continuous variables, and frequencies were calculated for categorical variables. All analysis of choice data was performed in Lighthouse Studio, version 9.15.9 (Sawtooth Software), and Excel, version 16.49 (Microsoft Corp).
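A minimal sketch of how the two data-quality exclusions described above (the completion-time cutoff at 40% of the median time and the minimum RLH of 0.35) could be applied to a response table is shown below; the column names and example values are hypothetical, and the actual screening was done in Lighthouse Studio and Stata.

```python
# Hypothetical sketch of the data-quality exclusions described above.
# 'df' is assumed to hold one row per respondent with columns 'total_time_minutes'
# and 'rlh' (individual-level root likelihood).
import pandas as pd

def apply_quality_filters(df: pd.DataFrame, rlh_threshold: float = 0.35) -> pd.DataFrame:
    time_cutoff = 0.40 * df["total_time_minutes"].median()  # 40% of median completion time
    keep = (df["total_time_minutes"] >= time_cutoff) & (df["rlh"] >= rlh_threshold)
    return df.loc[keep]

# Example with made-up values:
df = pd.DataFrame({"total_time_minutes": [3.4, 0.5, 5.2, 4.0], "rlh": [0.62, 0.55, 0.33, 0.70]})
print(apply_quality_filters(df))  # drops the too-fast respondent and the low-RLH respondent
```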
For analysis of DCE data, a Markov chain Monte Carlo hierarchical Bayesian model was developed to estimate parameters in the form of part-worth utilities (values representing the relative preference for levels within each attribute). This model used effects-coding to allow for the comparison of the effect of each level with the average effect of all levels within an attribute. Variables were assessed as main effects. Model parameters (ie, part-worth utilities) were estimated using 10,000 burn-in iterations followed by 10,000 draws, which were averaged to calculate the parameter estimates, producing the final model. Part-worth utilities were probability scaled (values were rescaled to sum to 100) to facilitate ease of interpretation and enhance the intuitive understanding of relative preference strength.
Root likelihood and percent certainty were used to assess model fit (Supplementary Material S1).
From the results of this model, attribute importances (values representing the relative importance of each attribute) were calculated as the range of part-worth utilities across the levels of an attribute divided by the sum of these ranges over all attributes, resulting in percentages that add to 100.
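To illustrate this importance calculation, the sketch below applies the range-over-total-ranges formula to made-up part-worth utilities; the values are not the study's estimates, and the attribute list is abbreviated from Table 1.

```python
# Minimal sketch of attribute-importance calculation from part-worth utilities.
# Utility values are made up for illustration; they are NOT the study's estimates.
partworths = {
    "impact_on_life_expectancy": [-1.2, 0.1, 1.1],
    "timeline_for_developing_arthritis": [-0.7, 0.0, 0.7],
    "future_quality_of_life": [-0.6, 0.0, 0.6],
    "change_to_cost_of_food": [-0.6, 0.1, 0.5],
    "future_mobility": [-0.4, 0.0, 0.4],
}

ranges = {attr: max(u) - min(u) for attr, u in partworths.items()}
total = sum(ranges.values())
importances = {attr: 100 * r / total for attr, r in ranges.items()}  # sums to 100
for attr, importance in importances.items():
    print(f"{attr}: {importance:.1f}%")
```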
Participant demographics
A total of 1,591 participants met inclusion criteria and completed the study relating to dog obesity. Following exclusions (Figure 2), 1,108 questionnaires were included in the final analysis. Of these 1,108 participants, the majority were residents of Canada (61.3% [679/1,108]) and identified as women (63.6% [705/1,108]), and the participants had a mean age of 37.6 years (SD, 13.8; median, 33 years; range, 18 to 88 years; Table 2). About half of participants indicated they had previously experienced being told by a veterinarian that their pet was overweight (50.2% [556/1,108]), and the remainder indicated never having been told their pet was overweight by a veterinarian (49.8% [552/1,108]).
DCE analysis and relative attribute importances
The relative importance of each attribute related to veterinarian-provided obesity-related information, in order of most important to least important, was impact on life expectancy (28.56%), timeline for developing arthritis (19.24%), future QOL (18.91%), change to cost of food (18.90%), and future mobility (14.34%).Relative preferences of the levels within each attribute varied (Figure 3).
Discussion
The prevalence of obesity in dogs continues to rise1 despite the efforts of the veterinary profession, and it has been suggested that veterinary professionals experience apprehension about broaching the topic of weight.9 In addition, little is known about client preferences for receiving weight-related information or what information may encourage them to pursue weight management for their pet. The present study found that participating dog owners prioritized the impact on life expectancy as most important among the attributes of obesity-related information presented. These results emphasize the need for veterinarians to closely consider including a discussion with clients about the potential impact of obesity on a dog's life span in both preventive and weight management conversations.
In a recent study12 examining pet weight-related communication between veterinary professionals and clients, it was observed that clients were provided with information regarding the importance of weight management for their pet's overall health and well-being in less than a third of the obesity-related interactions studied. In particular, the study found the impact of obesity on a pet's life expectancy was mentioned by veterinarians in only a few of the obesity-related conversations, highlighting an opportunity for veterinary teams to raise this potential benefit of weight management when communicating with clients. By integrating the information clients value into these discussions and emphasizing the positive impact of reaching or maintaining an ideal body weight on life span, veterinary professionals may more effectively support clients in taking action to prevent or address their pet being overweight.
The timeline for developing arthritis was identified as the second most important attribute in the present study. One study29 has found the most crucial aspect of effective osteoarthritis management and prevention lies in achieving and sustaining a lean body condition through appropriate nutrition and feeding practices. This lifelong study investigating diet restriction and aging in dogs revealed that maintaining an ideal body condition score through restricted diets significantly delayed the onset of late-life diseases, particularly osteoarthritis. Veterinary professionals can leverage these findings in both their preventive weight conversations and obesity-related conversations with clients, emphasizing the direct link between weight management and joint health. By doing so, veterinary professionals may have the opportunity to proactively improve the overall well-being and mitigate the risks of obesity-related conditions for dogs.
Conversely, the attribute of future mobility was found to be the least important to dog owners relative to other tested attributes. The relatively higher preference for the timeline for developing arthritis over future mobility as a significant attribute is interesting and could potentially be explained in part by the ambiguity surrounding the concept of mobility. Although mobility and arthritis are indicators of a dog's vitality, the term arthritis likely captures the potential pain experienced by dogs in terms of their future health and well-being. In addition, arthritis is also a common disease in humans, and participants are likely to be familiar with its chronic nature and associated adverse effects, including pain. It is therefore possible these participants had an elevated concern for their dogs' suffering or discomfort related to arthritis as a result of attributing a human-like experience of arthritis to their pet. Observational research indicates that when veterinary professionals do provide clients with information about the benefits of weight loss for their pet, mobility and arthritis are the most frequently mentioned reasons for a pet to lose weight.12 Continuing to educate clients about the link between arthritis and mobility, as well as the impact of even modest weight loss on improved mobility for dogs,30 may support clients in noticing positive, gradual changes in their pet early on in weight management programs, which may in turn promote long-term adherence.
Future QOL emerged as the third most important attribute in comparison to the other test attributes considered in the present study. Participants expressed a preference for veterinary recommendations that were linked to a future QOL ranging from very good to excellent, compared to no change to very good. Despite this, participants exhibited a notable preference for weight-related information concerning the timeline for developing arthritis rather than future QOL as a whole. This relatively lower importance for future QOL could potentially be attributed to the inherent challenges associated with the definition of QOL, given its multifaceted and subjective nature.31 While many individuals likely possess a general understanding of QOL, the lack of a standardized measurement methodology and consistent definition may introduce additional complexities for clients in grasping the term's true essence.31 This may suggest that veterinary professionals need to communicate more tangible measures relating to QOL (eg, mobility, change in activities, attitude) that are observable and important to an individual client. The use of QOL scales is commonly employed in end-of-life assessments for dogs, providing guidelines for clients and veterinarians to monitor an animal's welfare while implementing palliative care or hospice plans for dogs with life-limiting diseases.32 Veterinarians may consider adopting a similar approach when dealing with clients who have overweight or obese dogs, assisting them in assessing the impact of their pet's weight on their overall QOL. Otherwise, owners may attribute their dog's pain and mobility issues to factors like aging or other causes rather than excess weight. Developing a plan to monitor the impact of a dog's weight on their QOL provides an opportunity for the veterinary team to work collaboratively with clients to improve their dog's well-being. This also underscores an opportunity for veterinary professionals to proactively initiate preventive weight conversations during a pet's early life stages, while also educating and empowering clients to assess their dog's QOL and teaching them effective techniques to identify factors that signify early changes in QOL.
Change to the cost of food was identified as participating dog owners' fourth most important attribute of the 5 attributes explored in this study. Within the change to the cost of food attribute, participants preferred diet recommendations with a cost increase of $0.19/d compared to baseline, followed by a preference for no change in price. These results suggested that participating dog owners may not have considered cost the most important factor when deciding whether to address their pet's weight and that they are open to at least a certain amount of cost increase in addressing the weight of their pet. It is also possible that the cost-per-day presentation is not typical of how owners think of the cost of feeding their pet, in contrast to the total up-front cost for a single purchase of a diet. Preferences identified by DCE studies are not intended to be interpreted outside the context of the attributes and levels used in an individual study. Therefore, further research is needed that explores the role of a pet owner's willingness to pay in taking action to address their pet's overweight or obese condition. Specifically, future research on pet owners' cost tolerance should include examining their willingness to pay within the context of the pet owner's understanding of the potential benefits gained from weight management for their pet.
A lack of communication regarding the benefits of therapeutic weight-loss diets appears evident among pet owners, as a recent survey1 discovered that the use of therapeutic weight loss diets ranked among the lowest of several options for pet owners in terms of both preferred methods for pet weight loss and perceived effectiveness. However, veterinarians rely on therapeutic weight loss diets as the main evidence-based treatment to assist overweight pets in achieving a healthy body composition.1 In addition, many studies suggest that successful weight loss is unlikely with most over-the-counter (eg, grocery store) commercial diets,33 and if feeding is too restricted, this could result in nutritional deficiencies for dogs.34 Therefore, it is crucial for veterinary professionals to educate their clients during obesity-related veterinary-client conversations about the effectiveness of therapeutic diets in reducing weight for dogs with obesity, and the associated benefits of an increased life span and delayed onset of arthritis. Veterinary teams should also consider the incorporation of shared decision-making with clients when developing weight management plans or recommending dietary changes, as participants in recent qualitative research associated a lack of options when discussing nutrition with veterinarians' financial motivations for making their recommendation.35 Openly discussing options with clients and the differences in cost and value for each may help veterinary teams mitigate clients' perception of a financial motive and respect the clients' autonomy to make an informed decision for their pet.
As with all studies utilizing DCE methodology, it is important to interpret the results of this research within the context of the attributes and levels chosen for the study. While the selection of the attributes and levels was based on prior research, the inclusion of only the most crucial and relevant attributes was aimed at reducing respondent burden during the discrete choice tasks by presenting manageable sets of information to choose from. It should also be noted that the preferences for information determined in this study can only be considered relative to the other presented attributes and not extrapolated to suggest that the attributes most preferred by dog owners here would maintain the same importance if compared to other possible attributes not considered as part of the present study. Furthermore, findings of the present study are not intended to convey whether an attribute should or should not be discussed with clients. Although findings identified the relative importance of the attributes studied, all the attributes included in the present study are worthy of including in weight management conversations between veterinary professionals and clients, and some, such as the cost of care, are instrumental to achieving informed consent.36 Another limitation of this study lies in the hypothetical nature of the conjoint scenarios, which might not fully capture real-life decision-making processes for dog owners. In addition, the quantitative nature of the survey does not provide information about the motivations or reasoning behind participants' responses, a limitation that could be explored through future qualitative research involving focus groups or interviews with pet owners. In DCMs, using a mixed-methods approach by incorporating qualitative research methods is recommended and allows for gathering essential contextual data alongside quantitative preference data.37 Although the present study captured a large number of participants speaking both English and French, a potential selection bias stemming from pet owners' willingness and ability to participate in an online survey may limit representativeness of the findings to the broader dog-owning population, particularly outside Canada and the US. Furthermore, other factors that influence clients' decisions to pursue weight management warrant exploration to best support the veterinary profession in addressing pet obesity with clients.
As a result of the present study, veterinary professionals should consider incorporating information about potential life span extension and delayed onset of arthritis into discussions with clients regarding dog obesity. The inclusion of communicating information to clients about other attributes related to weight management for dogs with obesity such as impact on QOL, cost, and mobility may also aid veterinary teams in encouraging clients with an overweight or obese dog to take action to address their pet's weight. By recognizing and understanding what clients value in terms of benefits for their dog, veterinary professionals can further explore and tailor their communication with clients to support clients to manage their pet's weight. Furthermore, veterinary professionals should focus on enhancing client education from early in a dog's life about the benefits of a healthy weight, particularly the potential impact on life expectancy, as it may support clients' motivation to maintain an ideal body weight for their pet.
Figure 2-Flowchart of reasons for exclusion of questionnaire respondents.
Figure 3-Probability-scaled part-worth utilities of the main effects model, arranged from attributes of greatest to least preference according to respondents. A higher part-worth utility value indicates a comparatively stronger preference relative to other test values within each attribute.
Table 2-Demographic characteristics of questionnaire respondents. | 2024-03-23T06:18:55.836Z | 2024-03-19T00:00:00.000 | {
"year": 2024,
"sha1": "c9dae0288d13fd8d40fc036ebff861cde6067ef9",
"oa_license": "CCBYNC",
"oa_url": "https://avmajournals.avma.org/downloadpdf/view/journals/javma/aop/javma.23.12.0697/javma.23.12.0697.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "dd409359f8aec52ab36b03567ca546639d18faed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248231602 | pes2o/s2orc | v3-fos-license | Hospital frailty risk score predicts adverse events in spine surgery
The Hospital Frailty Risk Score (HFRS) is derived from routinely collected data and validated as a geriatric risk stratification tool. This study aimed to evaluate the utility of the HFRS as a predictor for postoperative adverse events in spine surgery. In this retrospective analysis of 2042 patients undergoing spine surgery at a university spine center between 2011 and 2019, the HFRS was calculated for each patient. Multivariable logistic regression models were used to assess the relationship between the HFRS and postoperative adverse events. Adverse events were compared between patients with high or low frailty risk. Patients with intermediate or high frailty risk showed a higher rate of reoperation (19.7% vs. 12.2%, p < 0.01), surgical site infection (3.4% vs. 0.4%, p < 0.001), internal complications (4.1% vs. 1.1%, p < 0.01), Clavien–Dindo IV complications (8.8% vs. 3.4%, p < 0.001) and transfusion (10.9% vs. 1.5%, p < 0.001). Multivariable logistic regression analyses revealed a high HFRS as an independent risk factor for reoperation [odds ratio (OR) = 1.1; 95% confidence interval (CI) 1.0–1.2], transfusion (OR = 1.3; 95% CI 1.2–1.4), internal complications (OR = 1.2; 95% CI 1.1–1.3), surgical site infections (OR = 1.3; 95% CI 1.2–1.5) and other complications (OR = 1.3; 95% CI 1.2–1.4). The HFRS can predict adverse events and is an easy instrument, fed from routine hospital data. By identifying risk patients at an early stage, the individual patient risk could be minimized, which leads to fewer complications and lower costs. Level III – retrospective cohort study. The study was approved by the local ethics committee (20-1821-104) of the University of Regensburg in February 2020.
Introduction
Due to the ageing of the world population, the number of degenerative spinal diseases requiring surgical intervention grows steadily. Geriatric patients demand rapid therapeutic success and preservation of their quality of life [1,2]. Along with the surgical advances, spine surgery in older patients has increased over time [3,4]. As geriatric patients have higher perioperative complication rates, it is important to have an estimation of the patient's individual risk, due to the substantial biological differences in people of the same age [5][6][7]. Internal concomitant diseases such as cerebrovascular diseases, heart-lung diseases, and kidney problems, which predominantly occur in geriatric patients, can also have a negative influence on the postoperative results [8]. Besides the quality aspect, poor outcomes and adverse events with prolonged hospitalization are also a socio-economic burden for public health systems all over the world [9]. Thus, practitioners are driven toward further outcome optimization and risk stratification for socio-economic reasons as well. Reducing complications means perioperative recognition and optimization of modifiable risk factors [10,11].
Therefore, frailty is becoming increasingly important in the surgical field to identify people at high risk of poor outcomes [12][13][14]. Physical frailty has been defined in 2013 by members of a consensus group of delegates from 6 major international, European, and US societies as: "A medical syndrome with multiple causes and contributors that is characterized by diminished strength, endurance, and reduced physiologic function that increases an individual's vulnerability for developing increased dependency and/or death" [15]. Many tools have been developed to stratify risk or frailty. The Hospital Frailty Risk Score (HFRS), published and validated in 2018 by Gilbert et al., acts as well or often even better than existing tools for risk stratification or frailty assessment. The great advantage of the HFRS to other systems is that it can be derived at any time from the existing data of the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) codes, and thus everywhere where the ICD-10 coding system is used the HFRS can be applied without additional data acquisition. In addition, the HFRS can be implemented in most hospital information systems at low cost, avoiding the interoperator variability and implementation burden associated with manual scores [16].
A recent review found 8 studies showing frailty to be associated with a higher risk of morbidity and mortality in patients receiving spine surgery [17]. Only one of them used the HFRS, but without analyzing adverse events in detail [18]. Our study group has already shown the advantage of using the HFRS to predict adverse outcomes in hip and knee arthroplasty, but there is not enough information regarding its value in predicting adverse outcomes in spine surgery [11,19].
The aim of our study was to evaluate the utility of the HFRS as a predictor for postoperative adverse events after spine surgery.
Study design and study population
In this retrospective study, a dataset derived from the department's spine surgery registry and the hospital information system was used. The study was approved by the local ethics committee (20-1821-104) of the University of Regensburg. All methods were carried out in accordance with the ethical standards of the Declaration of Helsinki 1975. As this is a retrospective study, the need for informed consent was waived by the ethics committee of the University of Regensburg. From the database of our high-volume orthopedic university department with a dedicated spinal unit, all patients who underwent lumbar spine surgery (fusion/decompression) for degenerative reasons between 2011 and 2019 were included. Exclusion criteria were former spine surgery, infection, trauma, and tumors, as well as incomplete data files.
A power calculation was performed for the primary end point of reoperation within 90 days after spine surgery. As revision procedures, all surgeries requiring anesthesia and performed in the OR, such as those for deep wound infection, screw loosening or displacement, and cerebrospinal fluid (CSF) leak, were included. The hypothesis was tested with a significance level of 5%. The expected difference in complications was set conservatively to 7%, referring to a previous study [11]. To achieve a power of 80% using a 2-sample chi-square test (nQuery Advisor 7.0, Statistical Solutions Ltd, Cork, Ireland), a sample size of 620 in the low-risk group and of 124 in the high-risk group, assuming a 5:1 ratio, was calculated. Complications and transfusion were set as secondary end points. As in former studies, complications were categorized into surgical (surgical site infection), internal (myocardial infarction, acute heart failure, cardiac arrhythmias, pneumonia, renal failure, electrolyte imbalance), and other complications (collapse, thrombosis, pulmonary embolism, urinary tract infection, delirium, stroke) [11].
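The sample-size calculation was done with nQuery; a rough Python equivalent is sketched below. The baseline complication rate p1 is an assumption (the paper specifies only a conservatively expected 7% absolute difference), so the output will not necessarily reproduce the reported 620/124 split.

```python
# Hedged sketch of a two-group sample-size calculation for a difference in proportions
# (alpha = 0.05, power = 0.80, 5:1 allocation ratio). The baseline rate p1 is an ASSUMPTION,
# so this does not necessarily reproduce the paper's 620 vs. 124 result.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p1 = 0.05            # assumed baseline complication rate in the low-risk group
p2 = p1 + 0.07       # expected rate in the frail group (7% absolute difference)
h = proportion_effectsize(p2, p1)          # Cohen's h effect size

analysis = NormalIndPower()
n_frail = analysis.solve_power(effect_size=h, alpha=0.05, power=0.80, ratio=5.0,
                               alternative="two-sided")
print(f"frail group n ≈ {n_frail:.0f}, low-risk group n ≈ {5 * n_frail:.0f}")
```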
The Clavien-Dindo classification was also used to categorize complications; it divides complications into 5 groups based on the therapy needed for correction [20]. A grade I complication is any deviation from the normal postoperative course without the need for drug treatment or surgical, endoscopic, or radiological intervention. Grade II complications require specific pharmacological treatment. Grade III complications result in the necessity of surgical, endoscopic, or radiological intervention. Grade IV complications are defined as life-threatening events that require treatment in intensive care units. When a patient dies, the complication is graded V [20].
Data collection
Principal and secondary diagnostic codes, including the corresponding ICD-10 codes, were entered by professional clinical coders in the hospital information system (ORBIS; Agfa Healthcare) and then extracted and double-checked by physicians using the information in the medical records.
Other clinical data collected from the register were age, gender, length of stay, transfusion, transfer to the intensive care unit, reoperation, readmission, and complications.
Calculation of the HFRS
The HFRS was calculated retrospectively with the help of the ICD-10 codes that were entered at admission and from previous stays. The HFRS derives from 109 ICD-10 codes that were identified as characteristic of a cluster of frail individuals. Different points were awarded to each code and summed up to a maximum possible score of 173.2 points, depending on how strongly each ICD-10 code predicted membership in the cluster of frail patients. Gilbert et al. classified frailty into three risk categories: low HFRS below 5 points, intermediate HFRS between 5 and 15 points, and high HFRS above 15 points. Weighting factors and ICD-10 codes for the HFRS according to the literature are provided in the appendix (Appendix 1) [16].
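A minimal sketch of this scoring logic is given below. The example codes and weights are illustrative placeholders only; the actual HFRS uses the 109 codes and point values published by Gilbert et al., while the risk-category cut-offs (<5, 5-15, >15) are taken from the text above.

```python
# Minimal sketch of deriving an HFRS-style score from a patient's ICD-10 codes.
# The weights below are ILLUSTRATIVE placeholders; the real score uses the 109 codes
# and point values published by Gilbert et al. (maximum 173.2 points).
HFRS_WEIGHTS = {
    "F00": 7.1,   # dementia in Alzheimer disease (weight assumed for illustration)
    "W19": 3.2,   # unspecified fall (weight assumed for illustration)
    "L89": 2.7,   # pressure ulcer (weight assumed for illustration)
}

def hospital_frailty_risk_score(icd10_codes):
    # Each qualifying code (counted once per patient) contributes its weight.
    return sum(HFRS_WEIGHTS.get(code, 0.0) for code in set(icd10_codes))

def frailty_category(score):
    if score < 5:
        return "low"
    elif score <= 15:
        return "intermediate"
    return "high"

codes = ["F00", "L89", "I10"]   # I10 (hypertension) carries no frailty weight here
score = hospital_frailty_risk_score(codes)
print(round(score, 1), frailty_category(score))  # 9.8 -> intermediate
```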
Statistics
Continuous data were reported as mean (standard deviation). A two-sided t-test was applied for group comparisons. Categorical data were reported as absolute and relative frequencies, which were compared between groups by chi-square tests. The hypothesis of the study was tested with a significance level of 5%. To assess whether the HFRS is a significant predictor of reoperation, readmission, and complications while controlling for demographic variables known to be associated with adverse surgical outcomes, such as age, sex, and the American Society of Anesthesiologists (ASA) classification, a multivariable logistic regression analysis was performed [21]. For the statistical analysis, IBM SPSS Statistics 25 (SPSS Inc, Chicago, IL) was used.
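For readers who want the shape of the adjusted model, a hedged sketch in Python is shown below; the analysis in the paper was performed in SPSS, and the column names here are hypothetical placeholders for the registry variables.

```python
# Hedged sketch of the multivariable logistic regression: an adverse event as outcome,
# HFRS as predictor, adjusted for age, sex, and ASA class. Column names are hypothetical;
# 'df' is assumed to hold one row per patient. The study itself used SPSS, not Python.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_adjusted_model(df: pd.DataFrame, outcome: str = "reoperation"):
    model = smf.logit(f"{outcome} ~ hfrs + age + C(sex) + C(asa)", data=df).fit(disp=False)
    odds_ratios = np.exp(model.params)    # OR per 1-point HFRS increase, etc.
    conf_int = np.exp(model.conf_int())   # 95% confidence intervals on the OR scale
    return odds_ratios, conf_int
```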
Results
During the study period from January 2011 to December 2019, 2042 patients who underwent spine surgery were identified. In Table 1, the demographic characteristics of the study group are presented. The mean HFRS in the study group was 1.5 ± 2.2. Of the patients, 93% (1895/2042) were categorized as low risk (HFRS < 5), 7.1% (145/2042) as intermediate risk (HFRS 5-15), and 0.1% (2/2042) as high risk (HFRS > 15). In Fig. 1, the HFRS distribution in the study group is shown. The intermediate and high-risk groups were pooled for further analysis, due to the limited number of high-risk patients in the study group. With increasing HFRS, the number of all measured adverse events increased (Figs. 2 and 3). The ratio of fusion to non-fusion procedures was 64% to 36%. All fusions were performed from a posterior approach (TLIF or PLIF). There was no significant difference between these 2 groups regarding the examined parameters.
The overall revision surgery rate in all groups within 90 days after spine surgery was 12.2% (260/2042). In the intermediate/high frailty risk group the risk of revision surgery within 90 days was 19.7%, that means 7.5% higher than in the low risk of frailty group (12.2%).
Regarding the Clavien-Dindo grade IV complications, there was a complication rate of 3.8% (78/2042) in total. The complication rate in the intermediate/high frailty risk group was 8.8%, that means 5.4% higher than in the low risk of frailty group (3.4%).
Furthermore, the overall transfusion rate was 2.2% (45/2042). In the intermediate/high frailty risk group, the transfusion rate was 10.9%, that means 9.4% higher than in the low risk of frailty group (1.5%).
Internal complications in total were 1.3% (27/2042). The internal complications were 4.1% and thus 3% higher in the intermediate/high frailty risk group than in the low risk of frailty group (1.1%).
Taking the occurrence of other complications (thromboembolisms, apoplexy, delirium, syncope, and collapse) into account, the total rate was 2.6% (53/2042). The complication rate in the intermediate/high frailty risk group was 12.9%, 11.1% higher than in the low risk of frailty group (1.8%).
In total, the surgical site infection rate was 0.6% (13/2042). The rate was 3.0% higher in the intermediate/high frailty risk group (3.4%) than in the low risk of frailty group (0.4%) (Table 2). In addition, the surgical time (minutes) (148 ± 132 vs. 118 ± 107; p = 0.001) and length of stay (LOS, days) (13 ± 10 vs. 8 ± 6; p < 0.001) were significantly longer in the intermediate/high frailty risk group than in the low risk of frailty group.
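As an approximate cross-check of this comparison, the 2x2 table below is back-calculated from the reported rates (3.4% of 147 intermediate/high-risk patients and 0.4% of 1,895 low-risk patients, 13 infections in total), so the counts and the resulting statistics are approximate rather than the study's own figures.

```python
# Approximate 2x2 check of the surgical site infection comparison.
# Counts are back-calculated from the reported percentages and are therefore approximate.
from scipy.stats import chi2_contingency

table = [[5, 142],    # intermediate/high frailty risk: infected vs. not infected (approx.)
         [8, 1887]]   # low frailty risk: infected vs. not infected (approx.)
chi2, p, dof, expected = chi2_contingency(table)
crude_or = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"chi2 = {chi2:.1f}, p = {p:.1e}, crude OR ≈ {crude_or:.1f}")
```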
Transfusion
The HFRS was an independent risk factor for transfusion (OR = 1.29; 95% CI 1.
Gender
Gender showed no association with any of the parameters measured.
Discussion
This study aims to validate the HFRS with regard to adverse events after primary spine surgery. To reduce adverse events and the associated treatment costs, several risk assessment tools focusing on comorbidities have been introduced and evaluated in recent years [10,11]. Ideally, these tools allow doctors and patients to carry out a risk assessment in advance of an operation in order to understand the risks associated with the operation and to facilitate obtaining informed consent.
Recently, the concept of frailty has become increasingly important in terms of risk stratification and outcome prediction [21,22]. Although other studies have investigated the concept of frailty as a predictor of surgical outcome in patients undergoing spine surgery, to the best of our knowledge, the current study is the first to investigate the relation of the HFRS to the Clavien-Dindo classification, internal complications (cardio-pulmonary, thrombosis, transfusion, etc.), other complications (thromboembolisms, apoplexy, delirium, syncope, and collapse), and surgical site infections in spine surgery [23].
Frailty is defined as a multidimensional geriatric syndrome that can be triggered by only minor disruptive factors and leads to sudden changes in the state of health [15]. The loss of physical and mental reserves is typical [24]. In the literature, poor outcomes after surgery and higher rates of adverse postoperative events are associated with increased frailty [11]. In 2018, Gilbert et al. published the HFRS based on the idea of cumulative deficits in frailty. They intended to use routinely collected administrative data to develop a screening tool for frailty that can be used anywhere at low cost. In its validation study, the HFRS showed good agreement with established frailty scales (i.e., the Fried phenotype and Rockwood's frailty index). The goal of identifying patients at higher risk for mortality, longer length of stay, and readmission was achieved.
In our study, the patients were classified as described by Gilbert et al. and divided into 2 cohorts: a cohort with a low risk of frailty and a cohort with an intermediate or high risk of frailty. In the original study, the mean age was 84 years and the mean HFRS was 9 [16]. In our orthopedic setting, with mostly elective surgery, the mean HFRS was 1.5 and the mean age was 60; 7.2% of the patients were in the intermediate or high risk of frailty group. Despite these different characteristics and a younger patient cohort, a group with a higher risk of adverse outcomes could be identified. In comparison with another study by our group including patients with hip and knee arthroplasty (mean HFRS 0.9, 4.7% intermediate or high risk), the spine cohort had a higher mean HFRS [11]. The reason might be that, despite the elective character of orthopedic operations, some spinal operations are more urgent than hip or knee arthroplasty and are thus performed on patients in worse condition.
In 2020, Hannah et al. published a correlation of the HFRS with age, gender, ASA classification, Elixhauser index, revision surgery, fusion rate, median number of segments fused, estimated blood loss, and length of surgery.
In our study, the rate of revision surgery within 90 days was 1.6× higher for patients with an intermediate/high risk of frailty. Hannah et al. also observed good prediction of surgical complications, but only a weak effect on 30-day readmission. The explanation might be that in patients with a higher risk of frailty, surgical complications occur within the initial hospital stay and revision surgery is performed immediately, without discharge in between. This explanation is supported by our finding that the LOS in the intermediate/high frailty risk group is significantly longer. The multivariable logistic regression showed an independent association of HFRS, surgical time, and ASA classification with the revision surgery rate.
ASA classification and long surgical time also seem to be better predictive parameters than the HFRS for complications requiring an ICU stay (Clavien-Dindo IV). Although the high and intermediate risk of frailty group had a 2.5× higher risk of Clavien-Dindo IV complications, there was no association in the multivariable logistic regression analysis. Hannah et al., in comparison, described an improved accuracy of predicting ICU stays in the logistic regression analysis for the HFRS [18]. The reason might be the rate of 11.7% of higher and intermediate risk of frailty patients in their study, compared to 4.7% in our cohort. Different studies on knee and hip arthroplasty confirmed our results: there is, on the one hand, a higher risk for an ICU stay, but no independent association with the HFRS [11]. Although, with a different frailty scoring system, the modified frailty index (mFI) based on the Canadian Study of Health and Ageing Frailty Index, the multivariable analysis showed that a preoperative mFI and an ASA classification of ≥ III were associated with a significantly increased risk of Clavien IV complications and death, the HFRS might be less suitable for prediction of ICU admission as a potentially life-threatening outcome [23]. Bruno et al. also confirmed in 2019 that there is only a limited predictive value of the HFRS in ICU patients [25].
In the high and intermediate risk of frailty group, the transfusion rate was 7.2× higher; an independent association could only be seen for the ASA classification, but not for the HFRS. Identifying these patients at an early stage seems to be important. Recent studies have shown that a patient blood management strategy 4 weeks before surgery is effective in decreasing the rate of transfusion in at-risk patients [26].
The most important finding of this study is that the HFRS showed an independent association with non-life-threatening complications that nonetheless require treatment, such as cardio-pulmonary or renal complications, thromboembolisms, apoplexy, delirium, syncope, and collapse. In comparison, there was no such association for the ASA classification. Although ICD-10 codes for cardiovascular and pulmonary diseases are not included in the HFRS, the rate of cardio-pulmonary complications was 3.7× higher, and the rate of thromboembolisms, apoplexy, delirium, syncope, and collapse was even 7× higher, in the high and intermediate risk of frailty group [8]. To improve the predictive value of the HFRS, it should be modified in future studies using specific ICD-10 codes [27].
Regarding the surgical site infection rate, the HFRS is also independently associated. In the intermediate and high risk of frailty group, the infection rate is 8.5× higher than in the low-risk group. As the incidence of infection in spinal surgery is higher than in general orthopedic operations, the HFRS can serve as a simple predictor. Recently, a scoring system was published with the sole aim of identifying at-risk patients in order to reduce complex and expensive infection treatments and thereby reduce costs in the healthcare system [28].
Our findings show that severe complications requiring ICU treatment could be predicted using the ASA classification, whereas many complications not requiring an ICU could be avoided by using the HFRS rather than the ASA classification, by identifying those patients and giving them special attention and treatment before surgery. In particular, "milder" complications can often be easily avoided through patient education and preoperative optimization of modifiable risk factors; if they are not, they become the reason for prolonged hospital stays and negative socio-economic effects [29].
There are several limitations, as in most database studies. The HFRS derives from ICD-10 codes; therefore, the accuracy of coding is a potential source of bias. Additionally, the only available data came from the hospital information system and the spine registry. Therefore, parameters such as body mass index or psychosocial factors, known to influence outcome, had to be excluded because of the lack of data. In addition, the parameter "length of stay" has to be seen in the context of the German health care system, which usually involves a longer length of stay compared internationally.
The preoperative counselling and preoperative assessment process has an implication for the distribution within the study group. This may have a bearing on the recommendations and conclusions of the study, due to the fairly small numbers in the intermediate/high risk group and the retrospective nature of the study.
Another limitation is that, due to the use of routine data, the number of levels of fusion or decompression could not be exactly determined, and therefore the severity of the surgery might be a confounder. On the other hand, this means that the results can be transferred to a wide range of spine operations and not only to a group defined by strict inclusion and exclusion criteria.
Despite these limitations, this study is the first to demonstrate the predictive value of HFRS in patients with spine surgery.
The strength of our study is the single-center design, which guarantees standardized operative workflows and postoperative treatment protocols for all spine operations. Another possible bias, from using different fusion systems, could be avoided because the implants were supplied by only one manufacturer. This means a reduction of possible confounding factors.
Conclusion
In our study, we were able to show that the HFRS can predict adverse events such as reoperation, complications, and surgical site infections after spine surgery. As an inexpensive instrument that is fed from routine data in the hospital database, it can identify at-risk patients at an early stage. Thus, the individual patient risk could be minimized in advance through optimization of modifiable risk factors. This reduces the socio-economic burden and increases patient safety. | 2022-04-19T13:40:54.232Z | 2022-04-18T00:00:00.000 | {
"year": 2022,
"sha1": "af9dd2d7ab308eb0219ec2f83de5ebfd8f85864e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00586-022-07211-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "af9dd2d7ab308eb0219ec2f83de5ebfd8f85864e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53061947 | pes2o/s2orc | v3-fos-license | Oil production monitoring and optimization from produced water analytics ; a case study from the Halfdan chalk oil field , Danish North
Produced water analysis is a direct source of information to the subsurface processes active in an oil field. The information is, however, complex and requires a multidisciplinary approach and access to multiple data types and sources to successfully unlock and decode the processes. We apply data analytics on a combined data set of water chemistry and oil and gas production data measured in the production stream from five wells in the Halfdan field. The field is produced applying extensive water injection to ensure the most efficient water sweep of the reservoir. Relationships between daily production data and water chemistry are examined with Principal Component Analysis (PCA), and systematics with respect to predictability of daily changes in the oil production from water chemistry are examined with partial least square (PLS) regression models. For each well, the water chemistry provides a high degree of predictability with respect to daily oil cut in the production stream. The results have potential for application within prediction of sweep efficiency, by-passed oil and for prediction of water break-through. Full potential, however, depend on successful implementation of water chemistry-oil production analytics into other data domains such as seismic (4D) data and well work-over data.
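A hedged sketch of the analysis workflow described in this abstract (exploratory PCA on the water-chemistry variables followed by PLS regression predicting the daily oil cut) is given below; the file name, column names, and component counts are hypothetical placeholders, not the study's configuration.

```python
# Hedged sketch of the workflow described above: PCA to explore the chemistry data,
# then PLS regression predicting the daily oil cut from produced-water chemistry.
# File name, column names, and component counts are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("halfdan_well_daily.csv")            # assumed file with daily per-well data
chem_cols = ["Cl", "SO4", "Ba", "Sr", "Ca", "Mg"]      # assumed water-chemistry variables
X = StandardScaler().fit_transform(df[chem_cols])
y = df["oil_cut"].values                                # assumed daily oil cut column

# Exploratory PCA on the standardized chemistry data
pca = PCA(n_components=3).fit(X)
print("explained variance ratios:", pca.explained_variance_ratio_)

# PLS regression predicting daily oil cut from water chemistry
pls = PLSRegression(n_components=2)
r2_cv = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2_cv.mean())
```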
Keywords: Big Data, History Matching, Reservoir Simulation Optimization and Management, Production Monitoring, Automation and Optimization.
INTRODUCTION
Today reservoir monitoring is challenged by its expense and technical difficulty. Incorporation of new and relevant data sets is cumbersome, implying infrequent updating of the knowledge base. By improving reservoir management, there will be opportunities to accelerate or increase production and improve operational efficiency. If one can link data from the reservoir, wells, and facilities monitoring and sensing devices to the subsurface model, the obtained information can be valuable for making business decisions. The aim is to create smart oil fields by developing automated systems in a cross-disciplinary collaboration between geoscientists, engineers, and other domain specialists. One opportunity is to obtain real-time history matching in order to monitor changes in key physical reservoir parameters and from that implement the necessary changes to optimize field performance. For instance, the design process must establish how to handle ever-increasing levels of water production.
A key ingredient in establishing real-time reservoir management is to increase the efficiency of data utilisation and sharing, and this includes understanding the chemical reactions and phase changes associated with reservoir multiphase flow conditions.
Oil and gas production in the Danish North Sea began in 1972 and is projected to be substantial in terms of domestic needs until 2035 (Danish Energy Agency 2017). Production occurs from highly porous but low-permeability chalk reservoirs in which water flooding has proven to be the key to enhanced oil recovery.
In 2015, the oil production from Danish fields was 9.1×10⁶ m³; however, this volume was dwarfed by the volumes of water handled on the installations either as co-produced water or as injected water (Fig. 1). As the fields have matured, the volume of water to be handled has increased dramatically, with a consequently high demand for energy to handle these large volumes, which may exceed 90% for some fields.
Apart from being a waste product, the produced water carries important information on reservoir dynamics and recovery processes (Schovsbo et al. 2016, 2017). The produced water can originate from natural water zones in the reservoir or from the injected water, and its origin and relationship to oil and gas recovery are important for any field development, production monitoring and history matching. Here we present a case study from the Halfdan field in the Danish part of the North Sea (Fig. 2) with the aim of establishing the first principles governing oil production monitoring and optimization from produced water analytics.
Chalk reservoirs and water flooding
The reservoir rock formed during the Cretaceous-Lower Palaeogene period (62-145 million years ago) and is composed of chalk consisting of the remains of calcareous microorganism shells (Hjuler and Fabricius 2009). Chalk is very porous (25-45%) but has low permeabilities (0.5-2 mDarcy) and thus production has been challenging.
Initially, the chalk fields were produced from vertical wells by compaction drive, in which fluid expansion caused by pressure relief was the main driver for production. Since 1986, however, water injection has been applied (Fig. 1) to give pressure support and to sweep oil from the injector wells to the producers, thereby greatly enhancing oil recovery.
The Halfdan field
The Halfdan field (Fig. 2) was discovered in 1998 and produced first oil in 1999. The field is developed in an alternating pattern of km-long multistage horizontal producer and water injector wells aimed at maximum water sweep efficiency by applying the Fracture Aligned Sweep Technology (FAST) concept developed by Maersk Oil (Lafond et al. 2010). Several pioneering technology implementations have been made for the field with respect to optimisation (Calvert et al. 2014, 2016; Wherity et al. 2014). The key to success in these studies has been to link well data, representing performance over many km and stimulation zones, with seismic data revealing the spatial geometry.
Fig. 2. Southern part of the Danish North Sea showing the ten producing fields included in this study.
Regional produced water chemical analysis
For a regional characterisation of the produced water types, 314 water sample analyses were included from ten producing chalk fields in the southern part of the Danish North Sea (Fig. 2). The samples represent a selection of all available chemical measurements from the fields, aimed at giving a representative overview of the types of water produced from the fields. For characterisation, samples analysed for Na⁺, K⁺, Ca²⁺, Mg²⁺, Sr²⁺, Ba²⁺, Cl⁻, and SO₄²⁻ were used. No information on methods or sampling protocols for the specific samples was available.
Production data from the Halfdan field
Production data from five wells (named well A to E) from the Halfdan field were selected to represent different scenarios with respect to temporal and spatial variation in water chemistry and oil production. Wells A, B and C are positioned centrally in the field, and wells A and B share the same water injector well. The periods studied extend up to 1 January 2013 and include the first 9.2 to 11.4 years of production.
Production data include average daily oil, gas and water production and 390 analysed samples of produced water with a somewhat irregular sampling frequency. Calculated variables include: production days, calculated as the number of days from first production; gas-to-oil ratio (GOR), calculated as the gas to oil volume ratio × 1000; and the oil fraction in the production stream, calculated as the oil production rate divided by the total fluid rate (sum of oil and water production rates). The production data were combined with the water chemical analyses so that data sets obtained on the same day were paired with each other.
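As a minimal illustration of these calculated variables, the sketch below derives them from a daily production table; the column names (date, well, oil_rate, gas_rate, water_rate) are assumptions made for illustration and are not taken from the field database.

```python
import pandas as pd

def add_derived_variables(df: pd.DataFrame) -> pd.DataFrame:
    """Add production days, GOR and oil fraction to a daily production table."""
    df = df.copy()
    # days elapsed since first production, per well
    df["production_days"] = (
        df["date"] - df.groupby("well")["date"].transform("min")
    ).dt.days
    # gas-to-oil ratio: gas/oil volume ratio x 1000
    df["GOR"] = df["gas_rate"] / df["oil_rate"] * 1000
    # oil fraction of the total liquid (oil + water) stream
    df["oil_fraction"] = df["oil_rate"] / (df["oil_rate"] + df["water_rate"])
    return df
```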
Principal Component Analysis (PCA)
PCA transforms a matrix of measured data (N samples, P variables), X, into sets of projection sub-spaces delineated by principal components (each a linear combination of all P variables), which display variance-maximised interrelationships between samples and variables, respectively (Martens and Naes 1989; Höskuldsson 1996; Esbensen 2012; Esbensen et al. 2015). PCA score plots display groupings, or clusters, of samples based on compositional similarities, as described by the variable correlations (shown with accompanying loading plots), and also quantify the proportion (%) of total data-set variance that can be modelled by each component. All data analyses in this work are based on auto-scaled data, [X - X(avg)]/std.
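A minimal sketch of this auto-scaled PCA in Python (scikit-learn) is shown below; the random matrix stands in for the (N × P) table of production and water-chemistry variables and is a placeholder for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# placeholder data matrix: N = 100 samples, P = 10 variables
X = np.random.default_rng(0).normal(size=(100, 10))

# auto-scaling: [X - X(avg)] / std, column by column
X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=2).fit(X_scaled)
scores = pca.transform(X_scaled)                 # sample scores (PCA-1, PCA-2)
loadings = pca.components_.T                     # variable loadings per component
explained = pca.explained_variance_ratio_ * 100  # % of total variance per component
print(explained)
```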
Partial Least Squares (PLS) regression
PLS regression replaces classical multiple linear regression and allows direct correlations to be modelled between y and the multivariate X data, compensating for debilitating co-linearity between x-variables (Martens and Naes 1989; Höskuldsson 1996; Esbensen 2012). PLS regression models are used extensively in science, technology and industry for prediction purposes, where the critical success factor is proper validation (Esbensen and Geladi 2010). Both PCA and PLS result in informative score plots, loading plots (PLS: loading-weights) and prediction validation plots, which are the prime vehicles for detailed interpretation of complex data relationships. PLS components are based on [X, y] covariance optimisation, but the scientific interpretation of the derived scores and loading-weights plots follows procedures identical to those of PCA (cf. Esbensen et al. 2015). Validation was based on a test set prepared before modelling: the data for each well were sorted with respect to production day before being randomly split into two independent data sets, i.e. the training versus the test set, securing a realistic prediction performance validation (Esbensen 2012; Esbensen and Geladi 2010).
Modelling (PCA and PLS) was performed in the software package Unscrambler® 10.5 from CAMO.
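The modelling itself was carried out in the package named above; purely as an illustration of the PLS workflow with an independent test set, a minimal scikit-learn sketch using synthetic placeholder data could look as follows.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))                               # chemistry + production-day predictors (synthetic)
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200)  # oil-fraction-like response (synthetic)

# independent test set prepared before modelling
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)

pls = PLSRegression(n_components=3).fit(X_train, y_train)   # auto-scales X and y by default
y_pred = pls.predict(X_test).ravel()

slope = np.polyfit(y_test, y_pred, 1)[0]                    # slope of predicted vs reference
print(f"slope = {slope:.2f}, r2 = {r2_score(y_test, y_pred):.2f}")
```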
Regional water types in Danish fields
In the PCA model of the regional water chemistry database, the first two PCA axes resolve 77% of the total data variance (Fig. 3). The three main clusters of variables in the PCA-1 versus PCA-2 diagram are Ba²⁺, characterized by high positive PCA-1 loadings; a clustering of SO₄²⁻, Mg²⁺ and K⁺, characterized by high negative PCA-1 and PCA-2 loadings; and a clustering of Cl⁻, Na⁺ and Sr²⁺, characterized by high negative PCA-1 and positive PCA-2 loadings (Fig. 3B).
The clustering of variables reflects different signatures of formation water, as exemplified by the calculation of average compositions of samples selected within the PCA-1 versus PCA-2 sample score plot (Fig. 3A). Formation Water 1 (FW1) is characterised by high Ba²⁺ concentrations and low overall ionic strength (Table 1). This water type is present in the Valdemar, Roar and Tyra fields (see Fig. 2 for location). Formation Water 4 (FW4) is characterised by high salinity, medium SO₄²⁻ concentration and no Ba²⁺. This water type is most clearly expressed in fields above salt domes such as the Kraka field (Fig. 3). An additional water type (SW) is characterised by high SO₄²⁻, K⁺ and Mg²⁺ concentrations (Table 1). This water type is present in the Dan, Halfdan, Gorm and Skjold fields and is interpreted to be the result of decades of extensive water flooding performed by the operator (cf. Schovsbo et al. 2016). In the case of the Halfdan field, the formation water composition ranges between the end-members FW1 and FW4 (Fig. 3A). Average compositions of these local end-members (cf. Schovsbo et al. 2017) are presented in Table 1 and include a Formation Water 2 (FW2) that is characterised by medium to low salinities and medium-high Ba²⁺ concentrations, and a Formation Water 3 (FW3) that is characterised by high Ca²⁺, medium-high salinities and medium to low SO₄²⁻ concentrations (Fig. 3A). The latter type is also present in the Dan field. Circles outline three main sample groupings (1-3) discussed in the text.
Relationship between production and water chemistry on the Halfdan field
In the PCA model of the combined production-related data and water chemistry data set for the five Halfdan wells, the first two PCA axes resolve 71% of the total data variance (Fig. 4).
In the loading plot (Fig. 4B), oil and gas production, oil fraction and GOR cluster together with Ba²⁺ at positive PCA-1 and PCA-2 loading values. Water production clusters with production days, SO₄²⁻, K⁺ and Mg²⁺ at negative PCA-1 and intermediate positive and negative PCA-2 loadings. Cl⁻, Na⁺, Ba²⁺ and Sr²⁺ cluster together and plot with high positive PCA-1 and negative PCA-2 loadings (Fig. 4B).
The clustering of variables is as expected from the general understanding that high oil and gas production is associated with production of formation water, which tends to occur early in the production history of a well. High water production occurs later in the well history, and this water resembles seawater, reflecting production of injected water (Fig. 4B).
The sample score plot of the first two PCA axes shows three main groupings (Fig. 4A). Group 1 consists of wells A and B and a few samples from wells C and D, and is characterized by positive PCA-1 and PCA-2 score values. Group 2 consists of the remaining parts of wells C and D and is characterized by negative PCA-1 and positive PCA-2 scores. Group 3 includes well E and is characterized by high negative PCA-2 scores (Fig. 4A).
The different groupings reflect different relationships between well performance, with respect to oil and gas production, and the chemical composition of the produced water. Group 1 is characterised by high oil and gas production. The water production is low, characterised by high Ba²⁺, and typical of FW2. This zone can also be termed the "sweet spot" of the production. Groups 2 and 3 reflect a production mode characterised by a low oil fraction and water resembling either SW, i.e. injected seawater (SO₄²⁻, Mg²⁺, K⁺), or a saline formation water, FW3 (Cl⁻, Na⁺), respectively.
Prediction of oil fraction in the production stream
The different relationships between well performance and water chemistry can also be illustrated in a PLS-regression model aimed at predicting the oil fraction from the water chemistry and the duration of production (Fig. 5). Overall, the PLS model (Fig. 5) resembles the PCA model presented in Fig. 4. The prediction of the PLS model gave reasonably satisfactory validation results (slope 0.80; r² = 0.80 for PLS component 3, Fig. 5). A negative correlation between Cl⁻ and oil fraction is present in well E, and negative correlations between days in production, SO₄²⁻, Mg²⁺, K⁺ and oil fraction are seen for the remaining wells.
It is noteworthy that samples from wells A and B plot closely together, in contrast to wells C and D, which plot along the full range of PLS-1 values, with the majority of the samples from well D plotting with high negative values (Fig. 5A). Well D also plots with much lower positive PLS-1 values than wells A, B and C, suggesting a lower overall performance with respect to high oil fraction. In these wells, the samples with high positive PLS-1 values represent early production characterised by a high oil fraction, and the group with low negative PLS-1 values represents mid to late production with a low oil fraction. The shift is sudden (few intermediate values), likely reflecting influx of injected seawater via fractures.
For individual groups of wells with similar performance, PLS models using the full set of chemical variables and the duration of production predict the oil fraction with a much more satisfactory validation result than for all wells combined. This is exemplified by the prediction versus reference plots for well group A, B and C and for well group D and E in Fig. 6D and H. In well group A, B and C (slope 0.88; r² = 0.89, PLS component 1), the oil fraction model is primarily carried by positively correlated Na⁺, Cl⁻ and Sr²⁺ and negatively correlated SO₄²⁻, K⁺ and Mg²⁺, but several other composition variables also have a minor, but significant, influence (Fig. 6B). In well group D and E (slope 0.85; r² = 0.86, PLS component 2), the oil fraction model is primarily carried by positively correlated Ca²⁺ and is negatively correlated to production days.
Here, other composition variables have a minor but still significant influence, with K⁺ appearing to have the least influence on the correlation (Fig. 6F).
Factors influencing the produced water composition
The data presented in this paper stem from chemical analysis of produced waters. The aim of these specific chemical analyses is to determine the accurate concentrations of the ions in the water. Some uncertainty lies within the chemical analysis, but the main uncertainty in the data originates from the quality of the samples. The chemical composition of the produced water is influenced by a variety of effects. The water is directly affected by a wide selection of injected chemicals, e.g. from squeeze events, well clean-ups, re-stimulation and scale inhibitors. Back-flow of these injected chemicals is expected to affect the chemical composition of the produced water. "Process water", typically occurring within the first few years of production, is especially affected (Schovsbo et al. 2017).
The samples with process water signatures are identified during the PCA analysis. Typically, they behave as outliers when compared to the rest of the samples. Once identified, the samples are normally characterised by unusually high concentrations of Ca²⁺ and K⁺. Hence, the data analysis also functions as a data quality check.
We know from sampling protocols that chemicals are added to the produced water prior to analysis. One of the most common additives is acetic acid, added, among other reasons, to avoid bacterial growth. Obviously, this will affect the chemical composition of the analysed water. As a minimum, the Cl⁻ concentration is found to be larger than in the untreated sample.
Additionally, precipitation during transport and storage due to changed pressure and temperature conditions may come into play. Also, uncertainties in the performed analyses are present. Currently, we are investigating these effects and their impact by applying new measuring techniques and by launching new sampling protocols. The new results will be compared to the old to ensure data reliability.
Regional water types
Produced waters in the Danish North Sea exhibit a considerable compositional range, with salinities from less than 85% to 330% of the present-day North Sea seawater salinity of 21 000 ppm (Table 1). The chalk formed in normal marine conditions and its initial pore water composition was likely comparable to present-day values (Warren et al. 1994). The highly saline water present in fields above salt domes likely reflects original pore water being mixed with brines from the dome. The highest salinities thus reflect a higher degree of fluid communication by fracture flow and/or chemical diffusion within the field.
The presence of low salinity water, here defined as water with less than seawater Cl⁻ levels, suggests that some reservoirs were flushed, reducing the ionic strength from its original level. The fields with this component are present in the northernmost part of the study area (Fig. 2). From here salinity increases towards the south in the order (low to high) Valdemar/Roar-Tyra-Tyra SE-Halfdan (Fig. 3). This may suggest that the low salinity water originated north of Valdemar, perhaps within the geological area called the Tail End Graben, known to be one of the kitchen areas for oil generation (Petersen et al. 2016). The low salinity water may reflect original fresh water within non-marine deposits or may be derived from water liberated during clay transformations (cf. Osipov et al. 2003).
The two formation water end-members (FW2 and FW3, see Fig. 4) present in the Halfdan field occur in different parts of the field. The FW3 type is present on the southern flank of the field towards the Dan field and is clearly related to the presence of salt dome water (Schovsbo et al. 2017), whereas the low salinity type (FW2) appears to be a local end-member in the compositional continuum that extends north to the Valdemar/Roar fields. This suggests that local gradients in composition exist within the Halfdan field and that each well location will represent a mix of the formation water produced along the long horizontal well track, in contrast to all wells having the same discrete composition. For modelling purposes, care thus has to be taken to establish the initial water composition at each well site instead of applying fixed compositions.
PLS-regression model of well performance
In order to illustrate the relationship between water chemistry and production performance in the Halfdan wells, we have used prediction of oil fraction in the production as a reference. We could also have used the prediction of oil production rate, which also would have provided valuable insights into the production drivers. The main difference between the two variables is, however, minor and therefore we have focussed on establishing the first principles in the relationships between water chemistry and oil fraction in the production stream.
In the PLS-regression models, the number of days in production has been included in the X data. This parameter has a high impact on the predictability of the oil fraction, especially because a model with this parameter can compensate for temporal changes in the production. If the parameter "days in production" is not included in the PLS regression, then dedicated models for early versus later production will provide better predictions.
Water types and oil production drivers
The relationship between oil and gas production and water chemistry differs fundamentally between the five Halfdan wells. Wells A and B represent wells in the core part of the field, characterised by high oil production rates and high oil fractions in the production stream. These wells are characterised by efficient water flooding, in which the oil fraction is inversely correlated with the appearance of injected seawater (Fig. 6B). In addition, the correlation between production days and the oil fraction is less pronounced and has a low predictive value.
The produced formation water may originate from the oil zone itself, liberated ("squeezed out") by relative compaction as the pressure is lowered, or it may be produced by frictional drag from within the oil stream. As pressure is reduced, water from deeper levels is also expected to flow due to compaction (Fig. 8). Well C also represents a centrally positioned well; however, this well experienced severe breakthrough of injected water early in its production history. This well can be modelled together with wells A and B (Fig. 6A).
For wells D and E, the oil fraction is strongly dependent on production days (Fig. 6F). Well E represents a well in a flank position of the field. In this well the produced water (FW3) does not show any indication that injected seawater is produced as the oil fraction is lowered; instead, formation water is produced as the oil fraction is lowered (Fig. 8). The production of formation water will also lead to a pressure drop promoting some compaction of the chalk. Ca²⁺ has a positive correlation with the oil fraction (Fig. 6F). This may reflect water originating both from within the oil column and from the water leg. The key to success in the operation of the Halfdan field has been to link well data, representing performance over many km and stimulation zones, with seismic data that give the spatial geometry (cf. Calvert et al. 2014, 2016). In order to realise the full potential of the methods described in this paper, water chemistry-oil production analytics should be successfully integrated with other data domains such as seismic (4D) data and well work-over data.
Macroscopic sweep efficiencies are affected by a variety of variables (Table 2), including the geology, i.e. the inherited rock properties related to the depositional environments (such as pelagic deposition versus reworking), and the existence of natural or artificially created fracture networks that create short-circuit fracture connections. Overall, these will lead to reduced recovery and possibly also to bypassed pay.
The data analytics may be the first step towards a smart oil field, where digital oilfield workflows combine business process management with advanced information technology and engineering expertise to streamline and, in many cases, automate the execution of tasks performed by cross-functional teams. A prerequisite is the integration of all relevant data available, as listed above, for the given field and for analogue fields.
Analytics and data integration
Different business objectives in different departments, spanning the various disciplines involved in reservoir characterization, must be aligned into common goals.
Merging the static and dynamic features of a reservoir is the vital link between earth science and production engineering. Monitoring fluid flow with 4D seismic techniques requires close collaboration between the disciplines of structural and stratigraphic geology, fluid flow simulation, rock physics, and seismology.
Analysis of various data sources should be used to continuously update and establish an accurate model of the reservoir system and from that obtain the ability to predict the consequences of implementing possible, alternative strategies. This can reduce the uncertainty associated with history matched models by verifying that the selected model is consistent with all the available data.
In other words, we are dealing with a Big Data challenge, where we need to combine various data sources characterized by different levels of Volume, Velocity, Veracity and Variety in order to create Value. This should be achieved by analysing these data, updating the reservoir model, making predictions and recommendations, and finally implementing the recommendations, subject to management approval.
CONCLUSIONS
The present study confirms that multiple parameters control production. Produced water chemistry data can be used advantageously in direct PLS prediction to determine key production drivers.
The database can be extended to include more of the comprehensive data available from the fields. Based on an augmented data set, it is in principle a simple task to refine this pilot study to investigate the more general limits of the feasibility demonstrated. | 2018-10-06T23:14:35.108Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "a182119d6cdb19966aa5800d4bd88ca2b35ba2a5",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.ifacol.2018.06.378",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "02293834c72039f834326590df6d4bf34fdfbb70",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
221140791 | pes2o/s2orc | v3-fos-license | A Four-Step Method for the Development of an ADHD-VR Digital Game Diagnostic Tool Prototype for Children Using a DL Model
Attention-deficit/hyperactivity disorder (ADHD) is a common neurodevelopmental disorder among children resulting in disturbances in their daily functioning. Virtual reality (VR) and machine learning technologies, such as deep learning (DL) applications, are promising diagnostic tools for ADHD in the near future because VR provides stimuli to replace real stimuli and recreate experiences with high realism. It also creates a playful virtual environment and reduces stress in children. The DL model is a subset of machine learning that can transform input and output data into diagnostic values using convolutional neural network systems. By using a sensitive and specific ADHD-VR diagnostic tool prototype for children with a DL model, ADHD can be diagnosed more easily and accurately, especially in places with few mental health resources or where tele-consultation is possible. To date, several virtual reality-continuous performance test (VR-CPT) diagnostic tools have been developed for ADHD; however, they do not include a machine learning or deep learning application. A diagnostic tool development study needs a trustworthy and applicable study design and conduct to ensure the completeness and transparency of the report of the accuracy of the diagnostic tool. The proposed four-step method is a mixed-method research design that combines qualitative and quantitative approaches to reduce bias and collect essential information to ensure the trustworthiness and relevance of the study findings. Therefore, this study aimed to present a brief review of an ADHD-VR digital game diagnostic tool prototype with a DL model for children and the proposed four-step method for its development.
INTRODUCTION
Attention-deficit/hyperactivity disorder (ADHD) is a disorder characterized by excessive hyperactivity and greater impulsive and inattentive behavior than in peers of the same age group, resulting in disturbances in the patients' daily functioning, such as studying and interacting with the environment. Moreover, children with ADHD are known to be targets of bullying and become scapegoats for several unwanted situations, which may eventually cause feelings of isolation and trigger other mental disorders, such as behavior disorders, mood disorders, or Internet addiction disorders, in the future (1). Previous studies have found that the prevalence rate of ADHD ranges from 3-15%, making this condition the most common mental disorder among children of elementary school age group (1,2). In Indonesia, the national prevalence of ADHD in the elementary school age group is currently unavailable. However, a study by Suryani et al. (3) found that nearly 26% of the children from first to sixth grades were diagnosed with ADHD at 27 elementary schools selected randomly in Jakarta (3).
In the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), ADHD is categorized as a neurodevelopmental disorder highlighted by deviations of behavior and emotional, cognitive, and psychosocial development (4). Studies have shown that the psychopathologies of ADHD are related to executive dysfunction, resulting in children's inability to control their emotion and behavior. Moreover, studies using fMRI BOLD, PET, or SPECT have also revealed less activity in the dorsolateral prefrontal cortex (DLPFC) in children with ADHD (5)(6)(7). Another study related to genetic evaluation found that ADHD was associated with dopamine and nor-epinephrine transporter gene polymorphism (8)(9)(10).
ADHD is currently diagnosed through psychiatric interviews, observation of the child, and interviews with children and parents based on the diagnostic criteria of the DSM-5 or the International Classification of Diseases-10/11 (ICD-10/11). During the mental state examination, several symptoms might or might not be exhibited, depending on the children's familiarity with and adaptability to the interview and observation setting. Thus, the results of psychiatric examinations to establish a diagnosis of ADHD mostly come from the parents. Furthermore, ADHD can co-exist with other neurodevelopmental disorders, such as intellectual disability, autism spectrum disorder, or other specific learning disabilities, and with mood disorders such as depression or pediatric bipolar disorder. Therefore, it is necessary to develop an ADHD diagnostic tool that is highly sensitive and specific to ADHD clinical symptoms, not only to justify the ADHD diagnosis but also to create a non-threatening environment for children. Several studies have shown that virtual reality (VR) can be used as a diagnostic tool for ADHD because of its ability to create a virtual environment that is more enjoyable and able to collect better behavioral, inattention, and distractibility data than ordinary neuropsychological tests (11,12). Most studies used continuous performance tests (CPTs) or other neuropsychological tests for standardized testing (13-15). In addition, existing ADHD-VR diagnostic tool developments did not include a machine learning application (artificial intelligence application), which enables computer systems to automatically learn and improve from experience without being explicitly programmed. Machine learning is capable of receiving some data (input data), training a model on the data, and finally using the trained model to make predictions on new data (output data) that have more accurate diagnostic value (16).
Due to advancements in technology, diagnostic tools for children with ADHD that are sensitive and specific to ADHD clinical symptoms can be improved by merging a child's needs and modern technologies, such as VR, with a deep learning (DL) model. In recent years, DL has become quite valuable in the machine learning used in many routine applications, such as VR digital games. DL is a subcategory of machine learning that processes data through many layers of artificial neurons (convolutional neural networks/ConvNets) between the input and output data. These processes are said to be similar to those of the human brain (17). When such networks receive data, their neurons respond individually within each layer (e.g., convolution) to reduce the data to a single value significant for diagnostic purposes. Therefore, all the data pass through each layer in the ConvNet sequentially until the input and output data can be used to generate final judgment forms. DL has become increasingly relevant in many fields of artificial intelligence besides VR digital games (18). Thus, a DL model can be applied to develop a more accurate ADHD-VR digital game diagnostic tool that is not only sensitive and specific to ADHD clinical symptoms but also creates a playful experience. In addition, the ADHD-VR digital game diagnostic tool prototype for children using a DL model can be of benefit in places where mental health resources are low, as it can provide diagnostic capabilities in standard health services or telepsychiatry consultation and thus permit the delivery of management recommendations. Therefore, this study presents a brief discussion of an ADHD-VR digital game diagnostic tool prototype for children using a DL model and the proposed four-step method for its development to improve the completeness and transparency of reports of diagnostic tool accuracy.
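To make the layered processing described above concrete, the sketch below shows a minimal ConvNet that reduces a single-channel input to one diagnostic-style probability; the input shape and layer sizes are illustrative assumptions only and do not describe the architecture of the prototype.

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Minimal ConvNet: stacked layers map input data to a single output value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, 1),   # assumes a 32x32 input image
            nn.Sigmoid(),               # single value interpretable as a probability
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyConvNet()
example = torch.randn(1, 1, 32, 32)     # one synthetic input sample
print(model(example))                    # tensor containing one probability-like output
```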
AN ADHD-VR DIGITAL GAME DIAGNOSTIC TOOL PROTOTYPE FOR CHILDREN USING A DL MODEL
Diagnostic tools are essential in modern medical practice in child and adolescent psychiatry, as they can be used to confirm a diagnosis, provide more evidence to eliminate one diagnosis versus another, provide more evidence for parents, or rule out a differential diagnosis. Although psychiatric examination or other self-rating questionnaires are the main approaches used in ADHD diagnostic procedures, laboratory testing or imaging examinations are used as well, especially to rule out a general medical condition or to identify any medical conditions that mimic ADHD. Nowadays, advancements in biological psychiatry have suggested several hypotheses based on biological markers for ADHD; however, this area of investigation is still developing. Meanwhile, although there is no device that offers the desired diagnostic accuracy for ADHD clinical symptoms, several devices have been developed to investigate the neuropsychology and electrophysiology of the brain or to perform imaging examinations (19)(20)(21)(22).
Nevertheless, the concept of games for health is another important area that should be considered. Under this concept, a game is a system where players engage in conflicts that they need to solve. Hence, a game can be enjoyed by children because it is challenging, consists of rules that stimulate creativity, contains specific goals that need to be completed by the players, provides a feedback system telling them how they are doing, and yields quantifiable outcomes (23,24). VR digital games based on a DL model are one of the best approaches; they can be used as a source of data that yields diagnostic value in DL technology. In addition, VR digital games create a playful environment with stimuli that are not directly confrontational and recreate experiences that would be impossible in the real world with high realism (25). Thus, VR digital games with a DL model are a promising diagnostic tool for children with ADHD in the near future.
In the last decade, new models of CPTs have been developed using VR technology. VR, one of the most cutting-edge technologies used in the computer science field, was originally developed by Ivan Sutherland and colleagues in 1960 and has grown rapidly over the last decade. Since its beginnings and translation into various areas, several definitions have been proposed for VR, such as real-time interactive graphics with 3D models combined with a display technology that immerses the user in the model world while allowing its direct manipulation. On the other hand, VR has also been defined as the illusion of participation in a synthetic environment rather than external observation of such an environment. VR relies on three-dimensional, stereoscopic head-tracked displays, hand/body tracking, and binaural sound. VR is an immersive, multisensory experience. Lastly, VR was described by Cruz-Neira in 1993 as "Virtual reality refers to immersive, interactive, multisensory, viewer-centered, 3D computer generated environments and the combination of such technologies that are required to build the objective of a virtual world" (25,26).
Compared to other media, VR requires greatly heightened immersive techniques; they increase productivity and retention and are easily understood, especially for coding, development, and training. Similarly, VR enables greater spontaneity in users, which strengthens the effect of interactivity. Therefore, VR can be used to perform tasks more easily and with greater comprehension. There are two core functional components of VR: immersion and interactivity. Furthermore, use of a DL model for machine learning can allow more vivid VR experiences as diagnostic tools with more accurate prediction (17). The experiences of a VR user can be uncovered by measuring presence, realism, and the level of reality. Presence is defined as a complex mental feeling of "being there" that involves the sensation and perception of physical awareness as well as the likelihood of interacting and reacting as if the child were in the real world. Equivalently, the realism level correlates with the degree of presumption that the child has about the real experience. For instance, if the appearance of VR is similar to the real world, the VR user's expectations will be those for the presumption of reality, thereby enhancing the experience inside the VR world. Similarly, the higher the level of reality in connection with the virtual improvement, the higher the authenticity of the VR user's experience and the more accurate the prediction of diagnostic results (25,26).
An ADHD-VR digital game diagnostic tool prototype using a DL model can be designed simplistically with minimal distractions. For example, the child can control virtual hands to grab, release, throw, and place objects. Furthermore, the child can point a laser from his/her index finger for item selection. The environment can be also limited, limiting physical movement, so that children only need a small physical space to operate the VR digital game, allowing them to perform actions while standing or sitting. The more minimized (calm and easy environment) the VR, the less motion sickness they might experience, helping the child maintain a constant reference to the activity. Furthermore, a VR digital game with a DL model interface would surround the child with various tools having diagnostic value. Children can grab layers from a toolbox to their left or right and place them within a "working space," a defined region in front of the child containing several sequences of layers within which children can insert, remove, and rearrange layers. After the child grabs a layer from the toolbox, a new layer of the same type will spawn to take its place. In this manner, more than one layer of a given type can be present within a DL model. The child can define the toolbox from left to right and train it by pressing or selecting a button to the left or right of the "working space". These operations allow for the simple construction and modification of a functional ConvNet. While computing, a display in front of the child reports the status of playing (17,27). Upon completion, the display reports the accuracy of the diagnosis from the results of DL computed against a standardized testing set; in this way, the ADHD clinical symptoms are based on DSM 5 or ICD 10/11 or other ADHD standardized symptom scales. Therefore, the input and output data that come from a VR digital game using a DL model are much more precisely transformed into a single value for diagnostic purposes. Therefore, VR digital games using a DL model will likely be at the forefront of diagnostic tools for ADHD in the coming years.
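A hypothetical sketch of how the "toolbox to working space" interaction described above could map onto model construction is shown below; the layer names, the TOOLBOX dictionary and the helper function are illustrative assumptions and are not part of the prototype described in this paper.

```python
import torch.nn as nn

# each toolbox entry spawns a fresh layer instance when grabbed into the working space
TOOLBOX = {
    "conv":  lambda: nn.LazyConv2d(8, kernel_size=3, padding=1),
    "relu":  nn.ReLU,
    "pool":  lambda: nn.MaxPool2d(2),
    "dense": lambda: nn.Sequential(nn.Flatten(), nn.LazyLinear(1), nn.Sigmoid()),
}

def build_from_working_space(selected):
    """Turn an ordered list of layer names (the working space) into a ConvNet."""
    return nn.Sequential(*(TOOLBOX[name]() for name in selected))

# e.g. the child placed a conv, an activation, a pooling and a dense layer in sequence
model = build_from_working_space(["conv", "relu", "pool", "dense"])
print(model)
```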
In recent years, several studies of ADHD-VR diagnostic tools have been conducted, showing that VR is a promising diagnostic tool for ADHD even though these tools did not include machine learning or DL applications, and that CPT could be of benefit as standardized testing. Neguț et al. (13) developed a classroom VR-CPT (inside the classroom) with ADHD children as the research participants; however, the research only focused on the inattention domain, and the results revealed that the children with ADHD needed more time to complete the VR response task than the conventional CPT, and the difference was statistically significant. Diaz-Oruega et al. (28) developed the AULA Virtual Reality Continuous Performance Test (AULA VR-CPT) to measure inattention in children with ADHD compared to conventional CPT; the results revealed a positive correlation between the AULA VR-CPT and conventional CPT. Moreover, Diaz-Oruega's study also found the AULA VR-CPT to be capable of distinguishing, in terms of attention, impulsivity, and motor activity, between children with ADHD who were taking medication and those who were not. Another study by Fang et al. (15) applied discriminant analysis of correct items, incorrect items, total time, and accuracy from the Virtual Reality Medical Center system (VRMC), showing that these were significantly associated with the levels of impulsivity/hyperactivity and the ADHD index as measured by Conners' Parent Rating Scale (CPRS) and CPT. Aceres et al. (14) presented a VR-CPT test (Nesphola Aquarium) for adolescents and adults that was designed to measure executive function. The study used the Adult ADHD Self-Report Scale (ASRS) as the ADHD symptom scale. The results showed that the VR test Nesphola Aquarium significantly predicted current and retrospective ADHD symptoms (14). A meta-analysis by Parsons et al. (29) concluded that the virtual reality classroom CPT was effective in assessing attention performance in children with ADHD. Moreover, it could also differentiate attention between children with ADHD and typically developing children; however, it was unclear whether the differences were directly related to the VR classroom environment or to other factors. Thus, there is a need for better designed and adequately powered studies to investigate VR, especially with machine learning, as a diagnostic tool for children with ADHD.
DISCUSSION AND FUTURE DIRECTION: THE PROPOSED FOUR-STEP METHOD FOR ADHD-VR DIGITAL GAME DIAGNOSTIC TOOL PROTOTYPE FOR CHILDREN WITH DL MODEL DEVELOPMENT
Diagnostic tool development is at risk of bias, especially with respect to accuracy. Generally, major sources of bias originate in unclear methodological design, such as research subject recruitment, data collection, the execution or interpretation of the test, or data analysis. As a result, the estimates of sensitivity and specificity of the tool being compared against the reference standard can be flawed, deviating systematically from what would be obtained under ideal circumstances (30). Biased results can lead to improper recommendations about tools, negatively affecting patient outcomes or healthcare services. Diagnostic tool accuracy is not a fixed asset of a tool. A tool's accuracy in identifying patient diagnosis typically varies between settings and patient groups and depends on prior testing (31). These sources of variation in diagnostic tool accuracy are relevant for ADHD-VR digital game diagnostic tool prototypes for children using DL model development because the diagnostic tool may be used in hospital settings, primary care settings, or school settings to diagnose ADHD. Consequently, the risk of bias and concerns about applicability are the two key factors that a diagnostic tool needs to address to minimize design-related bias in studies (30,32,33).
Therefore, the proposed four-step method is meant to reduce these biases and collect essential information, including the study design and research subjects, to judge the trustworthiness and relevance of the study findings. Furthermore, the proposed four-step method also serves as a mixed-method study that combines qualitative and quantitative methods to aid the development of diagnostic tools in psychiatry (34). The strengths that can be generated from the proposed four-step method for ADHD-VR diagnostic tool prototypes for children using a DL model are, first, that it guides a systematized approach to diagnostic tool development, confirming that the research findings can be translated into clinical utility in a timely manner. Second, it could prevent premature dissemination of the findings into extensive clinical use. Both of the above strengths serve to ensure a balance between patient care needs and economic concerns.
In the first step, the qualitative part uses the Delphi technique with a focus group discussion (FGD) method to collect information to build a theoretical concept of an ADHD-VR digital game diagnostic tool for children using a DL model. The FGD would include several mental health professionals, such as child psychiatrists, psychiatrists, developmental psychologists, teachers, parents, computer science experts, and game designers. The first round of the FGD will discuss the clinical symptoms of ADHD based on the DSM-5, ICD-10/11, and the participants' own perspectives. The next discussion would concern how to translate these symptoms into the VR digital game and use a machine learning or DL conceptual framework to transform the input and output data into a single ADHD diagnostic value to support ADHD diagnosis. The second round of the FGD will discuss the concept of a VR digital game diagnostic tool prototype for children using a DL model based on the results of the previous round. The first and second rounds of the FGD may be conducted several times to reach agreement within the group. In addition, the agreement provides a concept that can be replicated by either the same or different collaborating groups. From the first- and second-round FGD results, computer science experts and VR digital game designers can begin work to establish the prototype of an ADHD-VR digital game diagnostic tool using a DL model. In this step, the third-round FGD (final round) must include other independent experts and children under the age of 12 years. In this final round, the goals are not only to reach agreement on an ADHD-VR digital game diagnostic tool prototype using a DL model for children but also to confirm whether the diagnostic tool is suitable for use with children. In this way, content validation is achieved.
The second step is a feasibility study to lay the foundation of the plan and to reduce or eliminate problems that limit the successful delivery of the accuracy study of the ADHD-VR digital game diagnostic tool prototype using a DL model in a clinical setting (the third step). Based on the United Kingdom's National Institute for Health Research Evaluation, Trials and Studies Coordination Centre (35), a feasibility study is the stage of research before the third step of a study that answers the question, "Can a diagnostic tool accuracy study be performed?" A feasibility study is conducted to estimate important parameters needed to design the main study; it does not evaluate the outcome of interest. Moreover, it is not an analytical study designed to test a hypothesis. Instead, it is usually focused on gathering data for use in optimizing diagnostic tool modifications (36). The feasibility study is generally an open-label study (for example, it does not include a control group) and is conducted at a single site with fewer than 25 children with ADHD. In this step, research subjects are asked to try out the ADHD-VR digital game diagnostic tool prototype and report their experiences individually as feedback to improve and modify the diagnostic tool. Meanwhile, the input and output data of the ADHD-VR digital game diagnostic tool prototype can be fed into the DL model to finally yield a diagnostic value. The more data fed into the DL model, the more accurate the diagnostic result would be. The results of this step are used to evaluate the ADHD-VR digital game diagnostic tool prototype for children with ADHD, and it is necessary to make changes to arrive at a feasible and accurate diagnostic tool to be tested in the next step.
The third step is a diagnostic tool accuracy study (a validity and reliability study) to identify the performance of the ADHD-VR digital game diagnostic tool prototype using a DL model for children. In this step, the results are sensitivity, specificity, likelihood ratios, and positive and negative predictive values. Therefore, the research subjects are children with ADHD and a control group of healthy children; if possible, comparisons can be made with other children whose diagnoses commonly mimic ADHD clinical symptoms, such as autism spectrum disorder, intellectual disability, disruptive behavior disorder, and depressive disorder. The gold standard or standardized testing is the ADHD diagnosis based on the agreement of a number of experts using an ADHD standardized clinical symptom scale or standardized psychiatric examination guidelines for ADHD based on DSM-5 or ICD-10/11 criteria. Although the nature of the behavioral and emotional symptoms of ADHD is heterogeneous, it would be expected that the ADHD-VR digital game diagnostic tool with a DL model would be sensitive and specific enough to identify the respective patterns and produce an accurate diagnostic value. Therefore, the number of research subjects should be adequate for these purposes. In a diagnostic study, the predetermined value for both sensitivity and specificity must be at least 0.70, indicating that the probability of the tool detecting a true positive or a true negative is at least 70%. Based on the recommendations by Bujang and Adnan (37), if we estimate the ADHD prevalence in Indonesia at 10%, the total number of research subjects in this step would be 310 children, including 31 children with ADHD, to achieve a minimum accuracy of 80.7% so as to detect a change in sensitivity from 0.70 to 0.90, at a statistical significance level of 0.048. The minimum sample size adequate to detect a change in the value of specificity from 0.70 to 0.90 is 34, including 3 children with ADHD. Furthermore, other important issues that must be considered in this step are defining the clinical characteristics of the ADHD sub-types identifiable by psychiatric examination guidelines or ADHD standardized scales, the duration of ADHD, the severity of the illness, and medication effects.
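For illustration only, the sketch below computes the accuracy measures named above from a 2×2 table of prototype results against the reference-standard ADHD diagnosis; the counts are hypothetical and merely echo a cohort of roughly 310 children at about 10% prevalence.

```python
# hypothetical 2x2 counts: prototype result versus reference-standard diagnosis
tp, fn = 26, 5      # ADHD cases correctly / incorrectly classified
fp, tn = 45, 234    # non-ADHD children incorrectly / correctly classified

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                        # positive predictive value
npv = tn / (tn + fn)                        # negative predictive value
lr_pos = sensitivity / (1 - specificity)    # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity    # negative likelihood ratio

print(f"Se={sensitivity:.2f} Sp={specificity:.2f} PPV={ppv:.2f} "
      f"NPV={npv:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```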
The fourth step is multicenter clinical trials of the ADHD-VR digital game diagnostic tool prototype for children using a DL model. Multicenter clinical trials should pave the way toward standardization of the diagnostic tool procedures for cost effectiveness and impact on both short-term and long-term clinical uses. While the earlier steps included smaller samples of comparison subjects that are usually locally formed, Step 4 needs to develop larger normative databases that can eventually be used to examine an individual's data. The development of such databases can be challenging and will require collaboration among research groups concerned with the specific tool that is being developed. The results of this step will be used to re-evaluate the usefulness of the diagnostic tool prototype before it is generalized for clinical use.
There are several issues that still need to be carefully examined whenever applying the proposed four-step method of diagnostic tool development, such as the fact that qualitative information can be judgmental and biased; therefore, the qualitative data should be collected from many sources and then condensed. Moreover, the proposed four-step method is originally designed for mixed-method studies. Thus, it needs a careful plan, especially to describe all aspects of the research, including the research subjects for the qualitative and quantitative parts; the timing (the sequence of the qualitative and quantitative parts); and the plan for integrating data. However, it affords us the opportunity to ensure that the study findings are grounded in participants' experiences, provides richer and more varied information than can be obtained from strictly quantitative research, and is able to support multidisciplinary team research by encouraging the interaction of quantitative, qualitative, and mixed-methods scholars.
CONCLUSION
ADHD is one of the prevalent neurodevelopmental disorders in a clinical setting. VR and machine learning, such as DL technologies, hold great promise for application in human health diagnostic tools in the near future, especially in areas with limited health and mental health resources, where they can support general practitioners, child psychiatrists, general psychiatrists, psychologists, and behavioral pediatricians. Therefore, by utilizing an ADHD-VR digital game diagnostic tool prototype for children with a DL model in standard health services or tele-psychiatry consultation, ADHD can be diagnosed and early management can be delivered, thereby reducing the impact of the illness. In addition, it can provide parents with much clearer evidence for an ADHD diagnosis. Furthermore, the proposed four-step method, which is based on mixed-method research concepts, holds promise as an approach to develop an optimal diagnostic tool because it is designed to improve the completeness and transparency of reports of diagnostic tool accuracy studies.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by The Ethics Committee of the Faculty of Medicine University of Indonesia, number: KET-503/UN2.FK/Etik/PPM.00.02/2019. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
TW contributed to the design of the manuscript, literature review, manuscript draft and preparation, and writing the manuscript. NW contributed to the literature review and manuscript preparation. RI, FK, RB, and BD conceptualized and reviewed the manuscript.
FUNDING
This paper was funded by a "Hibah Q1Q2" grant from Universitas Indonesia Research Funding 2019. The funder had no role in study design, data collection, and data analysis, decision to publish, or manuscript preparation.
ACKNOWLEDGMENTS
The Child and Adolescent Psychiatry Division, Department of Psychiatry, Faculty of Medicine Universitas Indonesia - Dr. Cipto Mangunkusumo General Hospital, Jakarta, Indonesia, began a research project on an ADHD-VR digital game diagnostic tool prototype for children using a machine learning (DL) model in 2019 and followed the proposed four-step method in diagnostic tool development and the accuracy study. We would like to thank the Faculty of Computer Science, Bina Nusantara University, Jakarta, for the collaboration. | 2020-08-18T13:14:19.469Z | 2020-08-17T00:00:00.000 | {
"year": 2020,
"sha1": "b1fbf81251e91f8c88c3e9ceea7f29ddda4d5f09",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2020.00829/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b1fbf81251e91f8c88c3e9ceea7f29ddda4d5f09",
"s2fieldsofstudy": [
"Computer Science",
"Psychology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
234069166 | pes2o/s2orc | v3-fos-license | Seg2pix: Few Shot Training Line Art Colorization with Segmented Image Data
There are various challenging issues in automating line art colorization. In this paper, we propose a GAN approach incorporating semantic segmentation image data. Our GAN-based method, named Seg2pix, can automatically generate high quality colorized images, aiming at computerizing one of the most tedious and repetitive jobs performed by coloring workers in the webtoon industry. The network structure of Seg2pix is mostly a modification of the architecture of Pix2pix, which is a convolution-based generative adversarial network for image-to-image translation. Through this method, we can generate high quality colorized images of a particular character with only a few training data. Seg2pix is designed to reproduce a segmented image, which becomes the suggestion data for line art colorization. The segmented image is automatically generated through a generative network with a line art image and a segmentation ground truth. In the next step, this generative network creates a colorized image from the line art and segmented image, which is generated from the former step of the generative network. To summarize, only one line art image is required for testing the generative model, and an original colorized image and segmented image are additionally required as the ground truth for training the model. These generations of the segmented image and colorized image proceed by an end-to-end method sharing the same loss functions. By using this method, we produce better qualitative results for automatic colorization of a particular character's line art. This improvement can also be measured by quantitative results with Learned Perceptual Image Patch Similarity (LPIPS) comparison. We believe this may help artists exercise their creative expertise mainly in the area where computerization is not yet capable.
Introduction
Line art colorization is an expensive and labor-intensive process especially in the animation and comics industry due to the repetitive tasks of the job. In the webtoon (web-publishing comics) industry, such an iterative work of colorizing many line art drawings makes it prone to coloring mistakes, as well as exhausting much cost and time.
To mitigate this problem, automating the procedure of coloring should be considered to reduce the amount of actual coloring labor. Recently, many challenges and improvements for automating line art colorization have appeared, and most of the studies have been based on Generative Adversarial Networks (GANs) [1]. The GAN in the past only generated random images based on the training dataset, which cannot be processed based on the user's desired sketch input. However, Pix2pix [2], a modified method of GANs, which can generate images based on an input vector, has motivated research into automatic line art colorization. For example, the methods of [3][4][5] give user-guided suggestions about the desired data for colors and locations.
The two methods of [6,7] provided the desired style for colorization using the data of reference images. Even though these methods generate a fully colorized image for the input line art image, their quality and details are not sufficient for application in the webtoon industry. For example, as illustrated in Figure 1, a currently available technique, here Style2paints [7], produces unmatching color tones and blurring in the generated images compared to the original designs. Of course, we could have obtained a better colorization, but hyper-parameter tuning and training of the model were barely possible in a reasonable testing time.
Figure 1. Original images used in a webtoon and a set of generated images by Style2paints, a line art colorization network [7].
Practically, it is a minimum requirement that the generated figures have almost an exact color match to their original character designs, if they are to be used in commercial illustration. This is not limited to the webtoon industry. Figure 2 shows examples of coloring mistakes in commercial animation; notice that the dress color changes instantly in the animation, which should not have occurred. Our goal in this paper is to remove such colorization mistakes in the mass production of webtoons or animations and to reduce the amount of time spent on the whole process of line art coloring. To accomplish this goal, we had to find a method to colorize a particular character with high-quality color matches, rather than colorizing random characters with hardly matching colors.
Figure 2. The same character in a sequence is mis-colored from bluish clothes to yellow clothes. This kind of mistake can be easily found in TV series animations due to the lack of time and labor [8].
Observing the colorization results from previous methods, we see two main features to be improved for colorization in the webtoon production pipeline. First of all, the coloring should be confined accurately by the boundaries given in the line art input. This allows artists to seamlessly add highlights and details when the automatically colorized output needs a few more of them. Second, the generative colorization model should be able to learn a particular character's pattern from only a small amount of input data. The number of original images created by the chief artist is not large, usually about 10 or fewer. Furthermore, needing only a few images makes it cost-effective to build multiple colorization models, one for each character.
To achieve our goal, we propose a new GAN-based neural network model, Seg2pix. The model generates a segmented image to be used in generating a high-quality colorized image. The segmented image provides data that clearly distinguish the parts of the character according to the borderlines of the line art. The segmented image shown in Figure 3 is an example of the first generated result of our Seg2pix model. The parts of the character are represented by labels but illustrated using pseudo-color: the hair segment is in green, and the two eye segments are labeled in red. For Seg2pix to learn this functionality, we prepare segmentation-labeled image data for each of the original character creations to be used as training targets. The labeling was done through simple colorization of the originals. This job is easy for anyone, even those who are not trained in illustration.
Figure 3. Seg2pix generates the output from two concatenations of image data. The first step generates the "segmented image" from the line art and trap ball segment input; the second step generates the "colorized image" from the line art and segmented image. This generation works in an end-to-end manner updating the same loss function (sample image from ©Bushiroad Co., Ltd.) [9].
Then, Seg2pix receives two kinds of ground truth images for automatic colorization training: one is the input line art drawing, and the other is the trap ball segmented image. First, the model generates fully segmented image data by concatenating the line art input and trap ball segmented image. Second, Seg2pix generates the final colorized image by concatenating the line art input and fully segmented image, which was achieved by the first generation.
Our two-step GAN model is found to be effective at minimizing the blurring near the complex borderlines of the character drawing in the line art image and at avoiding mis-coloring of the labeled parts, such as blobs along the borderlines of the skin and hair, which will be shown later with the results of the experiments. We overcame the problem of having a small amount of data through a data augmentation specialized to our environment. In the data preparation for neural network learning, all the images underwent rotation and translation transformations. To produce as many line art drawings as possible, we applied basic edge filters like Sobel [10] and Canny [11], followed by morphological operations, as well as the open-source neural network model called SketchKeras [12].
GAN for Colorization
The GAN approach offers very efficient performance in image generation tasks compared to supervised methods of image generation. Early supervised methods for coloring [13,14] propagate stroke colors with low-level similarity metrics. GAN research related to line art colorization, such as [3,6], utilized user-guided color stroke maps. The colorization method of [7] adopts StyleGAN [15] and extends the refinement stages to improve the quality of the output. While previous methods used color suggestion map data, Tag2pix [16] takes tag data written as text, such as "blue shirt" or "blonde hair", for coloring line art drawings. Colorization methods like [17] propose two-stage methods with pixel parsing to overcome the lack of pixel information in line art drawings, which is a very similar approach to our Seg2pix. However, we observe that these methods scarcely obtain the same color matches as the original image, while we want to achieve standardized patterns with an exact color match to the original image. Since these methods aim their colorization at random line art characters with unpredictable quality, we focus only on a particular line art character for the industrial application of webtoon series and animation.
Pix2pix
Pix2pix [2] is a convolution-based generative adversarial network for image-to-image translation. Through this method's architecture, we designed the neural network model for high-quality colorization. Unlike other previous GANs, Pix2pix generates a designated style of image data related to the input image vector. This means that the network model can generate a colorized character image with the same pose as the input sketch image, while previous GANs could only generate a colorized random character image with no input vector.
Sketch Parsing
To generate an accurate segmented image for our Seg2pix, we searched for methods of sketch parsing. Since sketch parsing is one of the important topics in the multimedia community, several studies have contributed to the semantic parsing of freehand sketches [18,19]. Various methods of sketch parsing have been reported, such as those focusing on sketch images [20][21][22] and 3D retrieval [23][24][25]. In this paper, we perform sketch parsing using a combination of algorithms: trap ball segmentation and the Seg2pix network. Since the introduction of neural network semantic segmentation methods, including FCN [26], intensive study has been done in the computer vision community, consistently showing improved results [27][28][29]. Furthermore, for particular objectives such as semantic lesion segmentation for medical use, References [30,31] showed improved methods with GAN-based knowledge adaptation. However, we adopt trap ball segmentation, a supervised contour-detecting line-filling algorithm, together with the Seg2pix network, which we also use for colorization.
Webtoon Dataset for Seg2pix
Since our main goal is to develop an automatic colorization method for the webtoon industry, we made a dataset from two different webtoon series: "Yumi's cell" [32] and "Fantasy sister" [33] shown on Naver Webtoon. We chose four different characters from the webtoons to test the proper performance of Seg2pix.
Capturing the image from various episodes of the comics, we chose the target character for each webtoon title. The captured images were cropped to the character's shape with a whitened background. As shown in Figure 4, the cropped images were labeled as mask images into four classes: hair, eyes, skin, and clothes. Character A from "Yumi's cell" has been published over three years, and there have been changes in the character's hairstyle, as shown in Figure 4. Nonetheless, our method is found to admit a few minor changes to the character shapes such as hairstyle and emotions. Test data were obtained from different episodes against the training data, which were for the same character.
Line art extraction: To make the line art from a colored image for supervised learning, we used two methods to vary the style of line art. First, we used the Canny edge detector [11], which is the most traditional and well-known method to extract edge pixels, and we additionally thickened the lines and edges obtained from it. As the main sketch data, we used the SketchKeras [12] network, which specializes in line art extraction in the style of hand drawing.
Data augmentation: Assuming the actual use of our method in comics and animation works, we used only 10 images of one character for colorization training. The 10-image dataset was then expanded to 120 images by varying the line art style and applying random rotations (with the rotation parameter between −30 and +30 degrees). Therefore, we prepared 120 images for each character to train the model. The number of character datasets was four, as shown in Figure 4.
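The sketch below illustrates the two augmentation ideas described above, classical edge-based line art extraction and random rotation within −30 to +30 degrees, using OpenCV. It is not the authors' exact pipeline (which also relies on SketchKeras and line art style variation), and the file names, Canny thresholds, and dilation kernel are assumptions.

```python
# Minimal augmentation sketch (assumed parameters, not the authors' exact pipeline).
import cv2
import numpy as np

def to_line_art(color_img: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                      # hypothetical thresholds
    edges = cv2.dilate(edges, np.ones((2, 2), np.uint8))   # thicken the lines
    return 255 - edges                                     # black lines on white

def random_rotate(img: np.ndarray, max_deg: float = 30.0) -> np.ndarray:
    h, w = img.shape[:2]
    angle = np.random.uniform(-max_deg, max_deg)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h), borderValue=(255, 255, 255))

original = cv2.imread("character_0001.png")               # hypothetical training image
for i in range(12):                                        # 10 originals -> ~120 samples
    aug = random_rotate(original)
    cv2.imwrite(f"aug_{i}_color.png", aug)
    cv2.imwrite(f"aug_{i}_line.png", to_line_art(aug))
```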
Workflow Overview and Details
As shown in Figure 5, training proceeded twice, generating the segmented image and then the colorized image. The generated segmented image became the input to the Seg2pix network together with the line art, which then produced the final colorized image.
Figure 5. The architecture of the generator and discriminator network. Two types of ground truth are required: one is the hand-labeled segmented image, and the other is the colorized original image. Seg2pix generates both desired outputs by a two-step training using the same network architecture, and this training is accomplished in an end-to-end manner.
Trap Ball Segmentation
To obtain high-quality segmented image data by Seg2pix, we adopted a trap ball segmentation [34] for data preprocessing. This is a traditional style image analysis method for segmenting a line art image based on contours by computing several small connected structures. Flood filling, which becomes the basic method of making a trap ball segmented image, is described in Algorithm 1. More detailed code with the full iteration and optimization can be seen in [35]. This pseudo-segmentation is introduced to help Seg2pix produce a better segmented mask.
Translating raw line art images to trap ball segmented data prevents Seg2pix from making mislabeled pixels on white or widely empty areas such as the forehead of a line art character image. Furthermore, this step allows the network to recognize detailed areas with better quality, such as the eyes and hair of the character, as shown in Figure 6. For generating the segmented image that is used for colorizing the line art, we trained the Seg2pix model with the hand-labeled segmented image as the ground truth. The input data for this training session were the concatenated data of the line art and trap ball segmented image, and the training pipeline is shown in Figure 3.
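A minimal flood-fill sketch in the spirit of Algorithm 1 is given below. It is not the optimized trap ball implementation of [35]; the binarization threshold, 4-connectivity, and the integer label map are our assumptions.

```python
# Flood-fill sketch: every unlabeled non-line pixel seeds a fill; the filled region
# ("pass1" in Algorithm 1) becomes one pseudo-segment of the line art.
import cv2
import numpy as np

def flood_fill_segments(line_art: np.ndarray, line_threshold: int = 127) -> np.ndarray:
    """line_art: grayscale, dark lines on white. Returns an integer label map."""
    binary = (line_art > line_threshold).astype(np.uint8)     # 1 = fillable, 0 = line
    labels = np.zeros(binary.shape, dtype=np.int32)
    mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
    flags = 4 | (1 << 8) | cv2.FLOODFILL_MASK_ONLY            # 4-connected, write 1 to mask
    next_label = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x] != 0:
            continue                                           # already segmented
        next_label += 1
        mask[:] = 0
        cv2.floodFill(binary, mask, (int(x), int(y)), 1,
                      loDiff=0, upDiff=0, flags=flags)
        labels[mask[1:-1, 1:-1] == 1] = next_label             # store this segment
    return labels
```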
Segmentation
The network used to make the segmented image was the same as the colorizing architecture, and it was trained using the hand-labeled segmented image as the ground truth, as shown in Figure 4. The input for this network was the concatenated image data of the line art image and the trap ball segmented image. While training, image transformations such as random rotation and random resizing were applied as the data augmentation procedure. (Algorithm 1 ends by returning pass1, which becomes the data of a single segmented area.)
Colorize
After the segmented image has been obtained as described in the previous Section 4.1, Segmentation, we concatenate it with the line art image as the input for the next training step. As shown in Figure 5, the Seg2pix network has an encoder-decoder structure [36]. The rectangles in the figure represent convolution blocks and activation layers that transform the concatenated input data into feature maps. The number of convolution channels grows by 64 per layer, and this encoding proceeds until the third layer of the encoder. Then, the features are progressively scaled up in dimension, and U-Net [29] style sub-networks concatenate encoder features to the decoder part of the network. Here, the skip connection is necessary: transporting information directly across the network to the generator circumvents the bottleneck and mitigates the information loss caused by the downsampling layers of the encoder-decoder network.
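The toy PyTorch module below illustrates only the skip connection just described: an encoder feature map is concatenated with the corresponding decoder feature map so that information bypasses the bottleneck. The channel counts, depth, and the assumed four input channels (line art plus segmentation) are not the authors' exact architecture.

```python
# Toy U-Net-style generator sketch (assumed sizes), showing the skip connection.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=4, out_ch=3, base=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                  nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1),
                                  nn.BatchNorm2d(base), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)                              # H/2 resolution
        e2 = self.enc2(e1)                             # H/4 (toy bottleneck)
        d2 = self.dec2(e2)                             # back to H/2
        return self.dec1(torch.cat([d2, e1], dim=1))   # concat = skip connection
```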
For the discriminator network, we used the PatchGAN style based on the Markovian discriminator [37]. This discriminator is structured by convolution layers, batch normalization, and leaky ReLU, similar to the encoder-decoder network. While past GAN discriminators tried to classify fake or real by looking at the entire image, the Markovian discriminator looks at patches of a particular size and the correlations with neighboring patches. This change decreases the number of parameters of the discriminator network, which results in faster computing times, and it keeps the discriminator network lightweight.
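For comparison, a compact sketch of a PatchGAN-style Markovian discriminator is shown below; it returns a grid of patch-level logits rather than a single real/fake score. The three-layer depth and the assumed seven input channels (condition plus image) are illustrative choices.

```python
# PatchGAN-style discriminator sketch (assumed sizes): one logit per local patch.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=7, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, 1, 4, 1, 1),           # 1-channel map of patch logits
        )

    def forward(self, condition, image):
        # the discriminator judges the image together with its conditioning input
        return self.net(torch.cat([condition, image], dim=1))
```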
Attention Layer
Unlike Pix2pix [2], we implemented an attention layer before the ResNet block of the generator. The design of the attention layer was based on the attention module of SAGAN [38], as shown in Figures 5 and 7. The attention layer of the generative model allows attention-driven, long-range dependency modeling for image generation tasks. The input image feature is passed through 1 × 1 convolutions to produce three attention matrices: query, key, and value. The query and key produce the attention map by the dot product of the two matrices. This attention map becomes the self-attention feature map by the dot product with the value matrix. As shown in Figure 8, using this attention feature map allows the network to refine details using information from all small feature locations. Due to network complexity concerns, the attention layer is implemented only in the generator network of Seg2pix.
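The module below sketches a SAGAN-style self-attention layer matching this description: 1 × 1 convolutions produce query, key, and value, their dot products form an N × N attention map, and the result is added back to the input. The channel-reduction factor of 8 and the learnable blending weight follow the original SAGAN formulation and are assumptions with respect to Seg2pix.

```python
# SAGAN-style self-attention sketch (assumed reduction factor of 8).
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // 8, 1)
        self.key = nn.Conv2d(ch, ch // 8, 1)
        self.value = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))       # learnable blending weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # B x N x C'
        k = self.key(x).view(b, -1, h * w)                      # B x C' x N
        attn = torch.softmax(torch.bmm(q, k), dim=-1)           # B x N x N attention map
        v = self.value(x).view(b, -1, h * w)                    # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                             # residual self-attention
```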
Loss
The loss of Seg2pix is briefly given in Equation (3); it is the same as that of Pix2pix [2] and is based on the conditional GAN. The loss of the conditional GAN is written as Equation (1):

$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))]$ (1)

To encourage less blurring, we use the $L_1$ distance:

$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}[\lVert y - G(x, z) \rVert_1]$ (2)

The final objective becomes:

$G^{*} = \arg\min_{G}\max_{D} \mathcal{L}_{cGAN}(G, D) + \lambda \mathcal{L}_{L1}(G)$ (3)

Here, x represents the input data, which are the concatenated data of the line art and the trap ball segmentation, and y represents the real colorized data of the ground truth. z is a random noise vector that prevents the generator from producing deterministic outputs. G and D denote the generator and the discriminator. The generator tries to minimize this objective against an adversarial discriminator, which tries to maximize it. In the final objective, the $L_1$ distance loss trains the image's low-frequency content, while the adversarial loss trains the high-frequency content. This makes the image sharper and avoids blurry features in the image.
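For illustration, the objective in Eqs. (1)-(3) might be implemented in PyTorch roughly as below. The binary cross-entropy criterion on PatchGAN logits and the weight λ = 100 are assumptions; the paper's exact λ is not stated in this excerpt.

```python
# Sketch of the combined conditional-GAN + L1 objective (assumed lambda).
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # adversarial criterion on patch logits
l1 = nn.L1Loss()

def generator_loss(d_fake_logits, fake_img, real_img, lam=100.0):
    adv = bce(d_fake_logits, torch.ones_like(d_fake_logits))    # try to fool D
    return adv + lam * l1(fake_img, real_img)                    # Eq. (3)

def discriminator_loss(d_real_logits, d_fake_logits):
    real = bce(d_real_logits, torch.ones_like(d_real_logits))
    fake = bce(d_fake_logits, torch.zeros_like(d_fake_logits))
    return 0.5 * (real + fake)                                   # D maximizes Eq. (1)
```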
Experimental Settings (Parameters)
To implement this model, the Pytorch framework [39] was used. The training was performed with an NVIDIA GTX 2080ti GPU.
The networks were trained with the collected dataset. We separated the training data by character into Datasets A, B, C, and D, so that we could build an individual network for a particular character. We randomly flipped the images horizontally for training augmentation, as described in Section 3. Specific parameter values for the model are shown in Table 1.
For optimization, the ADAM [40] solver was used with minibatches. The initial learning rate was 0.0002 for both the generator and the discriminator.
Comparisons
Since the goal of our network is to colorize a particular character with few training data (10 images), there is no exact way to compare the result with previous colorization networks, which were trained with over 10,000 images with the aim of colorizing random characters. For reference, we adopted Learned Perceptual Image Patch Similarity (LPIPS) [41] to obtain quantitative comparison results. LPIPS compares generated images by extracting convolutional features rather than by matching color distributions. This method can detect intra-class mode dropping and can measure diversity in the quality of the generated samples, which makes it a well-established comparison for GAN-generated images. Even though the Frechet Inception Distance (FID) [42] is the best known method for generated image comparison, it requires more than 5000 test images to give confident results, so it is not appropriate for our few-shot line art colorization experiment. In this experiment, we cropped only the character from each image, excluding the background, to make a fair comparison of performance.
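As an illustration, an LPIPS comparison between a generated image and its original could look like the sketch below; the lpips Python package and the AlexNet backbone are assumptions, since the implementation used in the paper is not stated.

```python
# LPIPS comparison sketch (assumed lpips package with AlexNet backbone).
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')   # lower score = perceptually more similar

def lpips_score(generated: torch.Tensor, original: torch.Tensor) -> float:
    # both tensors: shape (1, 3, H, W), values scaled to [-1, 1]
    with torch.no_grad():
        return loss_fn(generated, original).item()
```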
As given in Table 2, the comparison of the LPIPS scores shows a huge gap between Seg2pix and the previous networks, except for Pix2pix (trained with the same dataset as Seg2pix). Furthermore, we performed an ablation study by removing the attention layer and the trap ball segmentation method from Seg2pix (the same two sketch images were used as the input when the trap ball method was removed). Since our Seg2pix model was trained on a particular character to automate colorization, it obviously showed a good score compared to the previous colorization methods except Pix2pix. However, Seg2pix still had a better score than Pix2pix. We trained the Pix2pix network with the same ground truth images used in our Seg2pix training, and the results are visualized in Figure 9. Compared with Pix2pix, we may also observe obvious differences in the color distribution and borderlines, with less color bleeding, in the qualitative comparison.
Analysis of the Results
As shown in Figures 9-11, the raw generated image from the original image shows the correct color distribution and shading, as we expected to obtain with only a few training data. All these outputs were accomplished with one Seg2pix network. Generating segmented image data and colorized output was performed gradually by the two-step training.
Conclusions
In this paper, we propose an automatic colorization network, which produces high-quality line art colorization with only a few training images. We applied data preprocessing steps before training to achieve high-quality segmentation images from the generator.
The generated segmented image becomes the suggestion map for auto-colorization, and the generation step runs again for coloring; this is the step that differs most from the Pix2pix method. We showed its performance through experiments with sets of character image data obtained from well-known webtoon series. The experimental results showed the potential of the method to reduce the amount of iterative work in practice and to support artists in focusing on the more creative aspects of the webtoon creation pipeline. As an idea for real use scenarios, we can colorize 100 sketch images of the same character by coloring and labeling only 10 or up to 20 sketch images with this Seg2pix method. Especially in the webtoon and animation industries, coloring thousands of sketch images of the same character is a repetitive job, so this method is very well suited to these industries. For future research, colorizing a random character's line art with a single trained model will be required to achieve application quality in the industry, whereas our method focuses only on a particular character's line art. To accomplish this future objective, we need to collect more varied data of character line art images and hand-labeled segmented images by utilizing our Seg2pix network in industrial applications. Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Restrictions apply to the availability of these data. Data were obtained from Naver Webtoon services and are available with the permission of the Naver Webtoon company and the original authors. | 2021-05-10T00:02:57.780Z | 2021-02-05T00:00:00.000 | {
"year": 2021,
"sha1": "1d972f2474ffbf4151d4195009b4ed6171db6e28",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/4/1464/pdf?version=1612536794",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "42748a1bddde4ee2dfbfa16dadabb6a9fa97c4b4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
15151067 | pes2o/s2orc | v3-fos-license | A Novel Biological Approach to Treat Chondromalacia Patellae
Mesenchymal stem cells from several sources (bone marrow, synovial tissue, cord blood, and adipose tissue) can differentiate into variable parts (bones, cartilage, muscle, and adipose tissue), representing a promising new therapy in regenerative medicine. In animal models, mesenchymal stem cells have been used successfully to regenerate cartilage and bones. However, there have been no follow-up studies on humans treated with adipose-tissue-derived stem cells (ADSCs) for the chondromalacia patellae. To obtain ADSCs, lipoaspirates were obtained from lower abdominal subcutaneous adipose tissue. The stromal vascular fraction was separated from the lipoaspirates by centrifugation after treatment with collagenase. The stem-cell-containing stromal vascular fraction was mixed with calcium chloride-activated platelet rich plasma and hyaluronic acid, and this ADSCs mixture was then injected under ultrasonic guidance into the retro-patellar joints of all three patients. Patients were subjected to pre- and post-treatment magnetic resonance imaging (MRI) scans. Pre- and post-treatment subjective pain scores and physical therapy assessments measured clinical changes. One month after the injection of autologous ADSCs, each patient's pain improved 50–70%. Three months after the treatment, the patients' pain improved 80–90%. The pain improvement persisted over 1 year, confirmed by telephone follow ups. Also, all three patients did not report any serious side effects. The repeated magnetic resonance imaging scans at three months showed improvement of the damaged tissues (softened cartilages) on the patellae-femoral joints. In patients with chondromalacia patellae who have continuous anterior knee pain, percutaneous injection of autologous ADSCs may play an important role in the restoration of the damaged tissues (softened cartilages). Thus, ADSCs treatment presents a glimpse of a new promising, effective, safe, and non-surgical method of treatment for chondromalacia patellae.
Introduction
Chondromalacia patellae (CMP), defined as cartilaginous softening and fibrillation of patellar bone cartilage, is one of the possible causes of patellofemoral pain syndrome (PFPS) [1]. PFPS is characterized by anterior knee pain (AKP) and accounts for 10-25% of all visits seen in physical therapy clinics [1]. CMP can be documented by magnetic resonance imaging (MRI) scans [2]. Currently, there is no definite cure for cartilaginous softening (e.g., CMP), which presents a major therapeutic challenge. However, a few recent studies have shown the possibility of cartilage recovery using mesenchymal stem cells (MSCs) [3,4]. Further, since June 2009, the Korean Food and Drug Administration has allowed medical uses of non-expanded stem cells processed in a medical facility [5]. In this report, we describe the first successful approach to a dramatic reduction of AKP in CMP by percutaneous injection of autologous adipose-tissue-derived stem cells (ADSCs: one kind of MSCs) along with platelet-rich plasma (PRP), 0.5% hyaluronic acid, and 3% CaCl 2 ; an ADSCs mixture.
Materials and Methods
This is a retrospective case series with the primary endpoint being safety and efficacy (pain improvement at month 1 and 3 along with MRI evidence of probable cartilage regeneration after 3 months) of ADSCs mixture-based treatment.
Inclusion and exclusion criteria, and outcome endpoints
The inclusion criteria, exclusion criteria, and outcome endpoints are listed as follows. Inclusion criteria: (i) MRI evidence of chondromalacia patellae; (ii) either male or female; (iii) under 60 years of age; (iv) an unwillingness to proceed with surgical intervention; (v) the failure of conservative management; and (vi) ongoing disabling pain. Exclusion criteria: (i) active inflammatory or connective tissue disease thought to impact pain condition (i.e., lupus, rheumatoid arthritis, fibromyalgia); (ii) active endocrine disorder that might impact pain condition (i.e., hypothyroidism, diabetes); (iii) active neurological disorder that might impact pain condition (i.e., peripheral neuropathy, multiple sclerosis); and (iv) active pulmonary disease requiring medication usage. Outcome endpoints (obtained at three months after treatment): (i) pre-and post-treatment VAS (visual analog scale) walking index; (ii) pre-and post-treatment Functional Rating Index; (iii) pre-and posttreatment MRI; and (iv) telephone questionnaires every six months: at 6, 12, and 18 months.
Medication restrictions
Patients were restricted from taking steroids, aspirin, nonsteroidal anti-inflammatory drugs (NSAIDs), and Asian herbal medications for one week prior to the procedure.
Liposuction
In the operating room, approximately 40 mL of packed adipose tissue were obtained by liposuction of the subcutaneous layer of the lower abdominal area using sterile techniques [1,8].
ADSCs mixture-based treatment
After the left knee was cleaned with 5% povidone-iodine (Choongwae Pharmaceutical Co., Seoul, Korea) and draped in a sterile fashion, the injection site was anesthetized with 2% lidocaine (Daehan Pharmaceutical Co., Gyeonggido, Korea). The ADSCs mixture (15 mL) was injected into the retro-patellar joint on the day of liposuction with a 20-gauge, 1 1/2-inch needle under ultrasonic guidance. On the third and seventh day after the initial injection, another dose of PRP with CaCl 2 and hyaluronic acid (1 mL) was injected in the same fashion as on the first day. On the fourteenth day after the initial injection, a low dose (254.8 nmol/L) of dexamethasone (Huons, Chungbuk, Korea) was added to PRP with CaCl 2 . On day 28, the last dose of PRP with CaCl 2 was injected.
Patient characteristics
According to Korean law (Rules and Regulations of the Korean Food and Drug Administration), this study does not need approval by ethical and scientific committees [5].
Further, this study was approved and classified as exempt by the Solutions Institutional Review Board (Approval No. 1301260). After obtaining written informed consents from all the participants, Jaewoo Pak, M.D. made the clinical decisions on treatments.
The patient #1 is a 43-year-old Korean female with more than a one-year history of right AKP. The patient started having AKP without any history of trauma. She was seen by a physician and was diagnosed with chondromalacia of the knee after MRI imaging studies. After taking NSAIDs for a few weeks along with physical therapy (PT), the AKP improved. However, approximately 6 months prior to our clinic visit, the patient started having AKP again. The pain was worse when standing up, walking, or exercising, but improved with rest. The pain was not much relieved with NSAIDs and PT this time. At the time of the initial evaluation, the patient reported mild pain (VAS score: 2) at rest and increased pain when walking (VAS walking index [VWI]: 7). The FRI was 17 (Fig. 1A). The physical examination was non-remarkable except for mild tenderness at the medial retro-patellar region. The Q-angle was 17. MRI scans showed retro-patellar signal changes consistent with CMP (Fig. 2A, C, and E).
The patient #2 is a 54-year-old Korean male with a 3- to 4-year history of right AKP. The patient started having AKP without any history of trauma except for vigorous use of the knees, such as mountain hiking. He was seen by a physician and was diagnosed with CMP of the knee after MRI imaging studies. The patient took NSAIDs along with PT, without much improvement. At the time of the initial evaluation in our clinic, the patient reported mild AKP (VAS score: 2) at rest and increased pain when walking (VWI: 7). The FRI was 15 (Fig. 1B). The physical examination was non-remarkable except for mild tenderness at the medial retro-patellar region. The Q-angle was 14. MRI scans showed retro-patellar signal changes consistent with CMP (Fig. 3A, C, and E).
The patient #3 is a 63-year-old Korean female with a more than 5-year history of right AKP. With the diagnosis of osteoarthritis, she had received one dose of a steroid injection and multiple doses of PRP and hyaluronic acid injections over the last few years. However, she did not experience any improvement of the AKP. The patient was seen by an orthopedic surgeon and was offered a total knee replacement (TKR). She was reluctant to go through TKR due to possible side effects. Since then, the patient had been receiving only PT without much improvement. At the time of the initial evaluation in our clinic, the patient reported moderate AKP (VAS score: 5) at rest. AKP increased when walking (VWI: 8). The patient also complained of mild knee swelling. The FRI was 24 (Fig. 1C). On physical examination, there was mild medial joint tenderness with flexion and retro-patellar tenderness. Apley's and McMurray's tests were negative, and there was no ligament laxity. The Q-angle was 17. MRI scans showed retro-patellar signal changes consistent with CMP along with medial meniscal maceration and cartilage thinning consistent with osteoarthritis (Fig. 4A, C, and E).
The follow-up disease surveillance questionnaires All patients were followed up with telephone questionnaires every six months: at 6, 12, and 18 months. Each time, patients were asked the following questions: (i) did you experience the pain improvement due to the procedure? If no, please comment; (ii) did you experience any complications (i.e., infection, illness, etc.) you believe may be due to the procedure? If yes, please explain; and (iii) have you been diagnosed with any form of cancer since the procedure? If yes, please explain.
Results
Two young patients (e.g., < 55 years), with a diagnosis of AKP only, had failed conservative treatments comprising NSAIDs and PT. Their MRI scans of the right knees showed retro-patellar signal changes consistent with CMP. Another patient, an elderly female with AKP and osteoarthritis, who had failed conservative treatments consisting of multiple doses of PRP, hyaluronic acid, and/or steroid injections, was reluctant to go through a TKR due to possible side effects. The MRI scan showed retro-patellar signal changes consistent with CMP along with medial meniscal maceration and cartilage thinning consistent with osteoarthritis. Being aware that there is no non-invasive therapy for the cure of CMP, the three patients wanted to try autologous, non-cultured ADSCs mixture-based treatments for the restoration of CMP. Since June 2009, autologous, non-cultured ADSCs can be used as a source of MSCs in Korea [5]. At that time, we obtained the informed consents from the patients.
After obtaining ADSCs by liposuction and preparing PRP, the ADSCs mixture was injected under ultrasonic guidance into the retro-patellar joints of all three patients on the day of liposuction. On the third, seventh, 14th, and 28th day after the initial injection, a different variation of PRP and CaCl 2 with (or without) hyaluronic acid or a very low dose of dexamethasone (254.8 nmol/l) were injected in the same manner as that done in the first day. One month after the ADSCs mixture injection, each patient's pain improved 50-70% (Fig. 1A, B, and C). Three months after the treatment, the patients' pain improved 80-90% (Fig. 1A, B, and C). The repeated MRI scans at three months showed almost complete restoration of the damaged tissues (softened cartilages) on the patellae-femoral joints (panels B, D, and F of Figs. 2-4). This is the first human report of a successful AKP reduction attributed to probable cartilage restoration by using percutaneous injections of the ADSCs mixture. As no non-surgical therapy is available for the cure of CMP, this study may significantly improve the current treatment strategy for young and old patients suffering from CMP with (or without) other joint diseases.
All three patients did not report any serious side effects. All patients experienced 2-3 days of joint discomfort after the first injection of ADSCs mixture. The discomfort was attributed to the volume expansion after the injection and completely disappeared after 2-3 days. Since ADSCs and PRP are autologous in nature, no rejection was expected and none occurred.
Discussion
For diagnosis, MRIs of the right knees were performed on the three patients before the ADSCs-mixture treatment. Subsequently, post-treatment MRIs were repeated to compare pre- and post-treatment images. The MRI T2 sequence was used for its ability to differentiate the bony anatomy from the cartilage. Due to slight differences in patient positioning and slight movement of the patients during the MRI procedures, there was some difficulty in capturing the exact treatment locations. However, the pre- and post-treatment MRI results can be compared with sequential views to compensate for any possible errors [4].
In this study, significant MRI signal changes were apparent in the T2 views of the right knees along the retro-patellar joint injection sites. These significant signal changes can be interpreted as possible signs of persisting and restored probable cartilage. Due to the dramatic reduction in AKP, all three patients were reluctant to undergo knee cartilage biopsies to determine the true nature of the cartilage-like tissues. Although the true nature of the restored tissues is unclear, the damaged tissues (probable cartilages) are believed to have been restored, based on previous studies showing the same tissue recovery using MSCs in patients with osteonecrosis, osteoarthritis, and meniscal injury [3,4,8]. The dramatic reduction in AKP and the significant MRI signal changes without adverse effects support this notion.
With regard to the mechanism of the restoration, there are a few plausible possibilities: (i) direct differentiation of the stem cells (e.g., ADSCs) [10][11][12][13][14][15][16][17][18]; (ii) trophic and paracrine effects of ADSCs on the existing cartilage [19]; (iii) effects of growth factors contained in PRP [20]; (iv) extracellular matrix production by chondrocytes stimulated by low physiologic doses of dexamethasone [21]; or (v) a combination of all of the above-mentioned possibilities. Based on previous reports [3,4,8] and this study, ADSCs might play an important role in the restoration of the damaged tissues (softened cartilages). To our knowledge, this is the first human report of a successful restoration of CMP by using the new ADSCs mixture-based regimen.
Although the patients' symptoms and signs have improved, it is worthwhile to note that the improvement appeared gradually over three months. It can be postulated that a three-month period is necessary for ADSCs to restore the softened cartilages.
It has been estimated that approximately 400,000 ADSCs are contained in 1 mL of adipose tissue [22]. Since 40 mL of centrifuged adipose tissue were harvested, it is believed that approximately 16,000,000 stem cells were extracted and injected into the knee joints. These three clinical reports do not constitute a randomized controlled trial; the key clinical feature of this study is the application of a potentially effective and non-invasive treatment in patients with continuous AKP and without a history of trauma. Therefore, there is no placebo (ADSCs-free) group. However, patient 3 had received PRP and hyaluronic acid injections without ADSCs over the last few years, and she did not experience any improvement of AKP. She then experienced a dramatic reduction of AKP in CMP after the percutaneous injection of the ADSCs mixture. This clinical study was in compliance with the Declaration of Helsinki and the regulation guidelines of the KFDA (Korean Food and Drug Administration).
In conclusion, CMP of the knee is managed conservatively with NSAIDs and PT. When such conservative treatment fails, there is no definite cure for CMP, which presents a major therapeutic challenge. However, this pilot study presents an alternative and safe way through which physicians may be able to manage CMP by percutaneous injections of an autologous ADSCs mixture without culture expansion. Further studies with a larger number of patients with CMP are needed to confirm the short- and long-term efficacy and safety of ADSCs mixture-based treatment of CMP that is resistant to conventional therapies. | 2017-06-05T05:39:49.656Z | 2013-05-20T00:00:00.000 | {
"year": 2013,
"sha1": "358aee678456a88d74a302817d33f732b72085ca",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0064569&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7227ff147968c7bca4acc5cf77e3f14b1a8d50c1",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235529690 | pes2o/s2orc | v3-fos-license | Optimization of Slag Mobility of Biomass Fuels in a Pilot-scale Entrained-Flow Gasifier
The bioliq® process, developed at the Karlsruhe Institute for Technology, aims at the production of synthetic fuels and chemicals from biomass. The bioliq® technology is based on a two-step process with decentral pyrolysis for the production of a transportable slurry from residual biomass and the central entrained-flow gasification of the slurry by using biomass-to-liquid technology. This study is focused on the slag, which is formed by melting the inorganic ash components during gasification. To operate the gasifier smoothly, a range of desired viscosity has to be defined. A structure-based viscosity model was used to predict the viscosity of the slags at the gasifier outlet. A good agreement between experimental and calculated viscosities is achieved for fully liquid slag systems.
Introduction
The synthesis of biofuels by Biomass-to-Liquid (BtL) technologies displays a CO 2 -neutral alternative to liquid hydrocarbon fuels, which are derived from fossil energy carriers. The bioliq concept as an example for BtL technologies provides the opportunity to achieve the conversion of low-grade biogenic feedstocks to high-grade synthetic fuels. Usable are dry biogenic feedstocks like straw from agriculture or wood residues. Due to low volumetric energy densities, the feedstocks are pretreated by fast pyrolysis at temperatures around 500°C under inert-atmospheric conditions. The end products are char, organic condensates, and noncondensable components. The condensates and the solid char are mixed to form an intermediate fuel called bioSyncrude [1][2][3].
The bioSyncrude is used as feedstock for the pressurized entrained-flow gasifier in the next step of the process chain to gain a tar-free and low-methane syngas [4][5][6][7]. At high temperatures of > 1200°C and pressures of 40-80 bar, the ash of the slurry melts and forms a slag layer, which flows down the inner reactor wall. On the one hand, this slag protects the refractory material from corrosion (Fig. 1). On the other hand, a slag with high viscosity will block the gasifier outlet. Therefore, the mobility of the slag has to be controlled to prevent stalling of the protective layer at too low viscosity and avoid blockages of the gasifier at too high viscosity [6]. For higher cold gas efficiency and minimizing the heat loss, low operation temperatures are preferred. Hereby, viscosity is one limiting factor because a decrease in operation temperature leads to higher viscosities that could result in the blockage of the outlet and even an unscheduled shutdown of the gasifier.
The slag viscosity is dependent on temperature, pressure, and chemical composition [8][9][10]. Additionally, the liquidus temperature has to be considered as a threshold for the start of crystallization, although supercooled melts may remain. Due to crystallization, a partial melt is present consisting of a liquid part and solid crystals. An increasing amount of crystals will influence the viscosity chemically and mechanically [11,12]. Although some viscosity models assume a crystallized part of up to 40 % without significant impact [11], the viscosity behavior induced by crystallization is complex and this needs to be avoided in general during gasification. The impact of slags on the gasification process is depending on the chemical composition of the melt, the form of the crystals, and the environment of melt formation [13,14].
The main chemical component in the investigated slags is the network former SiO 2 . According to the common network theory, slags with high SiO 2 contents tend to form supercooled melts characterized by the suppression of crystallization below the liquidus temperature. Therefore, SiO 2 -based melts are preferred to guarantee the flow of the slag out of the gasifier.
Unfortunately, these supercooled melts are usually characterized by high viscosity [15]. To achieve the desired low viscosity in the bioliq gasifier, the addition of flux material to the SiO 2rich feedstock is an established technique. As fluxes alkaline metal oxides, e.g., Na 2 O, can be applied, which reduce the viscosity as network modifiers according to the network theory [16].
For the smooth gasifier operation, a viscosity range of 5-25 Pa s inside the gasifier is suggested [17] and has to be achieved by adjusting the chemical composition with fluxing or the temperature. It should be mentioned that the suggested viscosity range is not constant and depends on the individual design of the gasifier. The real-time viscosity values inside the gasifier are difficult to determine because of lacking correspondence for the chemical compositions of feedstock ash, the slag at the reactor wall, and the slag from the outlet of the reactor. Therefore, a range of viscosity instead of a specific value is in general defined as a benchmark for ideal working conditions of the gasifier, in addition to the consideration of the slag as a protective layer of the refractory material from corrosion. Since direct viscosity measurements of the slag at the reactor wall are not feasible, viscosity modeling is proposed as a promising approach to achieve reliable results.
The existing viscosity models for oxide melts can be divided into structure-based and nonstructure-based models. Nonstructure-based models do not consider various structures resulting from interactions between the different components in the slag [18][19][20]. Therefore, their application is limited for specific systems and temperature ranges. Structure-based models are more complex and take interactions between the slag components into account [21]. Their application is still limited to certain systems [22][23][24]. To describe the complex influence of slag components on the viscosity of biomass slags, the structure-based model developed by Wu et al. [24][25][26][27][28] was applied in this study.
Materials
This study is based on slag samples collected at the outflow of the entrained-flow gasifier, which is integrated in the bioliq process at the KIT [1][2][3]. Model fuels were prepared to guarantee a constant and well-known chemical composition. Ethylene glycol (Gly) or a pyrolysis oil from beech wood from the barbecue coal production unit "Profagus" (Prof) was used as the model liquid phase, with properties similar to the liquid parts of the bioslurry, e.g., low or no ash content. Na-rich glass pearls from glass-blasting (Na-G) were used as the solid phase and represent the behavior of the alkali-rich ash content in bioslurries. The chemical composition of the model feedstocks is given in Tab. 1. For decreasing the viscosity of the slag inside the gasifier, several Na compounds, i.e., Na-oxalate (Na-Ox) and NaOH, were added to the feedstock (Tab. 1). The gasifier operates at temperatures > 1200°C and pressures of 40 bar using oxygen and steam as gasification agents. The slag of the model fuel flows down the reactor wall and exits the reactor with the synthesis gas at the bottom. During the quenching process, the molten slag solidifies and forms solid aggregates, which were used as slag samples for viscosity determination in this study (Fig. 1).
For characterization of the slag samples, the chemical composition of the investigated slags is given in Tab. 2. Due to high-temperature conditions, ash components dominate the composition of the slag samples. According to their chemical composition, two groups were defined. SiO 2 -dominated samples are represented by V82 and pure Na-G. The two samples of this group do not differ significantly in chemical composition. Small changes of chemical composition between the samples V82 and pure Na-glass can be caused by regular chemical deviation of samples inside the gasifier.
The second group is the one of Na 2 O-enriched samples. Here, Na 2 O was added in form of Na-Ox in V79 and as NaOH in V86 and V87 to reduce the slag viscosity. The Na 2 O-based fluxes were combined with Na-G to guarantee the formation of a molten slag layer. The addition of NaOH in V86 results in higher Na 2 O contents than in V79. The reduced amount of NaOH in V87 leads to lower Na 2 O contents.
High-Temperature Viscosimetry
The experimental determination of the slag viscosity was conducted by high-temperature viscosimetry. Mo was chosen as material for the crucibles and spindles because of its inert behavior to oxide melts. However, Mo oxidizes at high oxygen partial pressures. Therefore, an Ar/4 vol % H 2 atmosphere was employed to guarantee a sufficient reducing environment
during the viscosity measurement. The bioliq slags were ground to a size of < 1 mm and melted in Mo crucibles under Ar-H 2 . When the necessary amount of molten slag was reached in the crucible, the samples were inserted into the vertical tube furnace of the high-temperature viscosimeter. A rotational viscosimeter RC1 (Rheotec, Germany) was used to determine the viscosity in the temperature range 900-1600°C. The temperature was measured with a thermocouple directly below the sample at the bottom of the crucible, and a deviation of approximately 25°C was observed between the thermocouple and the temperature of the sample inside the crucible, which had already been considered and calibrated before the viscosity measurement. Seebold et al. [12,15] described the in-house developed instrumental setup (Fig. 2) in detail.
An automatic measuring program was developed to perform viscosity measurements according to the conditions in Tab. 3. In the temperature range of 900-1600°C, temperature intervals of 25°C were defined, and 22 isothermal viscosity measurements were conducted per temperature. During viscosity measurements, the rotational speed was altered from 1 to 400 s -1 and the torque varied accordingly in the range of 0.1-40 mN m. Inaccuracies in the measurement, e.g., due to potential eccentric rotation and even formation of Taylor vortex, were calibrated by a specific device in the calculation of experimental viscosity from the torque measured, which has been described in detail elsewhere [12].
Because a decentered rotation of the long spindle could not be completely prevented, the flow behavior in the wide gap was used to calculate the viscosity from the torque, the rotational speed, and the radii of the spindle and crucible. Nevertheless, a calibration of the viscosimeter using samples with well-known viscosities is mandatory. The obtained values show a deviation of approximately 10 % [12].
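For orientation, the sketch below shows how an apparent Newtonian viscosity can be obtained from torque, rotational speed, and the spindle and crucible radii using the classical wide-gap (Margules) relation for concentric cylinders. It is not the calibrated in-house procedure of [12], and the numerical values are hypothetical placeholders.

```python
# Margules relation for a concentric-cylinder (Searle) viscometer, Newtonian fluid:
# eta = M * (1/R_i^2 - 1/R_o^2) / (4 * pi * h * omega)
import math

def margules_viscosity(torque_nm, rpm, r_spindle_m, r_crucible_m, immersed_height_m):
    """Apparent Newtonian viscosity in Pa·s from torque (N·m) and speed (rpm)."""
    omega = 2.0 * math.pi * rpm / 60.0                          # rad/s
    geometry = 1.0 / r_spindle_m**2 - 1.0 / r_crucible_m**2     # 1/m^2
    return torque_nm * geometry / (4.0 * math.pi * immersed_height_m * omega)

# hypothetical example: 0.3 mN·m at 30 rpm, 4 mm spindle, 15 mm crucible, 20 mm immersion
print(margules_viscosity(0.0003, 30.0, 0.004, 0.015, 0.020))    # ~22 Pa·s
```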
Viscosity Calculation
The viscosity of the slags investigated was calculated using the structure-based model developed by Wu et al. [25][26][27][28], as indicated in Eq. (1). The model not only considers the chemical composition and temperature but also oxygen partial pressure [26,27]. The used chemical composition is listed in Tab. 2, and a constant oxygen partial pressure of 1 Pa is applied, which approximately corresponds to the atmosphere used in the present study.
Predicting the slag structure is necessary before the viscosity calculation is implemented. Since the viscosity and Gibbs energy have a common structural basis [29], the basic structural units in molten slags are described by means of a non-ideal associate solution, which is taken to describe the Gibbs energy of the liquid phase [30]. The distribution of the aforementioned basic structural units at equilibrium was calculated by the software package FactSage using the GTox database [31,32]. When the formation of potential solid phases was suppressed, the viscosity of supercooled melts was calculated accordingly. As shown in Fig. 3, the resulting viscosity-temperature curves were plotted in combination with the experimental ones. Additionally, the liquidus temperatures ($T_L$) were marked in Fig. 3, which were calculated by FactSage using the GTox database [31,32] and the SGPS database [33]. The total viscosity is decomposed as $\eta = \eta_{ideal} + \eta_{excess}$, with the ideal contribution given by the mole-fraction-weighted sum of the species contributions, $\eta_{ideal} = \sum_i X_i \eta_i$. Here, $(Si\text{-}Al)_m$ denotes the silicon-aluminum-based ternary associate species; $\eta_{ideal}$ and $\eta_{excess}$ are the ideal viscosity and the excess viscosity, respectively; $X_i$ is the mole fraction of the associate species $i$; $\eta_i$ is the viscosity contribution from the associate species $i$; $\eta_{self\text{-}pol.}$ is the excess viscosity due to the critical silicate-related self-polymerizations; $\eta_{inter\text{-}pol.}$ is the excess viscosity due to the critical inter-polymerizations; $n_j$ and $n_m$ are the integer coefficients that relate to a particular degree of polymerization; and composition-dependent weighting factors describe the contribution of relatively large clusters derived from the associate species to the excess viscosity. It is worth mentioning that the remaining terms relevant to the excess viscosity are not given here but are described in detail elsewhere [26,27].
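As a purely illustrative sketch, the ideal-mixing part of such a model could be evaluated as below; the species names, mole fractions, and per-species viscosity contributions are placeholders, and the excess (polymerization) terms of the full model of Wu et al. are omitted.

```python
# Ideal-mixing contribution only: eta_ideal = sum_i X_i * eta_i (placeholder values).
def ideal_viscosity(mole_fractions: dict, species_viscosity: dict) -> float:
    total = sum(mole_fractions.values())
    return sum((x / total) * species_viscosity[name]
               for name, x in mole_fractions.items())

# hypothetical equilibrium distribution of associate species at one temperature
x = {"SiO2": 0.62, "Na2SiO3": 0.20, "CaSiO3": 0.12, "Al2O3": 0.06}
eta = {"SiO2": 60.0, "Na2SiO3": 0.5, "CaSiO3": 1.5, "Al2O3": 5.0}   # Pa·s, placeholders
print(ideal_viscosity(x, eta), "Pa·s (ideal part only)")
```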
Results and Discussion
In Fig. 3, the viscosity values calculated by the model (calc.) are compared with experimentally determined viscosities (exp.). It is seen that the viscosity of supercooled melts (sc.) is also given. Attention should be paid to the experimental viscosity data below the liquidus temperature, where the formation of potential crystals influences the flow behavior of the melt.
The SiO2-dominated group is characterized by a gradual decrease of the logarithmic viscosity with increasing temperature. In the range of 1300-1450 °C, the desired viscosity of 5-25 Pa s [17] is achieved for the slag sample of Na-rich glass. This implies that a temperature variation or fluctuation within 150 °C is tolerable during the operation of entrained-flow gasifiers for the investigated feedstocks.
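The width of such an operating window can be read off a viscosity-temperature curve directly. The short Python sketch below interpolates log10(viscosity) over temperature and extracts the interval in which the 5-25 Pa s criterion is met; the tabulated values are invented placeholders, not the measured data of this study.

```python
import numpy as np

# Hypothetical viscosity-temperature data for a SiO2-dominated slag
# (placeholder values, not the measured data of this study).
T_C = np.array([1250, 1300, 1350, 1400, 1450, 1500])     # temperature in deg C
eta = np.array([60.0, 25.0, 14.0,  8.0,  5.0,  3.2])     # viscosity in Pa s

# Viscosity varies roughly exponentially with temperature, so the logarithm
# is interpolated on a fine temperature grid.
T_fine = np.linspace(T_C.min(), T_C.max(), 1001)
eta_fine = 10.0 ** np.interp(T_fine, T_C, np.log10(eta))

# Temperature window in which the desired viscosity of 5-25 Pa s is met.
mask = (eta_fine >= 5.0) & (eta_fine <= 25.0)
print(f"allowed window: {T_fine[mask].min():.0f}-{T_fine[mask].max():.0f} deg C")
```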
The similar chemical composition of the samples from the SiO2-dominated group (V82, Na-G) results in nearly identical viscosities (Fig. 3). It is noticeable that the viscosity of Na-G, with its higher SiO2 content, is greater than that of V82; the higher SiO2 content also causes a higher liquidus point. The calculated and experimental viscosity values agree very well over the temperature range investigated. This indicates that the model performs well for the SiO2-dominated slags and that an experimental determination is not necessary for every sample of this group.
For the calculated values, the addition of 4.5 % Na-oxalate in V79 results in a significant reduction of the viscosity values. This effect is enhanced in V86 by further increasing the amount of Na 2 O. The lower Na 2 O content in V87 results in higher viscosities than in V79 and V86. The experimental and calculated viscosities are not always in good agreement over the entire temperature range investigated. For example, the experimental viscosity of V87 remains higher up to 975°C. When exceeding this temperature, the viscosity decreases significantly in the experiment and causes the viscosity even to be lower than the ones of V79 in contrast to the calculated values.
The low viscosities of this Na 2 O-enriched group allow lower operation temperatures, i.e., around 1000°C, to achieve the desired slag viscosity. However, only a small temperature variation of approx. 20°C is allowed to achieve the desired viscosity of 5-25 Pa s because of a sharp increase of viscosity by decreasing temperature. Moreover, the viscosities were determined in supercooled melts and a noticeable crystallization was observed at temperatures around 1000°C significantly below the liquidus temperature T L .
The increasing crystallized phases at lower temperatures may cause the change from Newtonian to Non-Newtonian flow behavior below the value of critical viscosity. The averaged viscosity value below the critical viscosity is much more inaccurate and causes a deviation of the experimental data from the calculated viscosity values. It is worth noting that the experimental values were collected only to 900°C, because at lower temperatures the maximum torque allowed by the viscosimeter was reached. Additionally, the corrosion of the thermoelements can intensify in this temperature range.
Influence of Na 2 O on Melting Behavior and Viscosity
The addition of Na2O causes a strong increase of viscosity with decreasing temperature. This effect is reflected in the experimental results but not in the calculated ones. The calculated phase equilibrium for V79 explains this deviation (Fig. 4). Calculated phase equilibria for the other investigated slags can be found in the Supporting Information (S1-S3).
The crystals formed below the liquidus temperature cause an increase of viscosity because of the mechanical resistance of the formed crystals, which depends on their shape [11,34,35] and results in an inconsistent flow behavior [36,37]. Furthermore, the crystallization leads to a significant decrease of the Na2O content in the melt. Thus, the viscosity of the remaining melt is also increased because less Na2O is available as a network modifier to reduce the viscosity. The reasons for the viscosity increase due to crystallization are therefore two-fold, i.e., physical and chemical effects, as reported by Seebold et al. [12]. The calculated viscosities for V79 are in good agreement with the experimental values at temperatures above 1050 °C (Fig. 3). The agreement between experimental and calculated viscosities at 1025-1075 °C indicates the presence of supercooled melts, which occur up to 100 °C below the liquidus temperature T_L. In other words, a supercooling degree of approx. 75 °C is required to induce crystallization. Since the viscosity model assumes a supercooled melt without crystallization, the calculated values are lower than the experimental values at low temperatures, when crystallization occurs. Additionally, it is noticed that at high temperatures the experimental viscosities of V86 and V87 are lower than the calculated ones, possibly due to a higher Na2O content in the melts than expected.
Relation Between the Determined Slag Viscosities and the Flow Behavior of the Slags in the Gasifier
The operating temperature of the gasifier is estimated by an equilibrium temperature calculated with the flow sheet simulation software ASPEN. The temperature of the moving slag at the reactor wall is derived from the operating temperature and supported by the results of CFD modeling [38][39][40]. The calculated slag temperatures in Tab. 3 are compared with that corresponding to the desired range of viscosity indicated in Fig. 3. For the SiO 2 -dominated sample V82 the temperature of the moving slag is estimated at 1350°C during the gasification. This corresponds well with the temperature range of 1300-1450°C, which is preferred concerning the viscosity conditions and results in the reduction of slag-related problems during the gasification of these samples. The applied temperatures during gasification for the Na 2 Oenriched slags (V79, V86, and V87) are in the range of 1250-1350°C and exceed the temperatures for the desired viscosity range of these samples (950-1050°C) significantly (Tab. 4). As displayed in Fig. 3, the viscosities are significantly lower than the desired values at the applied temperature conditions. Such low viscosities may cause the stalling of the protective slag layer (Fig. 1). The viscosity can be increased as the temperature declines. However, the relatively high temperatures are necessary to avoid the blockage of the gasifier outlet by crystallization. As an alternative, the adjustment of slag composition is therefore considered, e.g., by adding more silica or less Na.
It should be mentioned that there is a possible difference between the chemical composition of the quenched slag in the gasifier outflow and that at the reactor wall inside the gasifier. This may result in a deviation of the slag mobility inside the gasifier from that determined experimentally using the quenched slags or from the corresponding model prediction. The specific influence of the position at the reactor wall on the content of the main slag-forming components Na2O, SiO2, K2O, and CaO is demonstrated for sample V81 (Fig. 5), taken as an example with a composition similar to sample V82. Fig. 5 illustrates a slight increase of the SiO2 content at position MP5, which is associated with a decrease of the CaO, Na2O, and K2O contents at the same position. These changes in chemical composition in the vertical direction are regular fluctuations and can appear at any position of the gasifier. The influence of different radial positions (MP1-4) on the slag composition was also investigated, and the results indicate only a limited influence on the slag composition.
Based on the chemical composition obtained, the viscosities were calculated with the model (Fig. 6). The difference in chemical composition results in different magnitudes of the viscosity at the same temperature. The slag at MP5 has the highest viscosity because of its highest SiO2 content. The viscosities at the other measuring points are close to each other, in accordance with the small differences in their chemical composition.
Application of Viscosity Prediction in the Adjustment of Slag Composition
This study aims to achieve the desired range of viscosity inside the gasifier via the adjustment of temperature or composition. The viscosity modeling enables this adjustment in advance, which can then be validated with a limited amount of experimental work. Fig. 7, as an example, is plotted with MATLAB: the iso-viscosity data matrix is generated with a self-built solver by varying two variables, the chemical composition and the temperature of the slags. This enables the determination of the temperature necessary to reach a given viscosity for the investigated samples. In the example, the maximum viscosity value of 25 Pa s for entrained-flow gasifiers [17] was chosen as a threshold defining the minimal temperature at which the gasifier can be operated economically without viscosity-related problems. The three main components (SiO2, Na2O, and CaO) of the investigated slags (V79, V82, V86, and V87) are chosen as composition variables, and other minor components are not considered in the ternary diagram. According to Fig. 7, it is easy to determine the required temperature for different slag candidates in order to achieve the desired viscosity. Conversely, the chemical composition of the feedstock can also be adjusted for a fixed temperature and viscosity. This makes it possible to design a target-oriented fluxing for the preferred viscosity range, provided that the ash content of the feedstock and slag does not differ significantly.
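A minimal Python sketch of this workflow is given below. It replaces the structure-based model of Wu et al. with a deliberately simple Arrhenius-type placeholder (an assumption made only for illustration), scans a few normalized SiO2-Na2O-CaO compositions, and solves for the lowest temperature at which the 25 Pa s threshold is reached; the resulting numbers are therefore qualitative only.

```python
import numpy as np
from scipy.optimize import brentq

def log10_viscosity(x_sio2, x_na2o, x_cao, T_celsius):
    """Placeholder log10(viscosity / Pa s) of a slag melt.

    This is NOT the structure-based model used in the paper; it is an
    Arrhenius-type toy model in which the network former SiO2 raises the
    viscosity while the modifiers Na2O and CaO lower it."""
    T_kelvin = T_celsius + 273.15
    a = 10000.0 * x_sio2 - 500.0 * x_na2o - 300.0 * x_cao + 1000.0  # pseudo activation term, K
    return a / T_kelvin - 4.5

def min_temperature_for(eta_max, x_sio2, x_na2o, x_cao):
    """Lowest temperature (deg C) at which the melt viscosity drops to eta_max."""
    f = lambda T: log10_viscosity(x_sio2, x_na2o, x_cao, T) - np.log10(eta_max)
    return brentq(f, 700.0, 2000.0)

# Scan normalized SiO2-Na2O-CaO compositions and tabulate the minimal
# operating temperature for the 25 Pa s threshold discussed in the text.
for x_na2o in (0.05, 0.15, 0.25):
    x_cao = 0.10
    x_sio2 = 1.0 - x_na2o - x_cao
    T_min = min_temperature_for(25.0, x_sio2, x_na2o, x_cao)
    print(f"SiO2={x_sio2:.2f}  Na2O={x_na2o:.2f}  CaO={x_cao:.2f}  ->  T_min ~ {T_min:.0f} deg C")
```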
For the SiO 2 -dominated group (Na-rich glass and V82) higher temperatures are required to achieve the desired viscosity (Fig. 7). This agrees well with both the calculated and experimental results in this investigation due to the relatively high content of the network former SiO 2 . In contrast, the samples of the Na 2 O-enriched group (V79, V86, and V87) can achieve the desired viscosity at lower temperatures due to the higher content of the network modifier Na 2 O. The addition of CaO also modifies the slag network structure and leads to a slight decrease in viscosity. However, the formation of potential Ca-based phases with high melting point will cause a higher risk of crystallization, which is not considered in this figure.
The impact of the two main parameters, i.e., slag composition and temperature, on the viscosity during the entrainedflow gasification is clearly visualized in Fig. 7. To optimize the efficiency of entrained-flow gasifiers, such visualization can be employed to adjust the slag composition in order to reach low viscosities at low temperatures. It should be pointed out that more than three components can be taken into account in the visualization in order to provide more accurate data for real slags.
Conclusion
The mobility of slags is an important parameter for operating an entrained-flow gasifier. With consideration of minimizing the corrosion on the wall material, achieving the optimal viscosity range guarantees the efficient operation of the reactor without blocking the outflow. To adjust this viscosity range, the structure-based viscosity model is adopted, using the chemical composition of slag collected at the outflow of the gasifier. For the evaluation of the model the viscosity of these slags was also determined experimentally.
The composition of the slag from the outflow does not differ significantly from that at the wall inside the reactor. Nevertheless, even small deviations in chemical composition can sometimes result in significantly different viscosities.
The investigated slags are characterized by high SiO2 or Na2O contents. SiO2-dominated slags show high viscosities, which can be decreased significantly by the addition of Na2O. The economic efficiency is improved by lowering the operating temperature, although adjusting the slag composition to obtain the optimal viscosity range then becomes challenging. Attention should be paid to the risk of a sharp viscosity increase caused by crystallization at lower temperatures. Therefore, the gasification temperature for Na2O-enriched systems should be chosen higher than suggested by the viscosity calculations alone.
The results of this investigation can be applied to adjust the temperature or the chemical composition in order to achieve a desired viscosity range. The viscosity-temperature-composition relations are visualized in a ternary diagram, for which the three major slag components of this study, namely SiO2, CaO, and Na2O, were chosen as an example. Such a visualization simplifies the adjustment, although the results obtained are only the basis for further investigations. This provides a guideline to optimize the flow behavior of several biomass slags in entrained-flow gasification.
Supporting Information
Supporting Information for this article can be found under DOI: https://doi.org/10.1002/ceat.202000531. plant, the Wirtschaftsministerium of Baden Württemberg for financial support, and our industry cooperation partner and plant manufacturer Air Liquide for financial support and consulting. Open access funding enabled and organized by Projekt DEAL. | 2021-06-22T17:54:40.225Z | 2021-05-06T00:00:00.000 | {
"year": 2021,
"sha1": "4f215493519bc3b912b460d034221161599e161a",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ceat.202000531",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1e91d2042158dc0ff7b65fe2bc055a5c19bf24d6",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
220962208 | pes2o/s2orc | v3-fos-license | Learning from a Complementary-label Source Domain: Theory and Algorithms
In unsupervised domain adaptation (UDA), a classifier for the target domain is trained with massive true-label data from the source domain and unlabeled data from the target domain. However, collecting fully-true-label data in the source domain is costly and sometimes impossible. Compared to a true label, a complementary label specifies a class that a pattern does not belong to, hence collecting complementary labels is less laborious than collecting true labels. Thus, in this paper, we propose a novel setting in which the source domain is composed of complementary-label data, and a theoretical bound for it is proved for the first time. We consider two cases of this setting: in one, the source domain only contains complementary-label data (completely complementary unsupervised domain adaptation, CC-UDA); in the other, the source domain has plenty of complementary-label data and a small amount of true-label data (partly complementary unsupervised domain adaptation, PC-UDA). To this end, a complementary label adversarial network (CLARINET) is proposed to solve CC-UDA and PC-UDA problems. CLARINET maintains two deep networks simultaneously, where one focuses on classifying complementary-label source data and the other takes care of source-to-target distributional adaptation. Experiments show that CLARINET significantly outperforms a series of competent baselines on handwritten-digit-recognition and object-recognition tasks.
(Figure 1: UDA methods transfer knowledge from Ds (true-label source data) to Dt (unlabeled target data). However, acquiring fully-true-label source data is costly and unaffordable (black dash line, xs → Ds, where xs means unlabeled source data). This brings complementary-label based UDA, namely transferring knowledge from complementary-label source data to Dt. It is much less costly to collect complementary-label source data (black line, required by our setting) than to collect true-label source data (black dash line, required by UDA). To handle complementary-label based UDA, a weak solution is a two-step approach (green dash line), which sequentially combines complementary-label learning methods (label correction) and existing UDA methods. This paper proposes a one-step approach called complementary label adversarial network (CLARINET, green line, which adapts from the complementary-label source data to Dt directly).)
UDA methods train a target-domain classifier with massive true-label data from the source domain (true-label source data) and unlabeled data from the target domain (unlabeled target data). Existing works in the literature can be roughly categorised into the following three groups: integral-probability-metrics based UDA [16], [17]; adversarial-training based UDA [18], [19]; and causality-based UDA [20], [21]. Since adversarial-training based UDA methods extract better domain-invariant representations via deep networks, they usually have good target-domain accuracy [22]. However, the success of UDA still relies heavily on the scale of true-label source data (black dash line in Figure 1). Namely, the target-domain accuracy of a UDA method (e.g., CDAN) decays when the scale of true-label source data decreases, and we verify this phenomenon in the experiment section. Hence, massive true-label source data are inevitably required by UDA methods, which is very expensive and even prohibitive.
While determining the correct label from many candidates is laborious, choosing one of the incorrect labels (i.e., complementary labels), e.g. labeling a cat as "Not Monkey" (as shown in Figure 2), would be much easier and quicker, thus less costly, especially when we have many candidates [23]. For example, suppose that we need to annotate the labels of a bunch of animal images from 1, 000 candidates. One strategy is to ask crowd-workers to choose the true labels from 1, 000 candidates, while the other is to judge the correctness of a label randomly given by the system from the candidates. Apparently, the cost of the second strategy is much lower [24], [25].
This brings us a novel setting, complementary-label based UDA, which aims to transfer knowledge from complementarylabel source data to unlabeled target data ( Figure 1). Compared to ordinary UDA, we can greatly save the labeling cost by annotating complementary labels in the source domain rather than annotating true labels [23], [26]. Please note, existing UDA methods cannot handle complementary-label based UDA, as they require fully-true-label source data [11], [12] or at least 20% true-label source data [27], [28].
In our previous conference paper [29], we consider using completely complementary-label data in the source domain while actually we could also get a small amount of true labels when collecting complementary labels [23]. Therefore, the previous work was flawed as it did not make good use of the existing true-label data. Furthermore, experiments were conducted only on some digit datasets and a thorough learning bound was not provided. Aiming at these defects, in this work, we consider a generalized and completed version of the complementary-label based UDA problem setting.
A straightforward but weak solution to complementary-label based UDA is a two-step approach, which sequentially combines complementary-label learning methods and existing UDA methods (green dash line in Figure 1); we implement this two-step approach and take it as a baseline. Complementary-label learning methods are used to assign pseudo labels to the complementary-label source data. Then, we can train a target-domain classifier with the pseudo-label source data and unlabeled target data using existing UDA methods. Nevertheless, pseudo-label source data contain noise, which may cause poor domain-adaptation performance of this two-step approach [27].
Therefore, we propose a powerful one-step solution, the complementary label adversarial network (CLARINET). It maintains two deep networks trained simultaneously in an adversarial way, where one accurately classifies complementary-label source data and the other discriminates source and target domains. Since Long et al. [19] and Song et al. [30] have shown that the multimodal structures of distributions can only be captured sufficiently by the cross-covariance dependency between the features and classes (i.e., true labels), we set the input of the domain discriminator D to the outer product of the feature representation (e.g., g_s in Figure 3) and the mapped classifier prediction (e.g., T(f_s) in Figure 3).
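The following PyTorch sketch illustrates this conditioning step. It is only a minimal sketch, not the authors' released implementation: the tensor shapes, the discriminator architecture, and the use of a plain softmax output in place of the mapped prediction T(f_s) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def conditioned_input(features, mapped_preds):
    """Outer product of per-sample feature vectors g and mapped (sharpened)
    classifier predictions T(f), flattened into the vector fed to the
    domain discriminator D."""
    outer = torch.bmm(mapped_preds.unsqueeze(2), features.unsqueeze(1))  # (batch, K, feat_dim)
    return outer.flatten(start_dim=1)                                    # (batch, K * feat_dim)

# Minimal usage sketch with made-up dimensions:
batch, feat_dim, num_classes = 8, 256, 10
g_s = torch.randn(batch, feat_dim)                              # features of a source mini-batch
t_f_s = torch.softmax(torch.randn(batch, num_classes), dim=1)   # stands in for T(f_s)
D = nn.Sequential(nn.Linear(num_classes * feat_dim, 1024),
                  nn.ReLU(), nn.Linear(1024, 1))
domain_logits = D(conditioned_input(g_s, t_f_s))
print(domain_logits.shape)   # torch.Size([8, 1])
```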
Due to the nature of complementary-label classification, the predicted probabilities of the classes (i.e., the elements of f_s, Figure 3) are relatively close to each other. According to [30], such predicted probabilities cannot provide sufficient information to capture the multimodal structure of distributions. To fix this, we add a sharpening function T to make the predicted probabilities more scattered (i.e., T(f_s), Figure 3) than the original ones (i.e., f_s, Figure 3). By doing so, the mapped classifier predictions can better indicate their choice. In this way, we can take full advantage of the classifier predictions and effectively align the distributions of the two domains. Our ablation study (see Section VI-F) confirms the effectiveness of this design. We conduct experiments on 7 complementary-label based UDA tasks and compare CLARINET with a series of competent baselines. Empirical results demonstrate that CLARINET effectively transfers knowledge from complementary-label source data to unlabeled target data and is superior to all baselines. We also show that the target-domain accuracy of CLARINET increases if a small amount of true-label source data are available. To make up for the defects of the previous conference paper [29], the main contributions of this paper are summarized as follows.
1) We present a generalized version of complementary-label based UDA. This paper considers two cases of complementary-label based UDA: in one, the source domain only contains complementary-label data (completely complementary unsupervised domain adaptation, CC-UDA); in the other, the source domain also contains a small amount of true-label data (partly complementary unsupervised domain adaptation, PC-UDA).
2) We provide a thorough theoretical analysis of the expected target-domain risk of our approach, presenting a learning bound for complementary-label based UDA.
3) Apart from the handwritten digit datasets, we also conduct experiments on more complex image datasets, proving the applicability of complementary-label based UDA.
This paper is organized as follows. Section II reviews the works related to domain adaptation, complementary-label learning, and low-cost unsupervised domain adaptation. Section III introduces the problem setting and proves a learning bound for this problem. Section IV introduces a straightforward but weak two-step approach to complementary-label based UDA. The proposed powerful one-step solution is presented in Section V. Experimental results and analyses are provided in Section VI. Finally, Section VII concludes this paper.
II. RELATED WORKS
In this section, we discuss previous works that are most related to our work, and highlight our differences from them. We mainly review some related works about domain adaptation, complementary-label learning and low-cost unsupervised domain adaptation.
A. Domain Adaptation
Domain adaptation generalizes a learner across different domains by matching the distributions of the source and target domains. It has wide application in computer vision [31], [32], [33] and natural language processing [34], [35], etc. Previous domain adaptation methods in the shallow regime either try to bridge the source and target domains by learning invariant feature representations or estimate instance importances using labeled source data and unlabeled target data [36], [37]. Later, it was confirmed that deep learning methods formed by the composition of multiple non-linear transformations yield abstract and ultimately useful representations [38]. Besides, the learned deep representations are to some extent general and transferable to similar tasks [39]. Hence, deep neural networks have been explored for domain adaptation. Concurrently, multiple methods of matching the feature distributions of the source and the target domains have been proposed for unsupervised domain adaptation. The first category learns domain-invariant features by minimizing a distance between distributions, such as the Maximum Mean Discrepancy (MMD) [40]. In DAN [16], Long et al. match the marginal distributions of the two domains with the multi-kernel MMD (MK-MMD) metric. An alternative way of learning domain-invariant features in UDA is inspired by Generative Adversarial Networks (GANs). By confusing a domain classifier (or discriminator), the deep networks can explore non-discriminative representations. The adversarial-training based UDA methods play a two-player minimax game. DANN [41] employs a gradient reversal layer to realize the minimax optimization. In [19], Long et al. propose the conditional domain adversarial network (CDAN), which conditions the models on discriminative information conveyed in the classifier predictions. Some works study the UDA problem from a causal point of view, where the label Y is considered the cause of the feature representation X. In [20], Gong et al. aim to extract conditional transferable components whose conditional distribution is invariant after proper location-scale (LS) transformations.
(Figure 2: examples of true labels, "Monkey", "Cat", "Dog", and of the corresponding complementary labels, "Not Dog", "Not Monkey", "Not Cat".)
However, the aforementioned methods are all based on true-label source-domain data, which require high labeling costs. In our work, we propose a new setting which uses complementary-label source data instead of true-label source data and thus significantly saves labeling cost.
B. Complementary-label Learning
Complementary-label learning (shown in Figure 2) is one type of weak supervision learning approaches, which is first proposed by Ishida et. al. [23]. They gave theoretical analysis with a statistical consistency guarantee to show classification risk can be recovered only from complementary-label data. Nevertheless, they require the complementary label must be chosen in an unbiased way and allow only one-versus-all and pairwise comparison multi-class loss functions with certain non-convex binary losses. Namely softmax cross-entropy loss, which is the most popular loss in deep learning, could not be used to solve the problem.
Later, Yu et al. [26] extended the problem setting to the case where the complementary label can be chosen in a biased way, with the assumption that a small set of easily distinguishable true-label data is available in practice. In their view, because humans are biased toward their own experience, it is unrealistic to guarantee that the complementary label is chosen in an unbiased way. For example, if an annotator is more familiar with one class than with another, she is more likely to use the more familiar one as a complementary label. They solve the problem by employing the forward loss-correction technique to adjust the learning objective, although this limits the loss function to the softmax cross-entropy loss. They theoretically ensure that the classifier learned with complementary labels converges to the optimal one learned with true labels.
Recently, Ishida et. al. propose a new unbiased risk estimator [24] under the unbiased label chosen assumption. They make any loss functions available for use and have no implicit assumptions on the classifier, namely the estimator could be used for arbitrary models and losses, including softmax crossentropy loss. They further investigate correction schemes to make complementary label learning practical and demonstrate the performance. Thus in our paper, we take advantage of this estimator for the source domain classification and generalize it to the unsupervised domain adaptation field.
C. Low-cost Unsupervised Domain Adaptation
Unsupervised domain adaptation with low-cost source data has recently attracted attention. For instance, in [27], Liu et al. consider the situation in which the labeled data in the source domain come from amateur annotators or the Internet [42], [43], since in the wild, acquiring a large amount of perfectly clean labeled data in the source domain is costly and sometimes impossible. They name the problem wildly unsupervised domain adaptation (abbreviated as WUDA), which aims to transfer knowledge from noisy labeled source data to unlabeled target data. They show that WUDA ruins all UDA methods if no care is taken of label noise in the source domain, and propose the Butterfly framework, a powerful and efficient solution to WUDA.
Long et. al. consider the weakly-supervised domain adaptation, where the source domain with noises in labels, features, or both could be tolerated [28]. Label noise refers to incorrect labels of images due to errors in manual annotation, and feature noise refers to low-quality pixels of images, which may come from blur, overlap, occlusion, or corruption etc. They present a Transferable Curriculum Learning (TCL) approach, extending from curriculum learning and adversarial learning. The TCL model aims to be robust to both sample noises and distribution shift by employing a curriculum which could tell whether a sample is easy and transferable.
In [29], we consider another way to save the labeling cost by using completely complementary-label data in the source domain and prove that distributional adaptation can be effectively realized from complementary-label source data to unlabeled target data. In this paper, we consider two cases of using complementary-label data in the source domain and prove that we could use a small amount of true-label data to improve the transfer result. Besides, as shown in [23], we can obtain true-label data and complementary-label data simultaneously so that getting a small amount of true-label data is guaranteed to be low-cost. Furthermore, we provide an analysis of the expected target-domain risk of our approach. In the following sections, we will introduce the complementarylabel based UDA and explain how to address such tasks.
III. COMPLEMENTARY-LABEL BASED UNSUPERVISED DOMAIN ADAPTATION
In this section, we propose a novel problem setting, complementary-label based UDA, and prove a learning bound for it. Then, we show how it brings benefits to domain adaptation field.
A. Problem Setting
In complementary-label based UDA, we aim to realize distributional adaptation from complementary-label source data to unlabeled target data. We first consider the situation that there are only complementary-label data in the source domain, namely completely complementary unsupervised domain adaptation (CC-UDA). Let X ⊂ R d be a feature (input) space and Y := {y 1 , ..., y c , ..., y K } be a label (output) space, where y c is the one-hot vector for label c. A domain is defined as follows.
Definition 1 (Domains for CC-UDA). Given random variables X_s, X_t ∈ X and Y_s, Ȳ_s, Y_t ∈ Y, the source and target domains are the joint distributions P(X_s, Y_s) and P(X_t, Y_t), where the joint distributions differ, P(X_s, Y_s) ≠ P(X_t, Y_t), and the complementary label never equals the true label, P(Ȳ_s = y_c | Y_s = y_c) = 0 for all y_c ∈ Y.
Then, we propose CC-UDA problem as follows.
Problem 1 (CC-UDA). Given independent and identically distributed (i.i.d.) complementary-label source data D̄_s drawn from the source domain and i.i.d. unlabeled target data D_t drawn from the target marginal distribution P(X_t), the aim of CC-UDA is to train a classifier F_t : X → Y with D̄_s and D_t such that F_t can accurately classify target data drawn from P(X_t).
It is clear that it is impossible to design a suitable learning procedure without any assumptions on P(X_s, Ȳ_s). In this paper, we use the assumption for unbiased complementary learning proposed by [23], [24]:
P(Ȳ_s = y_k | Y_s = y_c) = 1/(K − 1),   (1)
for all k, c ∈ {1, ..., K} with c ≠ k. This unbiased assumption indicates that each of the K − 1 incorrect classes is selected as the complementary label with equal probability.
Ishida et al. [23] proposed an efficient way to collect labels through crowdsourcing: we choose one of the classes randomly and ask crowdworkers whether a pattern belongs to the chosen class or not. The chosen class is treated as a true label if the answer is yes; otherwise, the chosen class is regarded as a complementary label. Such a yes/no question is much easier and quicker to answer than selecting the correct class from the list of all candidate classes. In addition, we can guarantee that the complementary-label data obtained in this way satisfy the unbiased assumption in Eq. (1).
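A minimal Python simulation of this yes/no protocol is sketched below; it only serves to check that the resulting complementary labels are uniform over the K−1 wrong classes, i.e., that they satisfy Eq. (1). The class counts and the random-proposal scheme are illustrative assumptions, not the actual crowdsourcing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def annotate(true_label, num_classes):
    """Simulate the yes/no crowdsourcing protocol: a class is proposed
    uniformly at random; if it equals the true label we obtain a true-label
    sample, otherwise a complementary-label sample."""
    proposed = rng.integers(num_classes)
    if proposed == true_label:
        return ("true", proposed)
    return ("complementary", proposed)

# Check that the complementary labels are uniform over the K-1 wrong classes.
K = 10
true_labels = rng.integers(K, size=100000)
comp_counts = np.zeros(K)
for y in true_labels:
    kind, label = annotate(y, K)
    if kind == "complementary" and y == 0:
        comp_counts[label] += 1
print(comp_counts[1:] / comp_counts[1:].sum())   # roughly 1/9 for each wrong class
```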
As we can obtain true-label data and complementary-label data simultaneously, we also consider the problem that the source domain contains a few true-label data. We name this problem as partly complementary-label unsupervised domain adaptation (PC-UDA).
Problem 2 (PC-UDA). Given i.i.d. complementary-label source data D̄_s, a small amount of i.i.d. true-label source data D_s, and i.i.d. unlabeled target data D_t, the aim is to find a target classifier F_t : X → Y such that F_t classifies target samples into the correct classes.
It is actually a more common situation to have a small number of true-label data. If we leverage both kinds of labeled data properly, we can obtain a more accurate classifier. Ishida et al. [23] have demonstrated the usefulness of combining true-label and complementary-label data in classification problems. We will further show that, in the unsupervised domain adaptation field, we can also use both true-label and complementary-label source data to realize knowledge transfer and utilize the true-label data to improve the result.
B. Learning Bound of complementary-label based UDA
A learning bound for complementary-label based UDA is presented in this subsection. We prove that the risk in the target domain can be bounded. Practitioners may safely skip this subsection.
Given a feature transformation G : X → X_G, the induced distributions related to P_{X_s} and P_{X_t} are the pushforward distributions G_# P_{X_s} and G_# P_{X_t}. Following the notations in [44], consider a multi-class classification task with a hypothesis space H_G of classifiers F : X_G → Y. Let ℓ : Y × Y → [0, +∞) be the loss function. For convenience, we also require ℓ to satisfy the following conditions in the theoretical part: 1. ℓ is symmetric and satisfies the triangle inequality; 2. ℓ(y, ỹ) = 0 iff y = ỹ; 3. ℓ(y, ỹ) ≡ 1 if y ≠ ỹ and y, ỹ are one-hot vectors. Many losses satisfy the above conditions, such as the 0-1 loss 1_{y ≠ ỹ} and the ℓ2 loss (1/2)‖y − ỹ‖_2^2. The complementary risk for F • G with respect to ℓ over P(X_s, Ȳ_s) is L̄_s(F • G) = E[ℓ(F(G(X_s)), Ȳ_s)]. The risks for the decision function F • G with respect to the loss ℓ over the implicit distributions P(X_s, Y_s) and P(X_t, Y_t) are L_s(F • G) = E[ℓ(F(G(X_s)), Y_s)] and L_t(F • G) = E[ℓ(F(G(X_t)), Y_t)]. In this paper, we propose a tighter distance named the tensor discrepancy distance, which can further match the pseudo conditional distributions.
We consider the following tensor mapping: Then we induce two importance distributions: Using H G , we reconstruct a new hypothetical set: where Then the distance between ⊗ F # P Xs and ⊗ F # P Xt is: where sgn is the sign function.
It is easy to prove that under the conditions (1)-(3) for loss and for any F ∈ H G , we have where d H G is the distribution discrepancy defined in [45], [46]. Then, we introduce our main theorem as follows.
Theorem 1. Given a loss function ℓ satisfying conditions 1-3 and a hypothesis space H_G ⊂ {F : X_G → Y}, then under the unbiased assumption, for any F ∈ H_G, the target risk L_t(F • G) can be upper-bounded in terms of the complementary source risk and the tensor discrepancy distance between the induced source and target distributions. Proof. Firstly, we relate L_s(F • G) to the complementary risk L̄_s(F • G). To this end, we investigate the connection between L_s(F • G) and L̄_s(F • G) under the unbiased assumption in Eq. (1). Given the K × K matrix Q whose diagonal elements are 0 and whose other elements are 1/(K − 1), we represent the unbiased assumption by Q. Note that Q has an inverse matrix Q^{-1} whose diagonal elements are −(K − 2) and whose other elements are 1. Thus, the ordinary source risk L_s(F • G) can be recovered exactly from complementary risks.
Next, we prove the remaining part of the theorem. It is clear that the target risk can be decomposed for any function F from H_G. According to conditions 1-3, we obtain the corresponding inequalities; similarly, according to the definition in Eq. (9), we obtain a bound involving the tensor discrepancy distance. Combining Eq. (14) and Eq. (17), we obtain the claimed bound. Hence, the theorem is proved.
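The key matrix identity in this proof can be checked numerically. The small NumPy sketch below builds Q with zero diagonal and off-diagonal entries 1/(K−1) (the value consistent with the inverse stated above) and verifies that its inverse has −(K−2) on the diagonal and 1 elsewhere; K = 5 is an arbitrary example.

```python
import numpy as np

# Numerical check: Q has zero diagonal and 1/(K-1) off-diagonal entries
# (the unbiased complementary-label assumption); its inverse should have
# -(K-2) on the diagonal and 1 elsewhere.
K = 5
Q = (np.ones((K, K)) - np.eye(K)) / (K - 1)
Q_inv_claimed = np.ones((K, K)) - (K - 1) * np.eye(K)   # diag = -(K-2), off-diag = 1
print(np.allclose(Q @ Q_inv_claimed, np.eye(K)))        # True
print(np.allclose(np.linalg.inv(Q), Q_inv_claimed))     # True
```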
Algorithm 1: Two-step Approach for CC-UDA Tasks (its last step applies an existing UDA method to train a target-domain classifier).
C. Benefits to DA Field
Collecting true-label data is always expensive in the real world. Thus, learning from less expensive data [47], [48], [49], [50] has been extensively studied in the machine learning field, including label-noise learning [51], [52], [53], pairwise/triplewise-constraint learning [54], [55], [56], positive-unlabeled learning [57], [58], [59], complementary-label learning [23], [24], [26], and so on. Among all these research directions, obtaining complementary labels is a cost-effective option. As described in the works mentioned above, compared with precisely choosing the true class out of many candidate classes, collecting complementary labels is obviously much easier and less costly. In addition, a classifier trained with complementary-label data is statistically consistent with a classifier trained with true-label data, as shown in [24].
Actually in the field of domain adaptation, the high cost of true-label data is also an important issue. At present, the success of DA still highly relies on the scale of true-label source data, which is a critical bottleneck. Under low cost limitation, it is unrealistic to obtain enough true-label source data and thus cannot achieve a good distribution adaptation result. For the same cost, we can get multiple times more complementarylabel data than the true-label data. In addition, the adaptation scenario is limited to some commonly used datasets, e.g. handwritten digit datasets, as they have sufficient true labels to support distributional adaptation. This fact makes it difficult to generalize domain adaptation to more real-world scenarios where it is needed. Thus if we can reduce the labeling cost in the source domain, for example, by using complementary-label data to replace true-label data (complementary-label based UDA), we can promote domain adaptation to more fields.
Because existing UDA methods require at least 20% true-label source data [28], they cannot handle the complementary-label based UDA problem. To address the problem, we first introduce a two-step approach, a straightforward but weak solution, and then propose a powerful one-step solution, CLARINET.
IV. TWO-STEP APPROACH
To solve the problem that existing UDA methods cannot be applied to complementary-label based UDA problems directly, a straightforward way is to apply a two-step strategy. Namely, we could sequentially combine complementary-label learning methods and existing UDA methods. Algorithm 1 presents how we realize two-step approach for CC-UDA tasks specifically. In two-step approach, we first use the complementary-label learning algorithm to train a classifier on the complementary-label source data (line 1). Then, we take advantage of the classifier to assign pseudo labels for source domain data (line 2). Finally, we train the target-domain classifier with pseudo-label source data and unlabeled target data using existing UDA methods (line 3). In this way, we can transfer knowledge from the newly formed pseudo-label source data to unlabeled target data. As for PC-UDA tasks, we could combine the pseudo-label source data gotten following the first two steps and existing true-label source data together to train the target-domain classifier.
Nevertheless, the pseudo-label source data contains noise as complementary-label learning algorithms cannot be trained to produce a completely accurate classifier. As the noise will bring poor domain-adaptation performance [27], the twostep approach is a suboptimal choice. To solve this problem, we consider implementing both complementary label learning and unsupervised domain adaptation in a network. In this way, the network will always try to classify source domain data accurately during the adaptation procedure. Besides, we consider using entropy condition to make the transfer process mainly based on the classification results with high confidence, which can largely eliminate the noise effect compared with the two-step approach. Therefore, we propose a powerful one-step solution to complementary-label based UDA, CLARINET.
V. CLARINET: POWERFUL ONE-STEP APPROACH
The proposed CLARINET (as shown in Figure 3) realizes distributional adaptation in an adversarial way and mainly consists of a feature extractor G, a label predictor F, and a domain discriminator D. By working adversarially against the domain discriminator D, the feature extractor G encourages domain-invariant features to emerge. The label predictor F is trained to discriminate different classes based on such features.
In this section, we first introduce two losses used to train CLARINET, complementary-label loss and scattered conditional adversarial loss. Then the whole training procedure of CLARINET is presented. Finally, we show how to adjust CLARINET for PC-UDA tasks if a small amount of true-label source data are available.
A. Loss Function in CLARINET
In this subsection, we introduce how to compute the two losses mentioned above in CLARINET after obtaining minibatch d s from D s and d t from D t .
1) Complementary-label Loss: It is designed to reduce the source classification error based on complementary-label data (the first part of the bound). We first divide d̄_s into K disjoint subsets according to the complementary labels in d̄_s, where d̄_{s,k} ∩ d̄_{s,k′} = ∅ if k ≠ k′ and n̄_{s,k} = |d̄_{s,k}|. Then, following Eq. (13), the complementary-label loss on d̄_{s,k} is computed, where the base loss ℓ can be any loss (we use the cross-entropy loss) and π_k is the proportion of the samples complementary-labeled with class k.
The total complementary-label loss on d̄_s, denoted L̄_s(G, F, d̄_s) in Eq. (21), combines the per-class losses over all K subsets.
As shown in Section III-B, the complementary-label loss (i.e., Eq. (21)) is an unbiased estimator of the true-labeldata risk. Namely, the minimizer of complementary-label loss agrees with the minimizer of the true-label-data risk with no constraints on the loss and model F • G [24]. Remark 1. Due to the negative part in L s (G, F, d s ), minimizing it directly will cause over-fitting [60]. To overcome this problem, we use a correctional way [24] to minimize L s (G, F, d s ) (lines 7-13 in Algorithm 2).
2) Scattered Conditional Adversarial Loss.: It is designed to reduce distribution discrepancy distance between two domains (the third part in the bound). Adversarial domain adaptation methods [18], [61] is inspired by Generative Adversarial Networks (GANs) [62]. Normally, a domain discriminator is learned to distinguish the source domain and the target domain, while the label predictor learns transferable representations that are indistinguishable by the domain discriminator. Namely, the final classification decisions are made based on features that are both discriminative and invariant to the change of domains [18]. It is an efficient way to reduce distribution discrepancy distance between the marginal distributions.
However, when data distributions have complex multimodal structures, which is a real scenario due to the nature of multiclass classification, adapting only the feature representation is a challenge for adversarial networks. Namely, even the domain discriminator is confused, we could not confirm the two distributions are sufficiently similar [63].
According to [30], it is significant to capture multimodal structures of distributions using cross-covariance dependency between the features and classes (i.e., true labels). Since there are no true-label target data in UDA, CDAN adopts outer product of feature representations and classifier predictions (i.e., outputs of the softmax layer) as new features of two domains [19], which is inspired by Conditional Generative Adversarial Networks (CGANs) [64], [65]. The newly constructed features have shown great ability to discriminate source and target domains, since classifier predictions of true-label source data are dispersed, expressing the predicted goal clearly.
However, in the complementary-label classification mode, we observe that the predicted probability of each class (i.e., each element of f s in Figure 3) is relatively close. Namely, it is hard to find significant predictive preference from the classifier predictions. According to [30], this kind of predictions cannot provide sufficient information to capture the multimodal structure of distributions. To fix it, we add a sharpening function T to scatter the predicted probability (the output of f = [f 1 , ..., f K ] T after Softmax function, f could be f s or f t in Figure 3).
In [66], a common approach of adjusting the "temperature" of this categorical distribution is defined as T(f)_k = f_k^{1/l} / Σ_{j=1}^{K} f_j^{1/l}, where l is the temperature hyper-parameter. As l → 0, the output of T(f) approaches a Dirac ("one-hot") distribution [67].
Then, to prioritize the discriminator on those easy-to-transfer examples, following [19], we measure the uncertainty of the prediction for a sample x by the entropy H(T(f(x))) = −Σ_{k=1}^{K} T(f_k(x)) log T(f_k(x)). A small value implies that each T(f_k(x)) is close to 0 or 1, which can be regarded as a prediction with high confidence due to the final softmax layer [68]. Based on this uncertainty measure, the scattered conditional adversarial loss L_adv is built with entropy-based weights ω_s(x) and ω_t(x) for source and target samples, where x is the feature part of d̄_s or d_t.
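The sketch below implements the sharpening function T and an entropy-based weight of the kind used for entropy conditioning in CDAN [19]. It is a minimal PyTorch illustration under the assumption that the weight has the common 1 + exp(−H) form; the exact weighting used by the authors is not fully recoverable from the extracted text, so the numbers here are illustrative only.

```python
import torch

def sharpen(f, l=0.5, eps=1e-8):
    """Sharpening function T: raise the softmax outputs to the power 1/l and
    renormalize; as l -> 0 the result approaches a one-hot distribution."""
    p = f.clamp_min(eps) ** (1.0 / l)
    return p / p.sum(dim=1, keepdim=True)

def entropy_weight(t_f, eps=1e-8):
    """Entropy-based weight in the spirit of CDAN's entropy conditioning [19]:
    confident (low-entropy) predictions receive larger weights in the
    scattered conditional adversarial loss."""
    h = -(t_f * (t_f + eps).log()).sum(dim=1)      # prediction uncertainty
    return 1.0 + torch.exp(-h)

# Complementary-label training tends to give flat predictions; sharpening
# makes them more peaked before conditioning the discriminator.
f = torch.softmax(torch.randn(4, 10), dim=1)
t_f = sharpen(f, l=0.5)
print(f.max(dim=1).values, t_f.max(dim=1).values, entropy_weight(t_f))
```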
B. Training Procedure of CLARINET
Based on the two losses proposed in Section V-A, CLARINET solves the minimax optimization problem in Eq. (25), in which D tries to distinguish the samples from different domains by minimizing L_adv, while F•G tries to maximize L_adv to make the domains indistinguishable. To solve this minimax problem, we add a gradient reversal layer [18] between the domain discriminator and the classifier, which multiplies the gradient by a negative constant (−λ) during back-propagation. λ is a hyper-parameter that trades off the source risk and the domain discrepancy. If min_k {L̄_s(G, F, d̄_{s,k})}_{k=1}^{K} ≥ 0, we calculate the gradient of L̄_s(G, F, d̄_s) and update the parameters of G and F using gradient descent (lines 8-9). Otherwise, we sum the negative elements of {L̄_s(G, F, d̄_{s,k})}_{k=1}^{K} as L_neg (line 11), calculate the gradient with respect to L_neg, and update the parameters of G and F using gradient ascent (line 12), as suggested by [24]. When the number of epochs (i.e., t) exceeds T_s, we start to update the parameters of D (line 14). We calculate the scattered conditional adversarial loss L_adv (line 15). Then, L_adv is minimized over D (line 16) but maximized over F•G (line 17) for adversarial training.
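A gradient reversal layer of the kind referred to above can be written in a few lines of PyTorch; the following sketch is a generic implementation of the idea from [18], not the authors' released code, and the lambda value and tensor sizes are arbitrary.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, multiplies the
    gradient by -lambda in the backward pass, so a single backward pass
    updates D by descent and F, G by ascent on the adversarial loss."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Quick check: the gradient reaching x is the reversed, scaled gradient.
x = torch.randn(8, 16, requires_grad=True)
out = grad_reverse(x, lam=0.5).sum()
out.backward()
print(x.grad[0, :3])   # each element equals -0.5
```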
C. CLARINET for PC-UDA Tasks
For PC-UDA tasks, we have both complementary-label data and true-label data in the source domain. In such cases, we want to leverage both kinds of labeled source data to help realize better adaptation results. The two loss functions mentioned in Section V-A are adjusted as follows.
After obtaining a mini-batch d_s from D_s, we can calculate the classification loss on the true-label data as L_s(G, F, d_s) = (1/n_s) Σ_{i=1}^{n_s} ℓ(F(G(x_i)), y_i), where ℓ is the cross-entropy loss, d_s = {(x_i, y_i)}_{i=1}^{n_s}, and n_s = |d_s|. We can then use a convex combination of the classification risks derived from true-label data and complementary-label data to replace the original, purely complementary-label based classification risk, as follows:
α L_s(G, F, d_s) + (1 − α) L̄_s(G, F, d̄_s), where α ∈ [0, 1] depends on the cost of labeling the two kinds of data.
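As a sketch, this convex combination can be computed as below in PyTorch; the function name, the α value, and the way the complementary-label loss value is supplied are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def pc_uda_classification_loss(logits_true, y_true, comp_loss_value, alpha=0.5):
    """Convex combination of the ordinary cross-entropy risk on the few
    true-label source samples and the complementary-label risk on the
    remaining source samples (comp_loss_value is assumed to come from the
    complementary-label estimator discussed above)."""
    ce = F.cross_entropy(logits_true, y_true)
    return alpha * ce + (1.0 - alpha) * comp_loss_value

# Usage sketch with made-up tensors:
logits = torch.randn(16, 10)
labels = torch.randint(0, 10, (16,))
comp_loss = torch.tensor(2.3)            # placeholder complementary-label loss value
print(pc_uda_classification_loss(logits, labels, comp_loss, alpha=0.3))
```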
The scattered conditional adversarial loss is adapted analogously for PC-UDA tasks, so that both kinds of labeled source data contribute to the source part of the adversarial objective.
VI. EXPERIMENTS
In this section, we conduct extensive evaluations of the proposed CLARINET on several common transfer tasks against many variants of state-of-the-art transfer learning methods (e.g., the two-step approach).
B. Baselines
We compare CLARINET with the following baselines: gradient ascent complementary label learning (GAC) [24], namely non-transfer method, and several two-step methods, which sequentially combine GAC with UDA methods (including DAN [16], DANN [18] and CDAN [19]). Thus, we have four possible baselines: GAC, GAC+DAN, GAC+DANN and GAC+CDAN. For two-step methods, they share the same pseudo-label source data on each task. Note that, in this paper, we use the entropy conditioning variant of CDAN (CDAN E).
C. Experimental Setup
In general, we compose feature extractor G from several CNN layers and one fully connected layer, picking their structures from previous works. The label predictor F and domain discriminator D all share the same structure in all tasks, following CDAN [19].
More precisely, four different architectures of G are used in our experiments (as shown in Figures 5, 6, 7, and 8). In the C → T task, we adopt the structure provided in MT [76]. For the U → M and M → U tasks, we use the LeNet provided in CDAN [19]. For the S → M, Y → M, and Y → S tasks, we use the DTN provided in CDAN [19]. In the M → m task, we adopt the structure provided in DANN [18].
We follow the standard protocols for unsupervised domain adaptation and compare the average classification accuracy based on 5 random experiments. For each experiment, we take the result of the last epoch.
The batch size is set to 128 and we train for 500 epochs. The SGD optimizer (momentum = 0.9, weight decay = 5e-5) is used with an initial learning rate of 0.005 in the adversarial network and 5e-5 in the classifier. In the sharpening function T, l is set to 0.5. For the other specific parameters of the baselines, we follow the original settings. We implement all methods with default parameters in PyTorch. The code of CLARINET is available at github.com/Yiyang98/BFUDA.
D. Results on CC-UDA Tasks
Table I reports the target-domain accuracy of 5 methods on 7 CC-UDA tasks. As can be seen, our CLARINET performs best on each task and the average accuracy of CLARINET is significantly higher than those of the baselines. Compared with the GAC method, CLARINET successfully transfers knowledge from complementary-label source data to unlabeled target data. Since CDAN has shown much better adaptation performance than DANN and DAN [19], GAC+CDAN should outperform the other two-step methods on each task. However, on the task U → M, the accuracy of GAC+CDAN is much lower than that of GAC+DANN. This abnormal phenomenon shows that the noise contained in pseudo-label source data significantly reduces the transferability of existing UDA methods. Namely, we cannot obtain reliable adaptation performance by using the two-step CC-UDA approach.
CIFAR → STL. CIFAR and STL are 10-class object recognition datasets with colored images. We remove the nonoverlapping classes ("frog" and "monkey") and readjust the labels to align the two datasets. Namely this task reduce to a 9-class classification problem. Furthermore, we downscale the 96 × 96 image dimesion of STL to match the 32 × 32 dimension of CIFAR. As shown in Figure 4 (a), two-step methods could hardly realize knowledge transfer, while our CLARINET's performance surpasses others by a comfortable margin.
MNIST ↔ USPS. MNIST and USPS are both grayscale digits images, thus the distribution discrepancy between the two tasks is relatively small. As shown in Figure 4 with three channels to match SVHN. Because of the above factors, the gap between two distributions are relatively larger compared to that of the MNIST ↔ USPS. As shown in Figure 4 (d), GAC+CDAN perform much better than GAC+DAN and GAC+DANN, but still worse than our CLARINET.
MNIST → MNIST-M. MNIST-M is a dataset transformed from MNIST, composed by merging clips of backgrounds from the BSDS500 dataset [77]. For a human, the classification task on MNIST-M becomes only slightly harder, whereas for a CNN trained on MNIST this domain is quite different, as the background and the strokes are no longer constant. As shown in Figure 4 (e), our method is slightly more effective than GAC+CDAN and far more effective than the remaining baselines.
SYN-DIGITS → MNIST. This adaptation reflects a common adaptation problem of transferring from synthetic images to real images. The SYN-DIGITS dataset consists of a huge amount of data, generated from Windows fonts by varying the text, positioning, orientation, background, stroke color, and the amount of blur. As shown in Figure 4 (f), our method outperforms the other baselines and achieves a rather high accuracy. Thus, with sufficient source data, CLARINET can achieve excellent results.
SYN-DIGITS → SVHN. This adaptation is another common adaptation problem of transferring from synthetic images to real images, but it is more challenging than in the case of the MNIST experiment. As shown in Figure 4 (g), our method is clearly more effective than the other baselines. GAC+DANN is not suitable for this task, achieving the lowest accuracy.
E. Results on PC-UDA Tasks
Table II reports the target-domain accuracy of CLARINET on PC-UDA tasks when we have different amounts of true-label source data. "true only" means training on a certain number of true-label source data with an ordinary UDA method. "com only" means training on complementary-label source data only with CLARINET. "com+true" stands for training on a certain number of true-label source data together with complementary-label source data with CLARINET. In general, the accuracy of CLARINET increases when increasing the amount of true-label source data from 0 to 1000. Thus, CLARINET can sufficiently leverage true-label source data to improve adaptation performance.
The improvement is especially evident on the U → M task and the S → M task. For the U → M task, this is probably because the sample size of USPS is relatively small, so the true-label data actually account for a large share. For the S → M task, SVHN is complicated for complementary-label learning, hence adding a small number of true-label data helps to train a more accurate classifier.
(Table III: Ablation study. Bold values represent the highest accuracy (%) in each column. UDA methods cannot handle complementary-label based UDA tasks directly; the conditional adversarial part and the sharpening function T help improve the adaptation performance.)
This phenomenon also reminds us that, for complex datasets, adding some true-label data to assist training is quite appropriate. On the Y → M task, adding true-label source data does not bring significant improvement, most likely because the result on complementary-label data is already relatively good and true-label source data cannot help achieve a better result.
We also compare the efficacy of true-label source data with complementary-label source data. Taking S→M task as an example (shown in the left part of Figure 4 (h)), we compare the target-domain accuracy of ordinary UDA method trained with different amount of true-label source data and that of CLARINET trained with complementary-label source data only ("CLs Only"). The accuracy decreases significantly when reducing the amount of true-label source data, which suggests that sufficient true-label source data are inevitably required in UDA scenario. Then we compare the target-domain accuracy of CLARINET trained with complementary-label source data only with that of CLARINET trained with different amount of true-label and complementary-label source data. It is clear that CLARINET effectively uses two kinds of data to obtain better adaptation performance than using complementary-label source data only. Besides, as the number of true-label source data used increases, the classification accuracy becomes better (shown in the right part of Figure 4 (h)).
F. Ablation Study
Finally, we conduct experiments to show the contributions of the different components of CLARINET. We consider the following baselines:
• C w/ L_CE: train CLARINET by Algorithm 2, while replacing L̄_s(G, F, D̄_s) with the cross-entropy loss.
• C w/o c: train CLARINET without conditioning, namely train the domain discriminator D only on the feature representations g_s and g_t.
• C w/o T: train CLARINET by Algorithm 2, without the sharpening function T.
C w/ L_CE uses the cross-entropy loss in place of the complementary-label loss; it effectively stands for applying ordinary UDA methods directly to complementary-label based UDA tasks. The target-domain accuracy of C w/ L_CE shows whether UDA methods can address the complementary-label based UDA problem. C w/o c trains the domain discriminator D only on the feature representations g_s and g_t, so the result indicates whether the conditional adversarial scheme can capture the multimodal structures and thereby improve the transfer effect. Note that the sharpening function T plays no role in this variant, as it acts on the label predictions f_s and f_t. Comparing CLARINET with C w/o T reveals whether the sharpening function T takes effect.
As shown in Table III, the target-domain accuracy of C w/ L CE is much lower than that of other methods. Namely, UDA methods cannot handle complementary-label based UDA tasks directly. Its result is not even as good as random classification, as the network is trained taking the wrong label as the target result. Comparing with C w/o T , C w/o c has a worse performance, which proves that the conditional adversarial way could really improve the transfer effect. Therefore, it is necessary to capture the multimodal structures of distributions with cross-covariance dependency between the features and classes in the field of adversarial based UDA. Although C w/o T achieves better accuracy than other baselines, its accuracy still worse than CLARINET's. The result reveals that the sharpening function T helps to capture multimodal structures of distributions on basis of the characteristics of complementary-label learning. Thus, the sharpening function T can improve the adaptation performance.
VII. CONCLUSION AND FURTHER STUDY
This paper presents a new setting, complementary-label based UDA, which exploits economical complementary-label source data instead of expensive true-label source data. We consider two cases of complementary-label based UDA: one in which the source domain contains only complementary-label data (CC-UDA), and one in which the source domain has plenty of complementary-label data and a small amount of true-label data (PC-UDA). Since existing UDA methods cannot address the complementary-label based UDA problem, we propose a novel one-step approach, called complementary label adversarial network (CLARINET), which can handle both CC-UDA and PC-UDA tasks. Experiments conducted on 7 complementary-label based UDA tasks confirm that CLARINET effectively achieves distributional adaptation from complementary-label source data to unlabeled target data and outperforms a series of competitive baselines. In the future, we plan to explore more effective ways to solve complementary-label based UDA and to extend the application of complementary labels in domain adaptation.

Fig. 8: The architecture of CLARINET for the M → m task. Feature extractor G is adopted from [18].

Guangquan Zhang is a Professor and Director of the Decision Systems and e-Service Intelligent (DeSI) Research Laboratory, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia. He received his PhD in applied mathematics from Curtin University of Technology, Australia, in 2001. His research interests include fuzzy machine learning, fuzzy optimization, and machine learning and data analytics. He has authored four monographs, five textbooks, and 350 papers including 160 refereed international journal papers. Dr. Zhang has won seven Australian Research Council (ARC) Discovery Project grants and many other research grants. He was awarded an ARC QEII Fellowship in 2005. He has served as a member of the editorial boards of several international journals, as a guest editor of eight special issues for IEEE Transactions and other international journals, and has co-chaired several international conferences and workshops in the area of fuzzy decision-making and knowledge engineering. | 2020-08-05T01:01:20.820Z | 2020-08-04T00:00:00.000 | {
"year": 2020,
"sha1": "67e7399be0df5d8431fa4a14c828cfb07ed22606",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2008.01454",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "67e7399be0df5d8431fa4a14c828cfb07ed22606",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science",
"Mathematics"
]
} |
73549956 | pes2o/s2orc | v3-fos-license | Induced Settlement Reduction of Adjacent Masonry Building in Residential Constructions
Many buildings and heritage structures are damaged every year by new construction in their vicinity. Current engineering practice in Iran unfortunately lacks regulations that would require the designers of new buildings to re-evaluate the structural integrity of adjacent old buildings that are prone to unacceptable induced settlement and distortion. For damage assessment of the old building, the deflection ratio was used for an unreinforced load-bearing wall (masonry) building. In this paper, practical measures such as story limits for the new building according to the characteristics of the old structure, improvement of the shallow foundation system, and increasing the embedment depth of the new foundation are studied in order to reduce the settlement and other undesirable effects of adjacent construction. Parametric studies using numerical analysis in Flac3D show how the above-mentioned methods can remedy the problem, and the consolidation settlements induced in the adjacent building by the new construction were studied. In conclusion, the admissible number of stories of the new building increases by one floor when the embedment depth is increased by one meter, and by three floors when a mat foundation is used instead of single footings.
Introduction
Today, with the increasing development of cities and high-rise buildings in urban areas, it is necessary to pay attention to the destructive effects of new construction on existing old buildings, which are mainly small masonry structures. The many complaints and appeals lodged every year with municipalities and local courts regarding disruptive events caused by new construction in densely populated zones reflect citizens' dissatisfaction with the existing design and construction practice and the lack of mutual consideration when designing a new building.
Votyakov [1] studied the additional settlement of a building due to the foundation of an adjacent building. It is known that the bases of two adjacent structures undergo increased deformations in zones where their stresses superpose.

Murthy [2] studied the interference effect of surface footings on sand, but the two footings were not loaded simultaneously; one of the two footings was first loaded to its safe bearing capacity, and the load on the adjacent footing was then increased up to the ultimate load.

Selvadurai [3] examined the problem of the interaction between a loaded rigid circular foundation resting in smooth contact with an isotropic elastic half-space and a distributed loading acting at an exterior point on the surface of the half-space. His studies showed that the settlement of the rigid circular foundation due to an externally placed concentrated force can be evaluated in exact closed form.

Amir [4] and Saran and Amir [5] extended the pressure-settlement characteristics of an isolated footing to interfering strip foundations by adopting a hyperbolic constitutive law for sand.

The investigation by Ahmed et al. [6] showed that the magnitude of settlement and tilt of interfering footings is affected by the ratio of footing spacing to footing width. Therefore, proportioning of interfering footings should be carried out with an actual estimation of settlement and tilt.

A study of 225 building failures in the United States from 1989 to 2000 shows that failures of low-rise buildings constitute about 63% of all cases, and 25% of them were attributed to the construction procedure [7].
Zdravkovic and Neo [8] presented the results of finite element analyses performed to investigate the effect of spacing on the bearing capacity and deformation of rough, rigid strip surface footings. The analyses show that the reduction in bearing capacity for spacings of B and 0.5B is less than 3%, and that a significant effect is observed only at close spacings of 0.5B and B, with all the settlement components increasing by up to 40% compared with the settlement of a single footing. An experimental study on the effect of interference between two adjacent surface strip footings on reinforced sand, based on a number of laboratory scaled model tests, was carried out by Ghosh and Kumar [9]. Their investigation showed that the ultimate bearing capacity and settlement of the footings generally become maximum, in terms of efficiency factors (ξγ and ξδ), at a certain critical spacing between the footings. López Gayarre et al. [10] investigated a pile foundation failure in a set of buildings in Gijon (NW Spain). By considering several hypotheses for the failure, they concluded that exposure and rotting of the tops of the wooden piles, caused by the drop in groundwater level due to pumping in the adjacent excavation for a new building, was a plausible explanation for the cracks observed in the block. Ghosh and Kumar [11] carried out a number of model tests to verify the effect of interfering footings on cohesionless layered soil against experimental observations and theoretical results. The effect of the centre-to-centre spacing (S) between two adjacent footings on their bearing capacity and settlement at failure was the focus of this work; the results are presented in terms of efficiency factors (ξγ, ξδ), and their variation was obtained with the change in S.
Nainegali et al. [12] developed 2-D and 3-D finite element code to analyze two closely spaced footings, considering a linearly varying elastic modulus of the soil. The influence of various parameters, such as footing width, spacing between the footings, length-to-width ratio, and modulus of elasticity, on the load-settlement behavior of the footings was highlighted in this research.

Alimardani Lavasan and Ghazavi [13] conducted an experimental investigation to evaluate the ultimate bearing capacity, settlement, and tilt of two types of closely spaced footings, one square and the other circular, on unreinforced and reinforced soil. The ultimate bearing capacity of the interfering footings increased by about 25-40%, whereas the settlement of the interfering footings at the ultimate load increased in the range of 60-100%. However, the closely spaced footings tilted by approximately 45% and 75% for sand reinforced with one and two layers of geogrid, respectively.

Saibaba Reddy et al. [14] presented the results of a series of model tests in which two adjacently placed rough footings resting on sand were loaded simultaneously in order to study the effect of footing size and spacing on the load-settlement and failure behavior. The experimental results obtained from the model tests indicated that the proximity of footings founded on sand enhances their behavior both in terms of settlement and ultimate bearing capacity. The interference effect is negligible when the spacing between the footings is greater than six times the footing size.

Panjalizadeh Marseh et al. [15] studied the negative effects of adjacent building construction in a case study, and various solutions were offered to reduce its adverse effects.

Hefdhallah [16] presented a case history that shows the effect of neighboring footings on the settlement of a single footing. The case history consists of 28 auxiliary buildings of an electrical power plant near Cairo, Egypt. Settlement analysis was carried out by computing a profile of the elastic stress increase due to all loaded areas at the foundation level. The results of the analysis suggested that the effect of neighboring footings could be important, to the extent that it necessitated changing the foundation system from isolated footings to a raft foundation in light of the maximum allowable settlement of each foundation system.

A forensic analysis was undertaken on a 5-storey RC building in Athens by Anastasopoulos [17]. For more than 30 years, no damage had been observed in this building. In 1999, construction of an adjacent 5-storey RC building began, and shear cracks in the old building started appearing. In order to assess the relative importance of the two factors (construction of the adjacent structure vs. construction defects), numerical analyses were conducted modeling both buildings in detail. The results showed that almost 70% (3.5 cm) of the differential settlement had already taken place before construction of the adjacent building; the neighboring building induced just 1.5 cm of the differential settlement.
Alimardani Lavasan and Ghazavi [18] studied the variation in ultimate bearing capacity, failure mechanism, and deformation pattern of the soil beneath two closely spaced square footings. The presented numerical analyses are based on the explicit finite-difference code Flac3D. The results of the numerical modeling of interfering square footings show that the interference causes a significant increase in the ultimate bearing capacity, up to 1.5 times that obtained for an isolated identical footing.

An experimental investigation of the bearing capacity and settlement of two closely spaced square footings was conducted by Dhatrak and Awachat [19]. The parameters were the spacing of the footings, the reinforcement layers, and the magnitude of the prestressing force. The spacing between footings was varied from 1B to 3B along the width for both the left and right footings. The prestressed reinforcement resulted in an increase in bearing capacity and a reduction in the settlement of the soil.

The impact of adjacent footings on the immediate settlement of shallow footings was studied by Aytekin [20]. In the estimation of elastic (immediate) settlement, Schmertmann's method was employed. The results show that the settlement of an isolated footing with adjacent footings is always larger than that of one with no adjacent footing, and that the effect of adjacent footings on the settlement of the main footing increases linearly with the number of adjacent footings.

Aghahadi Forushani and Mehrnahad [21] conducted a case study on the use of piles to reduce the induced settlement in an existing adjacent building following the construction of a new building. The results revealed that a proper arrangement of piles in the first row of piling, with an axis-to-axis spacing of 3 m and a length of 15 m next to the old building, can rule out structural cracks.
As prevention is always better than retrofitting, checking for potential failure mechanisms and revising the current regulations so as to avert such situations is conceived to be effective.

The construction of tall buildings induces significant stresses in the ground, which not only spread in depth but are also distributed laterally. Increments of stress transferred beneath the existing foundations induce ground deformations that may affect existing structures.

Although the resulting ground settlements rarely cause major structural damage to buildings, aesthetic and serviceability damage may occur. Structures adjacent to a new building, in particular old masonry buildings, may suffer damage resulting in costly repairs or require expensive protective measures, such as underpinning or compensation grouting [22]. This paper describes some case studies and potential preventive measures to control the angular distortion of the adjacent building due to construction in its neighborhood. A three-dimensional finite difference analysis is used to investigate some of these measures numerically.
Case Studies
The motivation for this study arose from involvement in forensic analyses of several failure cases in Guilan province, northern Iran. The first case relates to the failure of a cottage house adjacent to a five-story building in the city of Rasht. After construction of the five-story building in its vicinity, cracks were found in the facade (Figure 1). The large vertical crack was around 2 cm wide at first, subsequently widening markedly until reaching some 10 cm. This conclusion was included in the ruling of the Court of Justice of Rasht, which states that "there exists a necessary causal link between the construction of the new building and the damage to the masonry building". The second case refers to the failure of a two-story building induced by additional settlement due to a new construction nearby. Figure 2 shows one of the cracks that occurred in the interior partitions of this building due to the adjacent construction of a new six-story structure. The rupture can be traced along the joints of the bricks, where the mortar offers less resistance. These cracks were caused by differential settlement of the existing foundation induced by the settlement of the new building.
Figure 2. Adjacent building failure case II located in Somesara, Guilan (a) location of two building and (b) cracks in the walls due to differential settlement
There are quite a few other cases suffering additional settlement-induced failure in Guilan province which are out of the scope of this paper.This study has just exemplified the failure phenomena by bringing some real cases.The balance of the study only focuses on the general causes of similar failure by conducting some numerical parametric studies.
Mechanism of Foundation Damage
When a new building is constructed in the vicinity of another building, two major geotechnical risks arise: 1. Soil rupture and sliding in the vicinity of, or underneath, the existing foundation, caused by the excavation for the new foundation.

Figure 3b shows the displacement of an existing footing caused by the excavation for a new footing [23]. The excavation wall can rupture and slide or, because the lateral support has been removed, undergo lateral deformation known as "bulging". 2. Differential settlement between existing foundations induced by stress increments from the new construction. The new structure is often built on one side of the old structure, introducing stress increments on the adjacent side, so the foundations, or those parts of the foundations, that are closer to the new structure settle more than those that are farther away. This process leads to the so-called angular distortion in the old building, shown schematically in Figure 4. (b) If the new footing is lower than the existing footing: Figure 3b indicates that the soil may flow laterally from beneath the existing footing. This may increase the amount of excavation somewhat but, more importantly, may result in settlement cracks in the existing building. The problem is difficult to analyze; however, an approximation of the safe depth may be made for a ϕ-c soil, since σ3 = 0 on the vertical face of the excavation. The vertical pressure σ1 would include the pressure from the existing footing. The analysis follows the Mohr-Coulomb failure condition, σ1 = σ3 tan²(45° + ϕ/2) + 2c tan(45° + ϕ/2); with σ3 = 0 on the face, the limiting vertical pressure is σ1 = 2c tan(45° + ϕ/2). Setting σ1 (overburden plus the pressure transmitted from the existing footing) equal to this limit, and applying a safety factor SF, one obtains the safe excavation depth. It is difficult to compute how close one may excavate to existing footings such as those of Figure 3 before the adjacent structure is distressed. The problem may be avoided by constructing a wall (sheet pile or other material) to retain the soil [23].
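As a hedged illustration of the check described above (not the exact expression in [23]; the function name, the way the footing pressure is combined with the overburden, and the numerical values are assumptions for demonstration only), the safe unsupported excavation depth can be estimated as follows:

```python
# Hedged sketch of the safe-excavation-depth check, using the standard
# Mohr-Coulomb relation sigma1 = sigma3*Nphi + 2*c*sqrt(Nphi) with
# sigma3 = 0 on the unsupported vertical face.
import math


def safe_excavation_depth(c, phi_deg, gamma, q_footing=0.0, sf=1.5):
    """Depth (m) at which the vertical stress (soil overburden plus pressure
    transmitted from the existing footing) reaches the strength of the
    unsupported face, reduced by a safety factor SF.

    c         cohesion (kPa)
    phi_deg   friction angle (degrees)
    gamma     unit weight of soil (kN/m3)
    q_footing vertical stress from the existing footing at the face (kPa)
    sf        safety factor
    """
    nphi = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2
    sigma1_limit = 2.0 * c * math.sqrt(nphi)          # sigma3 = 0 on the face
    depth = (sigma1_limit / sf - q_footing) / gamma   # gamma*z + q <= limit/SF
    return max(depth, 0.0)


# Example (hypothetical soft clay): c = 30 kPa, phi = 5 deg, gamma = 17 kN/m3,
# 30 kPa transmitted from the existing footing, SF = 1.5
print(round(safe_excavation_depth(30.0, 5.0, 17.0, 30.0, 1.5), 2))
```

A result of zero would indicate that no unsupported cut next to the loaded footing is safe, so that a retaining wall of the kind mentioned above becomes necessary.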
Regulatory Limits
Regulatory limits for settlement are divided into total and differential settlement. Structural damage occurs if the differential settlement exceeds the limiting value. Common limits used to control settlements in urban buildings are shown in Table 1. The most meaningful distortion parameter in the context of wall cracking in masonry buildings is the deflection ratio, which is defined as the maximum vertical displacement (Δ) relative to the straight line connecting two points, divided by the length (L) between those two points [25].
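As a minimal sketch of this damage criterion (the settlement values and point positions are hypothetical; the 1/2500 and 1/5000 limits are the thresholds adopted later in this study), the deflection ratio can be computed from a measured or predicted settlement profile as follows:

```python
# Minimal sketch of the deflection-ratio damage check used in this study.
import numpy as np


def deflection_ratio(settlements, positions):
    """Maximum vertical displacement relative to the straight line joining the
    two end points, divided by the distance L between those points."""
    s = np.asarray(settlements, dtype=float)
    x = np.asarray(positions, dtype=float)
    chord = s[0] + (s[-1] - s[0]) * (x - x[0]) / (x[-1] - x[0])
    delta = np.max(np.abs(s - chord))
    return delta / (x[-1] - x[0])


# Example: 10 m long wall, settlements (m) measured at five points
ratio = deflection_ratio([0.010, 0.014, 0.016, 0.013, 0.011], [0, 2.5, 5, 7.5, 10])
print(ratio, ratio > 1 / 2500)  # compare with the sagging limit
```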
Damage Reduction Measures
Practical measures to prevent damage to the existing building after new construction nearby can be summarized as follows: (1) restriction of the number of stories of the new construction; (2) increasing the embedment depth of the new foundation; (3) proper choice of the shallow foundation system; (4) provision of deep foundations; and (5) improving the load-bearing characteristics of the subsoil underneath both the old and new buildings.

A combination of the above-listed measures can be adopted for a construction project so as to minimize the side effects of the new building construction on the adjacent building. The current study focuses on the first three methods through a numerical parametric study.
Numerical Modeling
In the city of Rasht, there are many masonry buildings with one or two floors. Most of these masonry buildings have been constructed with cement blocks, and their structural system is unreinforced load-bearing walls. Because they lack a qualified structural system and a proper foundation, these old masonry buildings are sensitive to additional settlement. Owing to economic development and population growth, new buildings are constructed as tall buildings, which may be located next to buildings with few floors. The majority of the new structures have frame systems and are constructed with more than four floors. Therefore, in designing such buildings, in addition to the soil properties, which must be carefully evaluated, it is necessary to consider the status of adjacent structures.
The Flac3D software, based on the finite difference method, was used to numerically investigate the effect of adjacent building construction. In order to set up the numerical model and carry out the simulations, three fundamental components of the finite difference approach must be specified: the grid geometry, the boundary and initial conditions, and the constitutive behavior.

Artificial boundaries fall into two categories: planes of symmetry and planes of truncation. A half-symmetry model was created to take advantage of the fact that the geometry and loading of the system are symmetrical about one plane.

The grid is defined from the geometry of the model as shown in Figure 6. The procedure of geometric modeling was simplified using 'Primitive Shape' zones: 'Radial Tunnel' and 'Brick' elements were used to define the geometry of the soil and the foundations, respectively.

The boundary conditions applied to the numerical models consist of roller boundaries on the outer and bottom planes. The geometry dimensions and boundary conditions of the model are shown in Table 2. Flac3D offers a variety of material models; the Mohr-Coulomb model, which allows plastic deformation after the failure criterion (strength) is met and employs a linearly elastic-perfectly plastic stress-strain response, was used for the soil elements. A consolidation analysis was used for the settlement calculation. The formulation of coupled deformation-diffusion processes in Flac3D is based on the quasi-static Biot theory and can be applied to problems involving single-phase Darcy flow in a porous medium. In this research, the geotechnical parameters of the soil of Rasht (southern area of the Caspian Sea) are considered. Sekhavatian [26] investigated the general soil profile and geotechnical properties of Rasht clayey soil; the properties of the clay materials according to these studies are summarized in Table 3. The overall dimensions of the model depend on the foundation width B; the boundaries should be far enough away not to be affected by the stress bulbs beneath the foundations. For the parametric study of induced settlement, the old and new foundations were modeled separately from their structures. For damage assessment of the old building, the induced deflection ratio is traced in the old foundation; the definition and limiting values of the deflection ratio were discussed in Section 4.
Table 3. Properties of chosen clay
The foundation of the new building is assumed to be 20 m × 20 m in plan, and that of the adjacent building 10 m × 10 m. The overall dimensions of the model were then chosen as 160 m in length, 50 m in breadth, and 50 m in height. According to the Iranian building loading manual, the total dead and live loads were taken as 1000 kg/m² for each story of the building.

Before applying the loads to the system, the model was brought to equilibrium under the initial state; to simulate this condition, the model was run under the initial stresses caused by the self-weight of the soil.

Two different foundation systems were adopted in this study, as demonstrated in Figure 7: single footings and a mat foundation were chosen for the new building in order to study the effect of the foundation type.
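As a rough, hand-calculation illustration of the mechanism studied here (this is not the coupled Biot consolidation analysis performed in Flac3D, and all parameter values, layer properties, and function names are hypothetical), the vertical stress increment spreading from the new foundation can be estimated with the approximate 2:1 load-spread method and converted into a one-dimensional consolidation settlement; stresses at points offset beneath the neighboring footing would, strictly, require a Boussinesq-type solution or the numerical model itself:

```python
# Hedged, back-of-the-envelope illustration of stress increment and
# consolidation settlement; not the study's FLAC3D analysis.
import math


def stress_increment_2to1(q, B, L, z):
    """Average vertical stress increase at depth z below a B x L loaded area,
    using the approximate 2:1 load-spread method."""
    return q * B * L / ((B + z) * (L + z))


def consolidation_settlement(layers, dsigma_fn):
    """Sum Cc/(1+e0) * H * log10((sigma0 + dsigma)/sigma0) over sub-layers.
    `layers` holds tuples (z_mid, H, sigma0, e0, Cc); dsigma_fn(z) returns the
    stress increment at depth z."""
    s = 0.0
    for z_mid, H, sigma0, e0, Cc in layers:
        dsig = dsigma_fn(z_mid)
        s += Cc / (1.0 + e0) * H * math.log10((sigma0 + dsig) / sigma0)
    return s


# New mat foundation: 20 m x 20 m carrying 5 stories x 10 kPa each (illustrative)
q_new = 5 * 10.0


def dsigma_new(z):
    return stress_increment_2to1(q_new, 20.0, 20.0, z)


# Two clay sub-layers (hypothetical mid-depth, thickness, sigma0, e0, Cc)
layers = [(2.5, 5.0, 60.0, 0.9, 0.25), (7.5, 5.0, 110.0, 0.9, 0.25)]
print(round(consolidation_settlement(layers, dsigma_new), 3), "m")
```

The sketch only shows how the stress increment decays with depth, which is the reason deeper embedment and load-spreading (mat) foundations reduce the distortion imposed on the neighboring building, as the parametric results below indicate.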
Parametric Studies
As stated before, three measures, namely story limits for the new building, increasing the embedment depth of the new foundation, and the proper choice of the shallow foundation system for the new building, are typical methods to reduce the damage imposed on the adjacent old building. The distortion threshold of the old building was taken as 1/5000 for cracking by relative hogging and 1/2500 for cracking by relative sagging, and these limits were compared with the deflection ratio induced in the old building by construction in its vicinity.

The induced deflection ratio in the adjacent old building was calculated for every story increment in the new building. The results of these calculations are provided in Figure 8 for the different foundation systems of the new building; the old building is assumed to have one story.

With an increasing number of stories in the new building, the deflection ratio of the old building increases.

As expected, improving the shallow foundation system leads to an increase in the story limit for the new building. With an increase in the embedment depth of the new foundation, for both the single footing and mat foundation types, the limiting number of stories increases in comparison with shallower constructions. This is likely because the stress bulbs induced by the new construction are transferred to deeper, stiffer zones, making the adjacent building less sensitive. However, the increase in embedment depth will have an adverse effect if the vertical trench is left unsupported, causing the old adjacent building to experience failure; a sheet pile or any other retaining structure could be an option in such a case to increase safety. Figure 13 compares the story limits of the new building for the two types of foundation and summarizes the above results.
Conclusion
This paper focused on the effect of new building construction on the deflection ratio of an adjacent old building. To this end, three different measures applicable to urban building construction were examined: (a) story limits for the new building; (b) increasing the embedment depth of the new foundation; and (c) the proper choice of the shallow foundation system for the new building. Flac3D was used for the consolidation analysis in the settlement calculations. The detailed numerical analyses showed that the foundation type has a significant effect: a mat foundation is an option when the adjacent old building is prone to an excessive deflection ratio. Furthermore, it was found that the embedment depth makes a substantial contribution by transferring the stress bulbs to lower layers, thereby reducing the destructive effects.

Provided the trench is kept supported, increasing the embedment depth of the new foundation, for both the single footing and mat foundation types, increases the limiting number of stories in comparison with shallower constructions.

The study showed that increasing the embedment depth of the new foundation by one meter raises the story limit of the new building by at least one story; in other words, the allowable building height increases by about 3 m for each additional meter of embedment depth.
Figure 1. Adjacent building failure case I located in Rasht, Guilan: (a) location of the two buildings and (b) details of the facade failure.

Figure 3. Location considerations for spread footings: (a) an approximation for the spacing of footings to avoid interference between old and new footings (if the "new" footing is in the relative position of the "existing" footing of this figure, interchange the words "existing" and "new"; make m > zf); (b) possible settlement of the "existing" footing because of the loss of lateral support of the soil wedge beneath it [23].

Figure 4. Angular distortion in the old structure. According to the relative position of the two foundations it is recommended [23]: (a) if the new footing is higher than the existing footing and footings are to be placed adjacent to an existing structure, as indicated in Figure 3a, the line from the base of the new footing to the bottom edge of the existing footing should make an angle of 45° or less with the horizontal plane; from this requirement it follows that the distance m in Figure 3a should be greater than the difference in elevation of the two footings.

Figure 6. Mesh discretization for the soil model in Flac3D; types of new foundation: (a) single footing; (b) mat foundation.

Figure 8. Story limits in the new building with Df = 1 for the different types of new foundation.

Figure 9. Story limits in the new building with Df = 2 for the different types of new foundation.

Figure 13. Story limits for the new building.
Table 1 . Limitations of total and differential settlement in urban buildings [24]
* For unreinforced load bearing walls | 2018-12-20T17:59:55.328Z | 2017-07-30T00:00:00.000 | {
"year": 2017,
"sha1": "bbdc3bf08df07f3dac4084a8d7018b6504785b73",
"oa_license": "CCBY",
"oa_url": "https://civilejournal.org/index.php/cej/article/download/225/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bbdc3bf08df07f3dac4084a8d7018b6504785b73",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
266726327 | pes2o/s2orc | v3-fos-license | Electroencephalography Findings in Menstrually-Related Mood Disorders: A Critical Review
The female reproductive years are characterized by fluctuations in ovarian hormones across the menstrual cycle, which have the potential to modulate neurophysiological and behavioral dynamics. Menstrually-related mood disorders (MRMDs) comprise cognitive-affective or somatic symptoms that are thought to be triggered by the rapid fluctuations in ovarian hormones in the luteal phase of the menstrual cycle. MRMDs include premenstrual syndrome (PMS), premenstrual dysphoric disorder (PMDD)
Ovarian hormone fluctuations play a prominent role in the pathophysiological mechanisms of menstrual-related mood disorders (MRMDs) (Rubinow and Schmidt, 2018).MRMDs include premenstrual syndrome (PMS), premenstrual dysphoric disorder (PMDD), and premenstrual exacerbation (PME) of other psychiatric disorders (Yonkers and Simoni, 2018).MRMDs can lead to severe psychological symptoms, as well as functional and interpersonal impairment, which can foster suicidal ideation and behaviour (Wikman et al., 2022;Eisenlohr-Moul, 2022).The premenstrual symptoms associated with MRMDs most commonly entail emotional distress, social, and occupational impairments, partly overlapping with those of major mood disorders, anxiety, and post-traumatic stress disorders (Halbreich, 2003;Epperson et al., 2012).
PMS has been described as a variety of psychological and physical symptoms, experienced during the late luteal phase of the cycle, that subside shortly following subsequent menstruation, if not better explained by other diagnosis (Biggs and Demuth, 2011).A severe form of PMS, known as premenstrual dysphoric disorder (PMDD) has been categorized as a depressive disorder and recently added to the DSM-5 (Epperson et al., 2012).On the other hand, significant worsening of the emotional and behavioural symptoms underlying an affective disorder during the luteal phase is classified as PME.Due to lack of comprehensive diagnostic schemes (Halbreich, 2004) and poor awareness in the medical and research field (Eisenlohr-Moul, 2019), individuals with PMDD are often misdiagnosed with a non-cyclical affective disorder or PME (Hantsoo et al., 2022).Several clinical conditions have been linked to PME such as depression (Kuehner and Nayman, 2021;Handy et al., 2022), bipolar disorder (Payne, 2011;Pearlstein, 2022), and schizophrenia (Seeman, 2012), although the syndrome is not yet defined as a diagnostic entity, with its prevalence and neurobiological underpinnings being even less investigated than those of PMDD.
A main characteristic of the hormonal fluctuation-related symptomatology in MRMDs is the temporal variance in affective, cognitive, or somatic states (Rubinow and Schmidt, 2003).This cluster of symptoms poses several methodological challenges, likely complicating the identification of the psychobiology of MRMDs.In fact, knowledge of the neurophysiological mechanisms of MRMDs is scarce.Symptomatology or symptom exacerbation is temporally linked to the luteal phase of the cycle and thus likely the luteal-phase specific surge in progesterone and estradiol (Eisenlohr-Moul, 2019).Considering that ovarian hormone concentrations in persons with PMS and PMDD have been reported to lay within standard ranges (Backstrom et al., 1983;Hantsoo and Epperson, 2015;Schmidt et al., 1994), current evidence suggests an altered central sensitivity to physiological phasic hormone fluctuations in individuals affected by MRMDs (Schmidt et al., 1998;Schmidt et al., 2017).Indeed, pharmacological suppression of these fluctuations with gonadotropin-releasing hormone (GnRH) agonists leads to significant remission of premenstrual symptoms in persons with PMS or PMDD (Schmidt et al., 1998;Wyatt et al., 2004;Sundström et al., 1999;Segebladh et al., 2009).Albeit, the distinct effects of estradiol and progesterone on premenstrual symptomatology and the surrounding contribution of other phenomena such as receptor plasticity and dynamics have not yet been disentangled.
Different mechanisms have been suggested to underlie the ovarian hormone sensitivity hypothesized for MRMDs, one of which is highlighting the contribution of luteal progesterone fluctuations to premenstrual symptoms (Wyatt et al., 2004;Comasco et al., 2021), or those of its neuroactive metabolite allopregnanolone (Bixo et al., 2018).This has been corroborated by preliminary evidence indicating altered γ-Aminobutyric acid (GABA) function in PMDD in comparison with healthy controls.Namely, lower GABA concentrations have been found in subjects with PMDD in the luteal phase compared with healthy controls (Liu et al., 2015), Further evidence demonstrates an increase in GABA concentrations from the follicular to the mid-and late luteal phase for individuals with PMDD, while their follicular cortical GABA levels were also higher compared with controls (Bixo et al., 2018;Epperson et al., 2002).Complementary to this, a recent clinical trial showed that progesterone antagonism through the use of a selective progesterone receptor modulator treatment, which leads to stable and low levels of progesterone and mid-follicular levels of estrogen, alleviates the mood symptoms of PMDD, in particular irritability and depression (Comasco et al., 2021).In a functional neuroimaging investigation, greater frontocingulate activity during aggressive response to provocation was associated with this treatment in comparison with placebo (Kaltsouni et al., 2021).This can be interpreted as a potential beneficial enhancement of top-down regulation, namely better executive control on emotional reactivity, a mechanism that has been suggested to be altered in PMDD and to underlie poor emotional regulation in mood and anxiety disorders (Huang et al., 2014;Blair et al., 2012;Rive et al., 2013).Further, another line of research highlights the involvement of serotonergic neurotransmission in premenstrual suffering (Roca et al., 2002;Halbreich and Tworek, 1993;Rapkin and Akopians, 2012), especially supported by the efficacy of intermittent dosing with serotonergic antidepressants as treatment for PMDD (Marjoribanks et al., 2013).Further evidence for an interplay between ovarian hormone fluctuations and the serotonergic system in MRMDs comes from a hormonal manipulation study that pharmacologically induced fluctuations in estradiol (Frokjaer et al., 2015).Decreases in estradiol were associated with depressive responses in healthy women, where worse symptom severity was correlated with both the magnitude of estradiol decline and increase in neocortical serotonin transporter binding (Frokjaer et al., 2015), suggesting less available extracellular serotonin.In individuals with PMDD, positron emission tomography (PET) studies have provided in vivo evidence of a relationship with serotonergic neurotransmission markers (Eriksson et al., 2006;Jovanovic et al., 2006).Most recently, serotonin transporter binding increase from the periovulatory to the premenstrual phase was noted in patients with PMDD, in which increased binding correlated with greater symptom severity, corroborating the hypothesis on serotonin depletion impacting depressive mood (Sacher et al., 2023).Nevertheless, evidence on the precise mechanism by which serotonergic neurotransmission may be related to MRMDs symptomatology is scarce.
Altered neurophysiological response to canonical ovarian hormones fluctuations likely induces premenstrual distress (Bäckström et al., 2003;Dubol et al., 2020;Sundstrom-Poromaa et al., 2020).To date, neuroimaging techniques with high spatial resolution have been sparsely employed to study the neural signatures of MRMDs and have been mainly focused on PMDD.These include structural magnetic resonance imaging (sMRI), functional MRI (fMRI), diffusion tensor imaging (DTI) (Gu et al., 2022), and molecular imaging methods such as PET as well as single photon emission computed tomography (SPECT) (Dubol et al., 2020;Dubol et al., 2022;Dubol et al., 2022).Regarding anatomical differences, contradictory findings have been found in small samples with regards to PMDD (Syan et al., 2018;Protopopescu et al., 2008;Berman et al., 2013;Jeong et al., 2012).A recent multi-scale investigation in a relatively large sample, accounting for potential covariates, found differential anatomy of top-down control regions, such as the medial prefrontal, superior prefrontal, and orbitofrontal cortex, as well as superior and inferior parietal lobules, influencing the limbic system and visual processing areas, as features of the PMDD brain (Dubol et al., 2022).These structural characteristics were additionally correlated with the severity of symptoms of PMDD during the luteal phase (Dubol et al., 2022).On the other hand, the sole study on white matter structure points to higher integrity and volume in tracts connected to the limbic brain in patients with PMDD during the symptomatic phase, in comparison with controls (Gu et al., 2022).However, it remains to be investigated whether these are state-or trait-features of the PMDD brain.Moreover, to the best of our knowledge, no studies have thus far been conducted on the structural correlates of PMS and PME.
Functional neuroimaging studies have found differences between females with and without PMDD during emotion processing and cognitive tasks; these differences include altered activity of corticolimbic structures, such as lower activity in the anterior cingulate cortex (ACC) and prefrontal cortex (PFC) subregions, and higher amygdala reactivity to emotional stimuli pointing to increased bottom-up reactivity and depleted top-down cognitive control (Dubol et al., 2020).Menstrual cycle dependent reactivity to different emotion processing or cognitive tasks has been reported to differ in several corticolimbic regions, although the direction of effects greatly depends on the nature of the task used (Dubol et al., 2020).Moreover, current evidence suggests altered functional connectivity patterns between regions of the default mode network (DMN) in persons with PMS compared to healthy controls (De Bondt et al., 2015;Liu et al., 2015), along with some alterations of amplitude in low frequency fluctuations within the precuneus, hippocampal, and inferior temporal cortex in persons with PMS compared with healthy controls (Liao et al., 2017).Preliminary evidence on altered network dynamics has been found, exemplified as increased hippocampal-frontocortical and decreased hippocampal-premotor resting state connectivity associated with PME of bipolar disorder (Syan et al., 2018), as well as with menstrual cycle phase-independent increased functional temporal connectivity with the executive control network in patients with PMDD (Petersen et al., 2019).Nevertheless, despite the recent efforts to document menstrual cycle-dependent variations in intrinsic brain dynamics, alternatively known as functional brain organization (Friston, 2002), in healthy females (Hidalgo-Lopez et al., 2020;Pritschet et al., 2020), the overall literature on dynamic functional connectivity in MRMDs falls short.
Furthermore, functional magnetic resonance imaging relies on signals derived from hemodynamic activity, which (i) is merely an indirect proxy for brain activity and (ii) demonstrates only moderate temporal resolution while incurring (iii) highly non-stationary noise at (iv) high imaging acquisition costs.While spatial resolution is limited for subcortical brain areas as the amygdala, electroencephalography (EEG) recordings afford the measurement of instantaneous neuronal electrical activity, deriving from multi-source synaptic trans-membrane currents with high temporal resolution (Michel and Murray, 2012).
Electroencephalography
In MRMDs, psychological symptoms coincide with phases of the menstrual cycle, providing an opportunity to systematically study the modulation of affective and cognitive state changes (Rubinow, 2021). Mental and behavioural states are implemented in the brain via the synchronous activity of neuronal populations, which represent direct substrates of neural information processing (Mathalon and Sohal, 2015). With EEG, electrodes are fixed to the scalp (Fig. 1) and each channel records electrical activity (Louis et al., 2016). For the brain to produce an electrical signal that is detectable at the scalp, computational modelling has revealed that 10,000-50,000 synchronously active, apically projecting neurons are required (Murakami and Okada, 2006). Thus, EEG non-invasively records electrical activity in the brain, made up primarily of excitatory and inhibitory postsynaptic potentials in vivo, with high temporal and moderate spatial resolution (Biasiucci et al., 2019). This provides the opportunity to non-invasively explore the neural syntax underlying cognitive, affective, and behavioural dynamics in MRMDs. Interestingly, albeit not consistently, menstrual cycle-dependent variations in oscillatory activity have been shown (Thériault and Perreault, 2019).
Time or phase-locked cortical activity, (i.e. the neuronal activity that spontaneously arises at a specific time after a stimulus or falls into phase synchronicity as part of an ongoing oscillation) can be assessed by analysing event related potentials (ERPs), recorded with EEG.ERPs are high voltage fluctuations detected at the scalp relative to a reference electrode and studied based on their polarity and latency and spatial distribution or topography.ERPs provide a direct link between behaviour and stimulus-or task-related activation changes in mental processes such as memory, attention, emotion regulation, and beyond (Bridwell et al., 2018).Alternatively, the EEG can detect amplitude and frequency changes in oscillations while the brain is at rest (task-free), or when evoked (phase-locked) or induced (non-phase-locked).Neural oscillation amplitude and frequency changes are currently most often at a single source or based on coherence between sources.EEG may, therefore, serve as a candidate tool to disentangle neural markers of state retention and transition for MRMDs in a real-time fashion during task performance or at rest (Lee and Dan, 2012;da Silva, 2013).
Potential for EEG in MRMDs
An advantage of EEG for future translation of scientific findings to use as a clinical measurement for MRMDs includes the fact that EEG recordings are noninvasive, painless, and relatively cost effective compared to other brain imaging/recording techniques (for example, PET, MRI and magnetoencephalography (MEG).Advancements in both EEG hardware and analysis pipelines have increased the robustness of detecting brain signal in the recorded data relative to sources of noise; indeed, physical movement, muscle, and electrical line noise, can interfere with and present challenges for interpreting EEG data if not controlled.With such advancements, portable and wireless systems present a growing potential for bedside, outpatient or even home-based monitoring of the functioning brain (Niso et al., 2023).
Because EEG provides a direct index of neuronal activity, it is particularly well suited for accessing cyclical changes in neurotransmission.For example, changes in the power and peak frequency of resting state spectral bands across menstrual cycle phases have been found in several studies (Solís-Ortiz et al., 1994;Brötzner et al., 2014;Creutzfeldt et al., 1976;Bazanova et al., 2014;Makarchouk et al., 2011), with the most commonly reported observation being increased alpha power in the luteal phase (Bazanova et al., 2014;Brötzner et al., 2014).Evoked response paradigms (roving mismatch negativity and visually induced long-term potentiation) have shown that EEG is sensitive to changes in plasticity over the menstrual cycle (Sumner et al., 2018).Studies also show that EEG detects shifts in interhemispheric-transfer time; specifically, in the luteal phase there is an increase in the latency of visually evoked potentials in the hemisphere contralateral to the stimulated visual field (Hausmann et al., 2013).Induced changes in the EEG spectra include statistically significant increases in visually induced gamma oscillation frequency by ~5 Hz in the luteal phase of healthy females (Sumner et al., 2018).Additionally, an MEG study showed reduced suppression response to induced gamma oscillations triggered by moving gratings in PMDD (Manyukhina et al., 2022).Of relevance, induced gamma oscillations have an established relationship with excitation and inhibition (Manyukhina et al., 2022) changes in the brain.
When the above EEG studies on changes in both evoked and induced signal over the menstrual cycle are considered alongside TMS based research (Inghilleri et al., 2004;Smith et al., 2002;Smith et al., 1999), the interpretations consistently point to increased GABAergic inhibition in the healthy luteal phase.This is because the TMS studies cited (Inghilleri et al., 2004;Smith et al., 2002;Smith et al., 1999) discuss their results in light of non-menstrual cycle related TMS findings based on GABAergic and glutamatergic drug interventions.The increase in GABAergic inhibition is hypothesized to be driven by progesterone's metabolite allopregnanolone's effects on the GABA-A receptor α4βδ subunit (Bäckström et al., 2014), which could play an important role in tonic inhibition and neuronal excitability variations.Allopregnanolone/ oestradiol mediation of excitation and inhibition is one of the putative mechanisms implicated in the pathophysiology of MRMDs (Sundstrom-Poromaa et al., 2020;Clemens et al., 2019).However, further research, particularly using interventions, is required to deepen the potential inferences drawn from these data.
EEG can also be used beyond direct neurophysiological effects to study emotion and cognition. A common conceptual model in studying emotion regulation and vulnerability in psychopathology is frontal alpha asymmetry, already investigated in relation to the risk for, and treatment outcomes in, depression and anxiety disorders, among other psychopathologies (Allen and Reznik, 2015; Allen et al., 2018). Frontal alpha asymmetry is most often calculated as the right-left difference in log-transformed spectral power in the alpha frequency band (8-13 Hz) between directly bilateral, frontal electrodes (Davidson, 1988). It has been studied in the context of motivation differences (defined as approach vs. avoidance behaviour in relation to some tasks) (Davidson, 1984; Kelley et al., 2017). High alpha asymmetry refers to lower frontal alpha power in the left relative to the right hemisphere and has been related to approach behavior, namely the willingness to explore reward sensitivity, and to increased positive affect. Low alpha asymmetry refers to the opposite pattern, namely higher left relative to right alpha power, and has been related to withdrawal and negative affect (Davidson, 1984; Kelley et al., 2017).
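As a minimal illustration of this index (channel names, sampling rate, and the synthetic signals below are hypothetical, and the Welch parameters are an arbitrary choice rather than those of any reviewed study), frontal alpha asymmetry for an F4/F3 electrode pair can be computed as follows:

```python
# Minimal sketch of the conventional frontal alpha asymmetry index:
# ln(right alpha power) - ln(left alpha power) in the 8-13 Hz band.
import numpy as np
from scipy.signal import welch


def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density within the alpha band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()


def frontal_alpha_asymmetry(right, left, fs):
    """Positive values: relatively lower left alpha power ('high' asymmetry,
    approach-related); negative values: the opposite ('low' asymmetry)."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))


# Example with 60 s of synthetic data at 256 Hz
fs = 256
t = np.arange(60 * fs) / fs
rng = np.random.default_rng(0)
f3 = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)        # left frontal
f4 = 0.7 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # right frontal
print(frontal_alpha_asymmetry(f4, f3, fs))
```

Studies such as the one by Deng and colleagues described later in this review average this index over several homologous frontal electrode pairs rather than relying on a single pair.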
In a meta-analysis, lower alpha asymmetry has been associated with depression, and dysphoric symptoms generally (Thibodeau et al., 2006).However, caution is advised over its reliability, with subsequent reviews arguing that alpha asymmetry is unlikely to become diagnostically useful despite the large numbers of studies using it (van der Vinne et al., 2017).It is suggested that, with future research, alpha asymmetry may instead be a marker for specific symptoms, such as suicidal ideation, or for differentiating depressive disorders (van der Vinne et al., 2017), or treatment response prognosis (van der Vinne et al., 2017;Baskaran et al., 2012).Furthermore, lower alpha asymmetry has been consistently documented as a response to trauma-related stimuli in people with posttraumatic stress disorder (Meyer et al., 2015).Of relevance to the present review, changes in alpha asymmetry have also been documented across the menstrual cycle (Makarchouk et al., 2011;Huang et al., 2015).
There are numerous other examples of markers of mood disorders in the EEG literature related to clinically relevant phenotypes for MRMDs.
Though not an exhaustive list, relevant examples include the reward positivity ERP in anhedonic depression (Proudfit, 2015) and the error-related negativity evoked response in anxiety disorders (Meyer, 2016). Alternatively, insight can be drawn from changes in EEG data features related to underlying neurobiological mechanisms, for example the bidirectional involvement of gamma and theta band activity as diagnostic classifiers or predictors of treatment response (Baskaran et al., 2012; de Aguiar Neto and Rosa, 2019).

While it is not within the scope of this review to consider all of the literature on EEG markers of mood disorders, it is of relevance to MRMDs that there is empirical evidence demonstrating the sensitivity of EEG to psychiatric symptomatology. Furthermore, there is a wealth of literature attempting to establish the parameters of reliable and specific markers upon which future EEG research on MRMDs can draw, as the symptomatology profile partly overlaps with that of mood and anxiety disorders (Halbreich, 2010). The current critical literature review aimed to summarize EEG findings on MRMDs (Fig. 1), to identify gaps in this field of research, and to discuss future perspectives on utilizing EEG biomarkers as diagnostic or prognostic indicators.
Method
A literature search was performed using PubMed to identify articles investigating the neurophysiological correlates of MRMDs.The following keywords, along with their derivatives, were used for the search: "MRMDs", "menstrual", "mood disorders", "electrophysiology", "EEG", "electroencephalogram", "mood disorders", "PMDD", "PMS", "PME", and "premenstrual mood".We considered studies published in peer-reviewed journals until November 2022.Moreover, we screened the references cited in the included results.All publications meeting the following criteria were included in this critical review: (1) samples with MRMDs alone or in comparison to healthy controls and (2) papers written in English and indexed in PubMed.
Sample
To date, nine studies have investigated MRMDs with EEG (Tables 1-4): five focused on PMS and four on PMDD (of which two potentially on PME of major depression).The sample size ranged from 10 to 113 females (M d = 41.2,IQR = 29), with participants being on average M d = 22.5, IQR = 8.8, years old.All studies were case-control studies (Parry et al., 1999;Baehr et al., 2004;Baker et al., 2007;Baker and Colrain, 2010;Accortt et al., 2011;Lin et al., 2013;Deng et al., 2019;Liu et al., 2017), except for one that compared participants with high PMS symptoms to those with low PMS symptoms (Hou et al., 2022).
Inclusion and exclusion criteria
The extent and depth in evaluating subject features differed substantially across studies and inhomogeneous eligibility criteria were applied.Among the most common inclusion criteria were regular menstrual cycle length and absence of concurrent psychoactive drug use, hormonal contraceptive (HC) use, and neuropsychiatric conditions other than MRDMs.Regarding HC use, six studies excluded participants who reported current use (Parry et al., 1999;Baker et al., 2007;Baker and Colrain, 2010;Deng et al., 2019;Liu et al., 2017;Hou et al., 2022) or use within the past three months (Baker et al., 2007).Two studies provided no information about HC use (Baehr et al., 2004;Lin et al., 2013), while one study included participants that were using HC and did not provide information on the type of HC (Accortt et al., 2011).
MRMDs screening
In the selected papers, MRMDs screening was implemented in various ways, ranging from retrospective self-reports to standardised or non-standardised scales.Several studies opted for structured clinical interviews and prospective self-reported mood ratings (Parry et al., 1999;Baker and Colrain, 2010;Accortt et al., 2011;Deng et al., 2019).One assessed depression symptoms, not targeting MRMD-specific symptomatology (Parry et al., 1999); while two relied on retrospective methods (Lin et al., 2013;Hou et al., 2022).Baehr and colleagues did not provide enough information on the screening process to establish what their method was (Baehr et al., 2004).In two studies using overlapping cohorts, PMS and PMDD were not clearly distinguished, as the authors included persons with both PMDD and severe PMS (Baker et al., 2007;Baker and Colrain, 2010).
More specifically, to establish the MRMD diagnosis, as summarized in Tables 1-4, two studies applied the SCID based on the DSM-IV, using the custom module on PMS symptoms to confirm PMDD diagnosis, and requested the participants to record their symptoms for two menstrual cycles (Baker et al., 2007;Baker and Colrain, 2010).Another study utilized a battery of interviews, including the SCID, Hamilton Depression Rating Scale (HDRS), a custom PMDD interview, and the Daily Record of Severity of Problems (DRSP) scale for reporting of PMDD symptoms (Accortt et al., 2011).In another study focusing on PMS, the participants self-documented their symptoms for two months to confirm the diagnosis, criteria were defined in accordance with the American College of Obstetrics and Gynecology guidelines (Deng et al., 2019).Lastly, three studies used retrospective premenstrual symptom recording, one used the official Premenstrual Syndrome Scale (PMSS) (Hou et al., 2022), another through self-compiled history accompanied by a PMS scale (Liu et al., 2017), and, lastly, one by use of MDQ and BDI-II to confirm PMDD diagnosis (Lin et al., 2013).
Psychiatric comorbidities
Systematic interviews to assess eligibility criteria regarding mental health were described in seven studies (Parry et al., 1999;Baker et al., 2007;Baker and Colrain, 2010;Accortt et al., 2011;Lin et al., 2013;Deng et al., 2019), with six among them explicitly stating the use of standardised questionnaires (i.e., the Structured Clinical Interview for DSM-IV (SCID), Menstrual Distress Questionnaire (MDQ), and Beck Depression Inventory II (BDI-II)) (Parry et al., 1999;Baker et al., 2007;Baker and Colrain, 2010;Accortt et al., 2011;Lin et al., 2013).Six studies excluded participants with ongoing psychiatric conditions (Parry et al., 1999;Baker et al., 2007;Baker and Colrain, 2010;Deng et al., 2019;Liu et al., 2017;Hou et al., 2022), with two of these including individuals with a past history of psychiatric diagnosis or substance abuse (Baker et al., 2007;Baker and Colrain, 2010).Two studies included participants with ongoing comorbid disoders, that is major depressive disorder (MDD) (Baehr et al., 2004) or MDD and dysthymia (Accortt et al., 2011).Two studies stated a history of mental or other serious medical illness in their exclusion criteria (Lin et al., 2013;Deng et al., 2019).In their PMS sample, Baker and colleagues included severe cases that had a history of psychiatric illness or substance abuse during the past year (Baker et al., 2007).
Menstrual cycle assessment
Menstrual cycle regularity was an inclusion criterion in seven studies (Parry et al., 1999;Baker et al., 2007;Baker and Colrain, 2010;Lin et al., 2013;Deng et al., 2019;Liu et al., 2017;Hou et al., 2022).One study did not report information about menstrual cycle regularity assessment (Baehr et al., 2004), while another study did not include regular menstrual cycles in the inclusion criteria (Accortt et al., 2011).Furthermore, high variability was found in the type of menstrual cycle assessments and reporting (Tables 1-4), with some studies using saliva, blood, or urine testing, and others self-reports of menses onset.Baehr and colleagues (Baehr et al., 2004) did not have menses start dates for the majority of their control group, instead separated their EEG sessions by seven days.
Experimental settings and protocol
As presented in Tables 1-4, one study employed a cross-sectional design (Liu et al., 2017) and the other eight studies used withinsubject measurements to assess the effects of menstrual cycle phase (Parry et al., 1999;Baehr et al., 2004;Baker et al., 2007;Baker and Colrain, 2010;Accortt et al., 2011;Lin et al., 2013;Deng et al., 2019;Hou et al., 2022) with the strongest focus on the late luteal phase (Parry et al., 1999;Baker et al., 2007;Baker and Colrain, 2010;Deng et al., 2019;Liu et al., 2017;Hou et al., 2022).In two of the studies, alpha asymmetry measurements were collected for cases and controls irrespective of menstrual cycle phase (Baehr et al., 2004;Accortt et al., 2011), one collected data three times over two cycles for the PMDD cases and over one cycle for the controls (Baehr et al., 2004), while the other study acquired data on four occasions within a two-week period (Accortt et al., 2011).Resting and task-related EEG activity was measured by Liu and colleagues only once with the menstrual cycle phase being modeled as a covariate (Liu et al., 2017).Finally, five studies collected EEG data twice, once in the follicular and once during the luteal phase of the menstrual cycle (Baker et al., 2007;Baker and Colrain, 2010;Lin et al., 2013;Deng et al., 2019;Hou et al., 2022).
One of the studies assessed frontal alpha asymmetry differences between PMDD and controls under depressive induction, recall, recovery, and relaxation (Lin et al., 2013). In the depressive induction condition, participants were guided to bring to mind an event that was followed by depressive feelings and active processing of depressive thoughts; evidence of the effectiveness of the depressive induction paradigm was not provided (Lin et al., 2013). Deng and colleagues (Deng et al., 2019) collected data both at rest and during an emotional picture task. Liu and colleagues (Liu et al., 2017) employed a stress reactivity task to assess changes in alpha activity, as well as the positive affect and negative affect scale and a physiological stress evaluation scale. Baker and Colrain used a polysomnographic investigation and probed daytime sleepiness in cases and controls in a multimodal setting, using a maintenance of wakefulness test, a psychomotor vigilance task, auditory and visual recording of event-related potentials, and psychological scales for mood and sleepiness, in combination with waking EEG measurements (Baker and Colrain, 2010). Features of the sleep EEG were investigated once in the follicular and once in the luteal phase through the same aforementioned polysomnographic investigation, including EEG, electrooculography, and electromyographic recordings (Baker et al., 2007). Daytime testing was specified to take place approximately 90-120 min after waking and typically between 9:00 and 10:00 am (Baker and Colrain, 2010). Last, to study the effects of sleep deprivation in PMDD and controls, Parry and colleagues (Parry et al., 1999) collected EEG measurements based on a partial sleep deprivation protocol on three occasions: a baseline sleep recording during the midfollicular phase, during early (sleep between 3:00-7:00 am) or late (21:00-1:00 am) sleep deprivation in the late luteal phase, and on a recovery night, entailing a session of full-night (22:30-6:30 am) sleep during the late luteal phase.
EEG measurement and processing
The extent to which the experimental arrangements and EEG settings were detailed varied substantially between studies, affecting comparability. For example, in the frontal alpha asymmetry studies, the most common reference was the central electrode Cz (Fig. 1), but not always (Accortt et al., 2011). Analog sampling rates ranged from 128 Hz (Baehr et al., 2004; Baker et al., 2007; Lin et al., 2013) to 1 kHz (Baker and Colrain, 2010; Accortt et al., 2011; Deng et al., 2019; Hou et al., 2022). Some reports lacked a precise enough depiction of the task settings and conditions to permit replication or between-study comparison (Lin et al., 2013; Liu et al., 2017). All studies included information about recording times and lengths and/or epoch segmentation procedures. Only three explained their methods for the spectral analysis in sufficient detail to permit replication (Baehr et al., 2004; Baker et al., 2007; Accortt et al., 2011).
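To make the spectral analysis step concrete, the sketch below extracts absolute band power from a single EEG channel using Welch's method. The sampling rate, epoch length, and window size are illustrative assumptions chosen for demonstration; they do not reproduce the settings of any of the reviewed studies, and the band edges follow the conventional margins listed in the Fig. 1 caption.

```python
# Illustrative band-power extraction from one EEG channel (assumed parameters only).
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz (reviewed studies ranged from 128 Hz to 1 kHz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

def band_power(signal, fs=FS, bands=BANDS, nperseg=2 * FS):
    """Return absolute power per frequency band for one channel via Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=nperseg)
    powers = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    return powers

# Example with 60 s of synthetic data
rng = np.random.default_rng(0)
print(band_power(rng.normal(size=60 * FS)))
```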
Frontal alpha band asymmetry
Four studies investigated alpha asymmetry as defined above. The evidence suggests lower anterior alpha asymmetry at rest in individuals with PMS or PMDD compared with controls (Table 1). Deng and colleagues (Deng et al., 2019) compared alpha asymmetry at 8-12 Hz between mid-late follicular and late luteal phases, and evaluated the mean of the respective differences in spectral power of frontal electrode pairs (FP1/2, AF3/4, F1/2, F3/4, F5/6, F7/8). The authors found a main effect of group, with lower alpha asymmetry at rest (regardless of menstrual cycle phase, here defined as trait) in persons with PMS compared to healthy controls, and a main effect of phase, with lower asymmetry scores in the luteal than in the follicular phase in both groups (η² = 0.11-0.25). No group-by-phase interaction effect was found. Upon visual presentation of emotional pictures (here defined as state), healthy controls presented higher asymmetry to positive than negative pictures, while no such effect was found in the PMS group (Deng et al., 2019).
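For orientation, the sketch below shows the conventional frontal alpha asymmetry index, log-transformed right-minus-left alpha power averaged over homologous electrode pairs. Whether Deng et al. (2019) used the log-transformed or the raw power difference is not specified above, so this is a generic formulation rather than their exact pipeline; the pair subset is drawn from the pairs they report.

```python
# Generic frontal alpha asymmetry index (assumed log formulation, not a study-specific pipeline).
import numpy as np

def alpha_asymmetry(alpha_power, pairs=(("F4", "F3"), ("F8", "F7"), ("AF4", "AF3"))):
    """alpha_power: dict mapping channel name -> alpha band power (positive values)."""
    scores = [np.log(alpha_power[right]) - np.log(alpha_power[left])
              for right, left in pairs]
    # Higher right-side alpha implies lower right-side activity, so positive scores
    # are read as relatively greater left frontal activity (approach-related).
    return float(np.mean(scores))

example = {"F3": 4.1, "F4": 3.2, "F7": 3.8, "F8": 3.5, "AF3": 4.0, "AF4": 3.6}
print(alpha_asymmetry(example))
```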
Adding to the task-related findings in (Deng et al., 2019), Lin and colleagues investigated frontal alpha asymmetry when comparing patients with PMDD to healthy controls, during both the luteal and follicular phase (Lin et al., 2013). The authors describe higher asymmetry between the F4-F3 channels in PMDD as compared with control subjects, only in the luteal phase during depressive induction and a subsequent relaxation period of three minutes. A similar but less pronounced effect was found in the follicular phase (Lin et al., 2013).
Accortt et al. (Accortt et al., 2011) also found lower resting alpha asymmetry indices for frontal channel pairs F1/2, F3/4, F5/6, F7/8 in patients with PMDD versus controls. In this study, measurements were taken on four occasions within a two-week period, regardless of menstrual cycle phase. Additionally, PMDD accompanied by a lifetime MDD diagnosis was associated with lower alpha asymmetry compared to PMDD without MDD (Accortt et al., 2011). However, no effect size was reported for these results. Lastly, Baehr and co-workers (Baehr et al., 2004) analysed the percent of time that the absolute alpha asymmetry (F4 − F3)/(F4 + F3) was above zero at rest in PMDD versus controls during the luteal and "non-luteal" phase. The authors found alpha asymmetry to be lower and self-reported negative affect to be higher than positive affect in the luteal compared to the "non-luteal" phase only for PMDD cases, although no effect size was reported (Baehr et al., 2004).
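The percent-time measure attributed to Baehr et al. (2004) can be sketched as below. The windowing used to obtain the time-resolved alpha power series is an assumption for illustration; only the (F4 − F3)/(F4 + F3) form and the above-zero criterion come from the text.

```python
# Minimal sketch of a percent-time-above-zero asymmetry metric (window length assumed).
import numpy as np

def percent_time_positive(alpha_f4, alpha_f3):
    """alpha_f4, alpha_f3: arrays of alpha power, one value per analysis window."""
    asym = (alpha_f4 - alpha_f3) / (alpha_f4 + alpha_f3)
    return 100.0 * np.mean(asym > 0)

rng = np.random.default_rng(1)
f4 = rng.uniform(2.0, 5.0, size=120)   # e.g. 120 one-second windows (assumption)
f3 = rng.uniform(2.0, 5.0, size=120)
print(f"{percent_time_positive(f4, f3):.1f}% of windows above zero")
```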
To summarise, both studies recording resting-state EEG show lower alpha asymmetry in subjects with PMS and PMDD (Accortt et al., 2011; Deng et al., 2019), with one study pointing to a luteal phase-specific effect (Baehr et al., 2004). Task-related asymmetry produced mixed results that cannot be compared across the tasks and menstrual cycle phases, thus needing replication to determine reliability.
Stress reactivity
As reported in Table 2, one study focused on the EEG signatures of stress reactivity in PMS. To assess central stress reactivity using a biofeedback system and a stress evaluation test battery, Liu and colleagues (Liu et al., 2017) compared single-channel (Cz) 8-13 Hz alpha band power between persons with and without PMS, accounting for menstrual cycle phase (follicular and luteal phases) as a covariate. The authors found higher alpha power, higher negative affect, and lower positive affect in individuals with PMS compared to healthy controls. A main effect of group (PMS vs. controls, η² = 0.410) and an interaction of group-by-test condition on alpha activity during the EEG stress evaluation paradigm were found (η² = 0.117), driven by higher alpha power under rest as well as during the attention and cognition tests in the PMS group compared with the controls (Liu et al., 2017).
Circadian characteristics
Regarding sleep architecture and circadian rhythmicity, three studies investigated sleep characteristics in regular sleep and after sleep deprivation, as well as alterations in daytime wakefulness, in PMS and PMDD (Table 3). The results can be seen as complementary, suggesting menstrual cycle phase-independent alterations, including higher low-range spectral power (Baker and Colrain, 2010), altered delta and theta incidence (Baker et al., 2007), and increased sleep quality as a response to sleep deprivation in MRMDs (Parry et al., 1999).
Comparing the EEG sleep architecture of a group of persons with either PMS or PMDD to healthy controls in the follicular and late luteal phase, power spectral and period amplitude analyses of two-channel recordings (C3-A2, C4-A1) yielded menstrual cycle-dependent changes, in both directions, in non-rapid eye movement (NREM) EEG features (Baker et al., 2007). For instance, absolute and relative luteal spectral power were elevated in the low beta (15-≤23 Hz) and sleep spindle range sigma (12-≤15 Hz) frequency bins, namely the rhythmic sigma waves characterizing stage 2 sleep (De Gennaro and Ferrara, 2003), in both groups compared to the follicular phase. Sigma wave incidence (waves per epoch) and amplitude were increased in the luteal phase for all subjects, while the PMS/PMDD group displayed lower incidence of delta (0.3-≤4 Hz) activity and higher incidence of theta (4-≤8 Hz) oscillations, regardless of menstrual cycle phase. Moreover, sleep efficiency, measured as the percentage of sleep time during time in bed, was significantly worse in participants with PMS/PMDD, and REM-sleep onset latency was longer compared to controls, independent of cycle phase. Both groups showed more frequent wakefulness after sleep onset and increased microarousals in the luteal phase compared to the follicular phase, while no difference between the two groups was noted in total time in bed (Baker et al., 2007).
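Sleep efficiency as defined here (percentage of time in bed spent asleep) reduces to a simple computation over a scored hypnogram. The 30-s epoch length and the stage coding in the sketch below are assumed conventions for illustration, not those of the reviewed polysomnography studies.

```python
# Sleep efficiency from an epoch-scored hypnogram (epoch length and stage codes assumed).
import numpy as np

EPOCH_SEC = 30                       # assumed 30-s scoring epochs

def sleep_efficiency(hypnogram):
    """hypnogram: per-epoch stage codes covering time in bed (0 = wake, >0 = any sleep stage)."""
    stages = np.asarray(hypnogram)
    sleep_epochs = np.sum(stages != 0)   # every non-wake epoch counts as sleep
    return 100.0 * sleep_epochs / stages.size

hyp = [0, 0, 1, 2, 2, 3, 3, 2, 5, 5, 0, 2, 2, 3, 5]  # toy 7.5-minute example
print(f"Sleep efficiency: {sleep_efficiency(hyp):.1f}%")
```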
Data from the latter experiment were then used to test whether individuals with PMS/PMDD display electrophysiological and behavioural differences in daytime sleepiness compared to healthy controls, across cycle phases (Baker and Colrain, 2010). Both groups displayed increased power in the delta/theta and alpha frequencies in the luteal phase when compared to the follicular phase. Mood and fatigue ratings indicated premenstrual deterioration in patients with "severe PMS" compared to controls. Although no phase-dependent (state) effect of severe PMS was found, differences regardless of phase (trait) were observed. Participants with severe PMS performed worse on a psychomotor vigilance task (slower reaction time and more lapses). Using EEG, significantly higher power across spectral bands in the low-beta range (12-16 Hz) and lower P300 amplitude in response to auditory and visual stimuli were observed. The relationship between EEG and behavioural findings was not explored in this paper.
Lastly, a cross-over experiment investigated the EEG sleep architecture under an early (sleeping time 3:00 am-7:00 am) and late (sleeping time 9:00 pm-1:00 am) sleep deprivation intervention in healthy controls and patients with PMDD. The authors demonstrated facilitated recovery in PMDD after the partial sleep loss intervention (Parry et al., 1999). Comparing baseline luteal recordings and post-sleep deprivation recovery nights, patients with PMDD showed augmented sleep efficiency, shorter sleep onset latency, and less time awake, independent of sleep deprivation type. When compared with controls, the PMDD sample was found to have increased sleep time and efficiency, shorter sleep latency, less time spent in the light sleep stage (stage 1), and less awake time. Positive correlations between changes in depressive scores (HDRS) from baseline and changes in EEG (sleep variables) from baseline to the sleep deprivation recording were also observed in 9 out of 16 PMDD intervention responders between the luteal baseline and the early sleep deprivation recovery night. Sleep variables comprised (a) fast REM sleep latency, (b) time awake after sleep onset, (c) stage 2 sleep, and (d) REM density during the first REM phase. Complementary to the findings in (Baker et al., 2007), both case and control subjects were found to display longer REM latencies and reduced REM sleep in the luteal compared with the follicular phase, while only healthy controls showed increased stage 1 sleep in the luteal compared to the follicular phase (Parry et al., 1999).
In summary, three studies assessed the circadian and sleep characteristics in MRMDs. Baker and colleagues observed higher power in the low-beta spectral range in individuals with severe PMS compared with controls, irrespective of menstrual cycle phase (Baker et al., 2007; Baker and Colrain, 2010). Additionally, higher incidence and amplitude of theta and lower of delta oscillations, regardless of menstrual cycle phase, were noted in the PMS/PMDD sample compared with the control subjects (Baker et al., 2007), while after a night of sleep or sleep deprivation, EEG revealed higher luteal phase spectral power in PMDD patients for the low frequency bands, delta, theta, and alpha, along with fatigue and mood deterioration compared with the follicular phase (Baker and Colrain, 2010). Longer REM latencies and reduced REM sleep were found in the PMDD sample in the luteal compared with the follicular phase, while sleep efficiency and recovery after sleep deprivation were improved in females with PMDD compared with controls (Parry et al., 1999).
Resting spectral dynamics
One study (Table 4) examined the spontaneous variation in delta (1-3 Hz), theta (4-7 Hz), and beta (13-30 Hz) power, as well as the changes in relative slow/fast wave proportions from the follicular to the luteal phase, along with differences between individuals with moderate versus severe PMS (Hou et al., 2022). Hou and colleagues report a higher frontal and central delta/beta ratio (DBR) in persons with severe PMS in the luteal phase compared to the follicular phase (Table 4). No such result was found for theta/beta ratios, and overall differences between the high and moderate PMS groups were not significant. The luteal DBRs in the high PMS group were further found to be positively correlated with scores of self-blame and rumination (Hou et al., 2022).
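The delta/beta ratio used by Hou et al. is simply the ratio of two band powers, as sketched below per channel. The band edges follow the definitions given in the text (delta 1-3 Hz, beta 13-30 Hz), while the sampling rate and the electrode subset are illustrative assumptions and not the study's own settings.

```python
# Per-channel delta/beta ratio from Welch band powers (sampling rate and channels assumed).
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate

def delta_beta_ratio(signal, fs=FS):
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    delta_mask = (freqs >= 1) & (freqs <= 3)
    beta_mask = (freqs >= 13) & (freqs <= 30)
    delta = np.trapz(psd[delta_mask], freqs[delta_mask])
    beta = np.trapz(psd[beta_mask], freqs[beta_mask])
    return delta / beta

rng = np.random.default_rng(2)
frontal = {ch: rng.normal(size=60 * FS) for ch in ("F3", "F4", "Fz")}  # placeholder signals
print({ch: round(delta_beta_ratio(sig), 2) for ch, sig in frontal.items()})
```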
Discussion
This review summarizes the current EEG findings in MRMDs, while highlighting the variability in the methodology of the few existing studies. Categorised by diagnosis, four studies focused only on PMDD (with two of them potentially recruiting subjects with PME of MDD rather than PMDD comorbid with MDD), three on PMS only, and two on PMS and PMDD. In general, syndrome definitions, diagnostic procedures, and recording settings were not comprehensively described in all reviewed papers and varied greatly, which makes it difficult to compare the findings.
As a major focus of the reviewed studies has been on interhemispheric frontal alpha asymmetry, and consistent results are beginning to emerge in the literature, these results are discussed first. In general, the power of alpha oscillations plays a functional role in cortical inhibition (Klimesch et al., 2007), where alpha power is inversely related to cortical activity. Frontal alpha asymmetry has been reliably used as an electrophysiological marker that can capture state and trait indices of emotion processes (Coan and Allen, 2003), with higher asymmetry values reflecting greater left frontal activity and lower values reflecting greater right frontal activity. Interpreted as balancing approach-avoidance motivation (Davidson, 1998), high frontal alpha asymmetry scores are understood to be an indicator of positive affect and approach drive (Davidson, 1984). On the other hand, low alpha asymmetry scores have been related to avoidance and a negative emotional state (Harmon-Jones and Allen, 1997). While previous studies have demonstrated consistent frontal EEG alpha asymmetries in depressed individuals (Allen and Kline, 2004; Hagemann et al., 2002; Tomarken et al., 1992), our review aimed to extend this literature to studies specific to MRMDs. The two studies published so far suggest lower alpha asymmetry, assessed at rest using frontal electrodes, in persons with PMS/PMDD versus healthy controls (Accortt et al., 2011; Deng et al., 2019), and luteal alpha asymmetry did not vary depending on emotional stimuli (Deng et al., 2019). In one study, alpha asymmetry was found to decline from the follicular to the luteal phase, exclusively in the MRMD group (Baehr et al., 2004). However, in one of the studies assessing alpha asymmetry, the opposite pattern was observed, with higher alpha asymmetry in the luteal phase, potentially attributable to the fact that this was noted under a depressive induction state (Lin et al., 2013). The authors further discussed that sample-related factors could have influenced this result, noting that PMDD diagnosis was retrospectively confirmed and cannot be assured, thus highlighting the complexity of MRMD research (Lin et al., 2013). Overall, albeit limited, the evidence for lower alpha asymmetry in persons with MRMDs seems to reflect both a general (Accortt et al., 2011; Deng et al., 2019) and a luteal phase-specific (Baehr et al., 2004) sensitivity in the avoidance motivational system.
Although the neural mechanism of alpha asymmetry is not conclusively defined, the main assumption is that alpha power is inversely related to frontal cortical activity (Smith et al., 2017). Functional MRI studies focusing mainly on depression and emotional regulation have investigated the neural substrates of the lateralized motivation-approach model described in (Davidson, 1984; Davidson, 1998) and observed effects on regions such as the dorsolateral PFC, insula, amygdala, and ACC, amongst others (Zotev et al., 2016; Herrington et al., 2010). These regions are abundant in estrogen and progesterone receptors (Brinton et al., 2008; McEwen, 2002). Further, ovarian hormone levels have repeatedly been shown to impact some of the main neurotransmission pathways, like the GABAergic, glutamatergic, dopaminergic, and serotonergic systems, whose receptors are also widespread in the aforementioned regions (Barth et al., 2015). Even though the neuropathophysiological mechanism underlying the psychological symptomatology of MRMDs is not yet fully clarified, the altered prefrontal activation patterns observed in MRMDs could be interpreted as reward-motivation and cognitive control deficiencies that relate to a bias towards negative stimuli and poorer coping skills coupled with attentional shifting.
When discussing frontal alpha asymmetry as a potential index of altered emotional processing in MRMDs, limitations found in the wider literature on mood and anxiety disorders have to be considered. As a construct, alpha asymmetry is sensitive to factors such as the diagnostic and psychiatric properties of the sample, the electrophysiological recording settings, and the study design (Thibodeau et al., 2006; van der Vinne et al., 2017). As a result, the low number of studies on MRMDs and the variability in methodologies pose a challenge in reliably inferring a causal relationship between frontal asymmetry and negative affect in MRMDs. The majority of studies have been conducted with a low number of participants (Baehr et al., 2004; Baker and Colrain, 2010; Lin et al., 2013) and within a narrow age range (Liao et al., 2017; Accortt et al., 2011; Lin et al., 2013; Deng et al., 2019). The studies also showed substantial diversity in sample symptomatology and clinical characteristics (Baehr et al., 2004; Baker et al., 2007; Baker and Colrain, 2010; Accortt et al., 2011). Diagnostic criteria varied broadly, with two studies including patients with PMDD comorbid with MDD, potentially reflecting PME (Baehr et al., 2004; Accortt et al., 2011). The different studies might thus have covered different combinations of mood symptoms on the MRMD spectrum.
Addressing specifically the contradictory results from Lin and colleagues (Lin et al., 2013): as the authors mention, the task might not have specifically induced depressive mood, and instead induced a stress response. Therefore, it is plausible that asymmetry as an index of emotional state could vary depending on symptom category; inconsistencies in frontal asymmetry results are also present in studies on depression (van der Vinne et al., 2017). Future EEG studies could experimentally manipulate states of emotional processing to resolve this, using standardized measurements in samples with more consistent symptom profiles. Altogether, these findings (Baehr et al., 2004; Accortt et al., 2011; Lin et al., 2013; Deng et al., 2019) warrant replication on a larger scale to confirm the direction of alpha asymmetry at rest and during tasks as a possible feature of MRMDs.
Sleep EEG findings demonstrate menstrual cycle-related changes and group differences in spectral properties. On the one hand, the luteal phase might generally represent a phase of increased vulnerability to wakefulness and arousal, reflected by increased power in fast waves (Baker et al., 2007) and longer REM sleep duration and latency (Parry et al., 1999). On the other hand, alterations in theta and delta wave incidence (waves per epoch) (Baker et al., 2007) might reflect global impairments in sleep quality, as reported by persons with severe PMS but not controls. In particular, altered patterns of delta wave sleep would largely affect the homeostatic regulation of synaptic scaling (Tononi and Cirelli, 2006) and glymphatic activity (Hablitz et al., 2019) in the sleeping brain. Disturbance of night sleep might result in compromised daytime wakefulness and increased fatigue in persons with severe PMS, leading to functionally impaired behavioural performance (Baker and Colrain, 2010). Maladaptive task activity could be the result, putatively represented by increased cortical idling (Liu et al., 2017) and disturbances in resting-state spectral dynamics (Hou et al., 2022). Complex regulative behaviour and stress resilience might depend on adaptive tuning of fast and slow oscillatory activity (Ivaskevych et al., 2019; Snyder and Hall, 2006; Knyazev, 2007), which could potentially be recovered through luteal sleep intervention in PMS/PMDD (Baker et al., 2007). However, a realistic representation of stress has to be modelled and evaluated experimentally to systematically assess the response to stress and premenstrual adversity. This could ultimately help to tailor prevention and rehabilitation strategies in persons with MRMDs.
Interestingly, in two of the reviewed studies, potential sleep quality disruptions in MRMDs do not seem to be an effect of menstrual cycle phase (Parry et al., 1999; Baker et al., 2007). This means that the observed effects could presumably be disorder-specific (trait-like) and not necessarily attributable to ovarian hormone cyclicity throughout the cycle. Nevertheless, considering the inconsistencies in methods between studies, this interpretation needs to be supported by further investigation. However, functional neuroimaging findings also support this trait-like vulnerability in relation to neural correlates of PMDD; indeed, independently of menstrual cycle phase, differential dorsolateral prefrontal activation has been observed in PMDD patients compared with controls (Baller et al., 2013; Toffoletto et al., 2014). Taking it one step further, lower relative delta, theta, and alpha power have been found in perimenopausal subjects compared to non-depressed controls, with those alterations being associated with vigilance and depression scores (Saletu et al., 1996), which would imply an overall female vulnerability to depressive states and not necessarily to cycle-dependent fluctuations. At present, findings on regularly cycling individuals and rodents indicate menstrual/estrous cycle phase effects on low-frequency band oscillations, such as the theta, alpha, or delta bands, although the pattern of change is not consistently related to one cycle phase or endocrine state (Thériault and Perreault, 2019).
Taken together, these findings provide first evidence for alterations in electrophysiological markers of menstrual cycle-related mood disorders. Further replication and methodological advancement could test the robustness and scope of this first evidence. The following critical discussion aims to facilitate appraisal of the presented evidence and highlights pivotal aspects in the complex study of female psychoneuroendocrinology.
MRMD symptom assessment
Regarding the requirements for studying and diagnosing MRMDs (Epperson et al., 2012; Schmalenberger et al., 2021), the reviewed studies did not consistently apply validated questionnaires specifically targeting MRMD symptomatology and its cyclical nature, such as the validated DRSP (Endicott et al., 2006), or diagnostic protocols such as the Carolina Premenstrual Assessment Scoring System (C-PASS) (Eisenlohr-Moul et al., 2017). It is thus important for future studies to consider the nature and timing of symptoms in MRMDs, including individual differences in symptom profiles, and to employ appropriate inclusion and exclusion criteria (Gehlert et al., 2009). Even though the included studies were categorized according to diagnosis, how the diagnosis was determined was unclear in four studies. While two studies focused on PMDD comorbid with MDD, it is highly likely that the participants could have been suffering from PME, according to literature definitions of the term (Kuehner and Nayman, 2021). Other than mentioning the comorbidity type included in their samples, the authors did not distinguish between PMDD and MDD, nor did they provide additional information on the cyclicity and possible overlap of the symptoms of the two disorders (Baehr et al., 2004; Accortt et al., 2011). Similarly, in two other studies, even though participants were given a PMDD diagnosis, subjects who did not meet the criteria but instead had pronounced PMS symptomatology were included in the sample (Baker et al., 2007; Baker and Colrain, 2010). To what degree this impacted the results is difficult to assess. Future research in the field of biological psychiatry might also have to consolidate the features of the control groups to adequately interpret the results.
EEG methods
In the presented studies, the comparability of EEG assessments between experiments is hampered by high variability in EEG methods as well as sparse and variable reporting practices. For example, for the alpha asymmetry work in particular, there are differences in referencing. Given that referencing methods affect the spatial and amplitude-based features of EEG activity, they also impact the comparability of outcomes (van der Vinne et al., 2017). Further, re-referencing to the average is more common and in fact recommended for alpha asymmetry research going forward (van der Vinne et al., 2017), which ought to include MRMD studies. This issue of methodological and reporting variability in biological psychiatry research is not limited to the field of MRMDs or even to EEG; however, in order to translate future work into potential clinical biomarkers for MRMDs, more consistency in approach and reporting is needed (Bixo et al., 2018; Liu et al., 2015; Epperson et al., 2002; Halbreich and Tworek, 1993; Sundstrom-Poromaa et al., 2020; Muthukumaraswamy, 2014; Hantsoo and Epperson, 2020).
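Re-referencing to the common average amounts to subtracting the instantaneous mean of all channels from every channel, as sketched below on a plain channels-by-samples array. Real pipelines (e.g. MNE-Python) provide this as a built-in step; the array layout here is an assumption for illustration.

```python
# Minimal common-average re-referencing of a (n_channels, n_samples) EEG array.
import numpy as np

def rereference_to_average(data):
    """data: array of shape (n_channels, n_samples), originally referenced e.g. to Cz."""
    return data - data.mean(axis=0, keepdims=True)  # subtract the mean across channels at each sample

rng = np.random.default_rng(3)
raw = rng.normal(size=(32, 1000))        # placeholder: 32 channels, 1000 samples
avg_ref = rereference_to_average(raw)
print(avg_ref.mean(axis=0)[:5])          # ~0 across channels at every sample
```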
Strengths, limitations, and potential of EEG for future use in MRMD research
The current literature on MRMDs and EEG provides a basis to leverage the potential strengths of this technique. One of the major advantages of EEG is that it is a direct measure of neurophysiology and has demonstrated sensitivity to the dynamic and phasic sex hormone fluctuations throughout the menstrual cycle. The variations in excitatory and inhibitory neurophysiology mediated by the major neurotransmitter systems glutamate and GABA, respectively, throughout the menstrual cycle have been described (Smith et al., 2002; Harada et al., 2011). This review demonstrates that the potential of EEG to make detailed inferences on the biological mechanisms underlying MRMDs has not yet been fully realized. Future studies could draw upon the psychopharmacology research literature that manipulates key systems implicated in MRMDs, like serotonin, GABA, and glutamate.
Currently, the most utilized imaging tool for discerning the emotion- and cognition-centered neural correlates of MRMDs has been fMRI, followed by sMRI. Recent advances in psychiatric neuroscience have suggested that alterations in resting-state network activity characterize certain psychiatric disorders (Canario et al., 2021; Sha et al., 2019) in a symptom-domain-specific manner (Andreano et al., 2018; Doucet et al., 2020; Xu et al., 2019), with sensitivity to sex hormone fluctuations (Dubol et al., 2021; Creutzfeldt et al., 1976; Huang et al., 2015; Andreano et al., 2018). In alignment with contemporary views of the emotional brain (Pessoa, 2018), the study of large-scale functional networks might help to advance the understanding of menstrual cycle-related mood disorders, given the ever-changing endocrine states along the reproductive lifespan of a woman (Dubol et al., 2021; Andreano et al., 2018; Brinton et al., 2015; Hare et al., 2020).
Though sparse, the majority of fMRI results on PMDD report alterations in corticolimbic system activation during affective tasks, as reviewed in (Dubol et al., 2020). Similar evidence suggests that network dynamics may play a role in PMS, as supported by recent evidence of altered thalamocortical connectivity in persons with PMS (Liu et al., 2018; Meng et al., 2021; Long et al., 2022; Liu et al., 2022); however, studies on PME are even scarcer. As EEG lacks spatial resolution for subcortical regions that are suspected to have etiological relevance for MRMDs, EEG can complement research in this context, though it is unlikely to replace the need for further MRI studies. Future MRMD research using EEG could focus on cycle-related electrophysiological changes in the dorsolateral PFC, given that frontal regions serve as a hub for mood regulation (Ray and Zald, 2012) and are highly accessible to surface EEG. Speculatively, potential clinical translation opportunities in PMDD include alpha waves in the PFC, which could be a potential target for EEG neurofeedback. Differential corticolimbic activation in response to emotional stimuli is proposed to distinguish the PMDD brain, namely enhanced amygdala and diminished frontocortical function (Dubol et al., 2020). Moreover, the most efficacious target for FDA-approved TMS treatment for treatment-resistant depression is the dorsolateral PFC (Cantone et al., 2017), thus this brain region may be most responsive to fast neuroplastic changes in patients with a disorder of overlapping symptomatology. In line with this, Riddle and colleagues, employing an alternating alpha-power current stimulation paradigm in patients with PMDD, observed decreased prefrontal alpha amplitude in the luteal phase relative to the follicular phase, thus corroborating phase-specific neural activity alterations as a feature of PMDD (Riddle et al., 2022).
Alternatively, if future studies use higher-density EEG systems than those employed in many of the studies investigating MRMDs so far, spectral analyses of MRMDs could become more advanced, employing source-based, effective, and functional connectivity measures. For instance, resting-state functional connectivity alterations using EEG have already been widely reported in research on depression (Petersen et al., 2019; Kaiser et al., 2016; Syan et al., 2018; Lanius et al., 2010; Miljevic et al., 2023). In addition to resting-state studies, ERP studies in response to emotion- or cognition-based tasks may also assist in disentangling the shared and unique phenomenology among MRMDs themselves and in comparison with other mood disorders.
Conclusion
In conclusion, the present overview systematically assessed the current state of EEG research on MRMDs. Findings on frontal asymmetry in the alpha frequency were rather consistent; two out of two studies showed preliminary evidence for lower asymmetry at rest associated with MRMDs in the luteal phase. In terms of sleep and circadian rhythms, the results from the three reviewed studies point to generalized differential sleep dynamics being associated with MRMDs regardless of menstrual cycle phase. Recent evidence demonstrates differences in other frequency bands, as measured by slow-to-fast wave ratios, between individuals with low- and high-severity MRMDs. This pioneering work could be followed up by well-powered studies using cutting-edge methodology and analysis designs to expand the nascent literature on EEG spectral, ERP, and connectivity changes in MRMDs across the menstrual cycle in comparison with healthy controls. The conjoint implementation of psychophysiological and behavioural paradigms will contribute to advancing our understanding of MRMDs.
Fig. 1.
Fig. 1. Summary of the main electrophysiological measurements in relation to MRMDs. Upon daily mood symptom monitoring across the menstrual cycle, premenstrual syndrome (PMS), premenstrual dysphoric disorder (PMDD), and premenstrual exacerbation of another medical condition (PME) are diagnosed if their occurrence is concomitant with the premenstrual phase. The electroencephalogram (EEG) permits the measurement of the oscillatory activity of large populations of neurons as electrophysiological signals, detected via scalp electrodes on the head. Signals are captured as microvolt (μV) differences in charge in the temporal range of milliseconds, commonly reported in the frequency domain (range 0.5-40 Hz). Up to 256 electrodes can be placed on the scalp with a peripheral reference electrode. Electrodes are commonly placed according to the 10-20 system with the following coding: A, auricle; C, central; F, frontal; Fp, frontal pole; O, occipital; P, parietal; T, temporal. Electrophysiological rhythms are associated with patterns of behaviour and information processing (e.g., level of alertness, arousal, sleep, and memory) in a frequency-specific manner. More specifically, EEG is commonly assessed within the following frequency margins: gamma: 30-80 Hz; beta: 12-30 Hz; alpha: 8-12 Hz; theta: 4-8 Hz; delta: 1-4 Hz. To date, nine studies have investigated the relationship between MRMDs and EEG, predominantly alpha asymmetry in the frontal region. The results, limited by the sparsity of studies and hampered by study design limitations, are inconclusive.
Table 1
Summary of the study characteristics and findings on EEG measurements in relation to MRMDs: frontal alpha band asymmetry studies.
Table 2
Summary of the study characteristics and findings on EEG measurements in relation to MRMDs: stress reactivity study.
Table 3
Summary of the study characteristics and findings on EEG measurements in relation to MRMDs: circadian characteristics studies.
Table 4
Summary of the study characteristics and findings on EEG measurements in relation to MRMDs: resting spectral dynamics.
"year": 2024,
"sha1": "a224387507e39d587329e9b38913ccd640b5cf22",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.yfrne.2023.101120",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2f47990f205d7b3c176d24e8ea14353ac1eb389d",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Supplementations of Low Doses of Fish Oil: Effects on Clinic and Ambulatory Blood Pressure Levels in Treated Hypertensive Postmenopausal Women: Sex Hormones Influence
In the present study, we evaluate the changes that fish oil intake (as an adjunct to treatment) produces in women's blood pressure (BP) levels and the possible changes in their sex hormones. For this purpose, two groups were constituted: one group took fish oil for 6 months (SG), and the other group did not (CG). Anthropometric, dietary and blood pressure assessments, together with sex hormone determinations by RIA techniques, including β-estradiol, total testosterone, free testosterone and dehydroepiandrosterone (DHEA), were carried out at the third and sixth month after starting (or not) the intake of fish oil, and 3 months after the end of the supplementation. Our study shows improvements in blood pressure only in the group that received the supplementation. In these women, an increase in total weight was noticed, accompanied by a decrease in the skinfolds. Regarding the hormones, the high DHEA levels in both groups are noteworthy, but fish oil intake generated a very significant fall in DHEA. Total testosterone concentrations also decreased significantly. Therefore, we can conclude that the intake of low doses of fish oil produces a decrease in BP, and that the decline in androgenic hormones (DHEA and total testosterone) can play an important role in this decrease.
Introduction
More than 25% of the female adult population worldwide is hypertensive [1]. Elevations in blood pressure (BP) in women are related to cardiovascular risk, with the prevalence of hypertension being particularly high among women aged ≥ 60 years [2,3]. In the United States, approximately 75% of postmenopausal women are hypertensive [4]. Hypertension is often accompanied by other cardiovascular risk factors, e.g., obesity, dyslipidemia, and diabetes mellitus [5]. It is noteworthy that the prevalence of hypertension-related cardiovascular complications is higher in postmenopausal women than in age-matched men. Indeed, these complications represent the leading cause of death in women [6].
In premenopausal women, endogenous estrogens maintain vasodilatation and thus contribute to BP control. Aging and the loss of endogenous estrogen production after menopause are accompanied by increases in BP, contributing to the high prevalence of hypertension in older women [7]. Cross-sectional [8,9] but not longitudinal [10] studies showed a significant increase in systolic (SBP) and diastolic blood pressure (DBP) following the onset of menopause. Staessen et al. [8] reported a four-fold increase in the incidence of hypertension in postmenopausal women (40% in postmenopausal women vs. 10% in premenopausal women).
However, relatively little is known regarding the influence of androgens on BP and cardiovascular disease. There are reports of lower circulating testosterone and androstenedione levels in hypertensive men [11,12], and circulating testosterone levels in men with coronary artery disease [13] or myocardial infarction [14] are either unchanged or decreased.
Furthermore, women suffering from chronic anovulation and displaying hypertestosteronemia have an increased risk of coronary artery disease and myocardial infarction. Moreover, men with testosterone deficiency following orchiectomy present a slightly lower mortality from heart disease, suggesting that lower testosterone may protect against cardiovascular disease [14]. In female SH rats, testosterone increases BP when they are ovariectomized [15,16].
N-3 PUFA supplements may effectively reduce BP, but their use has been limited by the requirement for high doses, with their attendant side effects. It is not clear whether n-3 fatty acids would significantly lower BP in patients with high-normal BP, nor whether lower doses would be effective and free of side effects [17,18]. Few studies have examined the effects that supplementation with these fatty acids may have on hormones that in turn can affect BP.
The aim of our study is to evaluate the effects of low doses of n-3 fatty acids (1.5 g/day) in hypertensive women who followed different antihypertensive therapies, with special focus on the effect that n-3 fatty acids have on sex hormones that may be linked to the pathogenesis of hypertension.
Study population
Fifty-five hypertensive postmenopausal women were voluntarily recruited from the San Fernando Health Centre in Badajoz. All participants were informed and signed a written informed consent form. This study was conducted in accordance with the guidelines proposed in the Declaration of Helsinki, and the study protocol was reviewed and approved by the Ethics Committee, University of Extremadura, Spain.
For the study, all participants were divided into two groups: the supplementation group (SG) (n=28), which received supplementation with fish oil fatty acids, and a control group (CG) (n=27), which received no supplementation. As an inclusion criterion, they were required to present at least 12 months of amenorrhea to be considered postmenopausal. The general features of the selected groups are shown in Table 1. The supplementation group was given a dietary supplement of fish oil capsules as a co-adjuvant treatment, containing oil of salmon, trout, mackerel, herring and sardine (equivalent to 21% EPA and 11% DHA), for six months.
Both groups were assessed at the 3rd and 6th month after beginning the period of supplementation or no supplementation, and 3 months after the end of the study. Dietary and anthropometric assessments, blood pressure measurements and blood sampling were performed.
All hypertensive patients were diagnosed according to the criteria of the fifth Joint National Committee on Detection, Evaluation, and Treatment of High Blood Pressure of 1993 (JNC-V) (SBP>140; DBP>90 mm Hg), and included in the hypertension register of the Health Centre. Antihypertensive pharmacological treatments were distributed homogeneously between groups and consisted of: diet (n=8), diuretics (n=16), calcium antagonists (n=10), inhibitors of the angiotensin-converting enzyme (n=6), calcium antagonists plus diuretics (n=10), and inhibitors of the angiotensin-converting enzyme plus calcium antagonists plus diuretics (n=5). A change of pharmacological treatment during the experimental period was an exclusion criterion.
Dietary control
All women followed a similar diet of 1500 kcal with a low sodium content. Prior to the start of the dietary supplementation, a dietary study was performed for all women in the studied population to assess the intake of n-3 fatty acids and lipids. Dietary intake was monitored by the same dietitian throughout the study, with completion of a 3-day diet record (2 weekdays and one weekend day) at baseline, repeated at the end of the 3rd and 6th month of the study, and at the 3rd month after the intervention period. The dietitian determined whether the participants' usual eating habits were maintained and reminded them to make no changes. For its evaluation we used the food tables prepared by Moreiras et al. [19].
Anthropometric measurements
The morphological characteristics of the participants were measured in the afternoon and always at the same time. Body height was measured to the nearest 0.1 cm using a wall-mounted stadiometer (Seca 220), and body weight was measured to the nearest 0.01 kg using calibrated electronic digital scales (Seca 769), with subjects barefoot. Body mass index was calculated by dividing the weight (in kg) by the height squared (in m²). Body fat composition was evaluated from skinfold thickness measurements (Harpenden calliper) taken by the same operator.
Blood pressure measurements
Hypertension was defined as blood pressure ≥ 140 mmHg and/or ≥ 90 mmHg, and/or use of antihypertensive medication. Blood pressure was measured in the sitting position on the right arm, and the mean of three recordings, at least 3 min apart, was registered. This determination was made with a mercury sphygmomanometer (Riester, 660-2-306). Blood pressure was determined at baseline and every 15 days in the nursing office at the Health Centre, always by the same person.
Blood sample measurements
A blood sample was taken from all women from the antecubital vein after overnight fasting, deposited in glass tubes containing lithium heparin and immediately centrifuged (2500 rpm for 10 min). The separated plasma was stored at -70°C until analysis.
Total testosterone (TT), free testosterone (FT), dehydroepiandrosterone (DHEA) and β-estradiol (E2) were determined with commercial RIA kits. Aromatase activity was estimated as the ratio of β-estradiol to total testosterone.
Statistical procedures
All data were analysed using SPSS version 17.0. The results are expressed as mean ± standard deviation, and a minimum significance level of 5% (p<0.05) was required for comparisons. The normality of the variables' distribution was assessed using the Shapiro-Wilk test and Levene's test. We applied the Wilcoxon test to compare the changes throughout the study and the t-test for the comparison of both groups.
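The same workflow can be reproduced outside SPSS; the hedged sketch below uses SciPy with placeholder data to illustrate the sequence of normality and homogeneity checks followed by the paired and between-group comparisons described above. Variable names, group sizes and the 0.05 alpha level follow the text; the numbers are synthetic and not the study data.

```python
# Illustrative statistical workflow (SPSS was used in the original study; data here are placeholders).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sg_baseline = rng.normal(150, 10, size=28)            # e.g. SBP, supplementation group at baseline
sg_month6 = sg_baseline - rng.normal(8, 5, size=28)    # same women at the 6th month
cg_month6 = rng.normal(150, 10, size=27)               # control group at the same time point

print(stats.shapiro(sg_baseline))              # normality (Shapiro-Wilk)
print(stats.levene(sg_month6, cg_month6))      # homogeneity of variances (Levene)
print(stats.wilcoxon(sg_baseline, sg_month6))  # within-group change over the study (paired)
print(stats.ttest_ind(sg_month6, cg_month6))   # between-group comparison (t-test)
```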
Study population
All fifty-five randomized subjects completed the study. Baseline blood pressure characteristics of the two groups are shown in Table 2 and confirm that the groups were well matched on the entry criteria.
Cholesterol and Fatty acid ingested
The nutritional analysis of cholesterol and of saturated, monounsaturated and polyunsaturated fatty acids in the diet followed by the women is presented in Table 3.
Low levels of cholesterol were observed in the diet of the participants. Monounsaturated and saturated fatty acids were consumed at high levels. Within the saturated fatty acids, the high consumption of palmitic acid (C16:0), together with stearic acid (C18:0), stands out. The intake of polyunsaturated fatty acids was low, especially n-3 fatty acids, and the n-6 PUFA/n-3 PUFA ratio was high. There were no significant changes in these levels during the study. Table 3 also shows the anthropometric characteristics of the women during the study.
Anthropometric results
In the SG, weight increased significantly during the period of fish oil supplementation, and this increase was maintained three months after the end of the study. Skinfold measurements decreased significantly (triceps and subscapular folds, p<0.001) after six months of supplementation, and this decrease was maintained three months after the end of the study. In the case of the abdominal fold, a significant decrease (p<0.001) was observed only three months after the end of the supplementation. No significant changes were observed in the control group. Table 1 indicates that in the supplementation group, SBP decreased significantly at the third (p<0.001) and sixth (p<0.001) months after starting the study, and remained lower three months (p<0.01) after the end of supplementation. With regard to DBP levels in the same group, a decrease was also observed that became significant (p<0.01) at the sixth month of supplementation and was maintained three months after the completion of the supplementation, although without reaching statistical significance. In the control group there were no significant changes in BP during the study.
Sex hormones
In Table 4, we present the results obtained for the sex hormones that may have an important role in the genesis and development of hypertension in the two study groups.
In the SG, β-estradiol declined over the first three months and the decrease became significant (p<0.01) by the sixth month, returning to baseline values three months after stopping the supplementation. β-estradiol concentrations remained within the normal range in all samples. However, DHEA at the beginning of the study was higher than normal values in the two study groups. The SG showed a significant decrease (p<0.001) three months after the onset of supplementation, maintained with the same degree of significance at the sixth month of supplementation.
Total testosterone decreased at the third (p<0.001) and sixth months (p<0.01), although in all cases it remained within the range considered normal. Free testosterone values increased significantly 6 months after beginning the supplementation and maintained the same level of significance 3 months after the end of the supplementation; as before, values remained within normal limits.
The aromatase ratio increased significantly at the 3rd and 6th months and after stopping the supplementation.
No changes were found in any hormone studied in the CG.
Discussion
The aim of the present study was to examine the effect of a moderate long-term supplementation with low doses of fish oil (1.5 g/day) on the development and treatment of hypertension in relation to sex hormones. All participants in this study were hypertensive and postmenopausal women, and significantly older than the groups in other studies [20][21][22]. These women have clear risk factors for cardiovascular disease (CVD): menopause, obesity, high cholesterol levels, and a diet containing small amounts of n-3 fatty acids. In this way, we assessed the effects that fish oil administration (n-3 fatty acids) as adjuvant therapy has on BP and on their sex hormones. Administration of 1.5 g/day of fish oil to the SG decreased systolic (p<0.001) and diastolic blood pressure (p<0.01) (Table 1). Some data have demonstrated the reduction of BP by n-3 PUFAs in essential hypertension [17,18] at doses ≥ 3 g/day. In vitro, animal and human studies have shown that EPA and DHA are differently incorporated into plasma [23,24], platelet membranes [25] and tissue lipids [26]. We think this decline of SBP in the SG could be due to a progressive accumulation of n-3 PUFAs in the body during the supplementation period; it should be remembered that these women have a low dietary intake of n-3 PUFAs, so these low doses of fish oil can significantly reduce SBP, and this reduction is greater when the supplementation period is longer. This accumulation can maintain the significant decrease in SBP for twelve weeks after treatment ends.
Regarding DBP, which decreased during the study, the change was statistically significant (p<0.05) only after 6 months of treatment and only in the SG. As with SBP, the decrease in DBP remained 3 months after completing treatment, although it did not reach statistical significance. These data may indicate the need to maintain the supply of n-3 fatty acids for a longer time in order to achieve greater accumulation in tissues, especially subcutaneous tissue and cell membranes, and thereby obtain larger and more sustained reductions in BP, without giving high doses that may lead to side effects and limit their use.
Weight showed a slight but statistically significant increase in the SG during the study (Table 3). The decrease in skinfolds indicates that this is probably due to an increase in fat-free mass, induced by the increased levels of total and free testosterone at this time, which promote synthesis in muscle cells. This gain of weight is also seen in the study by Gray et al. [22]. The skinfold decreases were statistically significant for the triceps and subscapular skinfolds (Table 3) in the SG, indicating a decrease in peripheral body fat as a result of taking n-3 fatty acids, since there were no changes in nutritional habits during the study and no changes were found in the CG. This decrease in subcutaneous fat is highly relevant to health, since subcutaneous fat is known to place a great strain on the cardiovascular system, increasing cardiac output and leading to a BP increase. This reduction in peripheral fat can improve BP levels, thereby contributing to the BP decrease through this route as well. It should be noted that the other reviewed studies did not measure skinfolds to assess changes in body composition.
Regarding the hormonal levels in the CG and SG, a number of important changes were observed only with the fish oil supplementation.
Prior to menopause, blood pressure is lower in women compared with age-matched men [27,28]. During the menstrual cycle, blood pressure levels are inversely related to circulating estrogen concentrations and lower when β-estradiol levels peak [28], reflecting the vasodilator activity of endogenous β-estradiol [29]. The first decade after menopause is accompanied by an increase in BP. In the seventh decade of life, the prevalence of hypertension among women is even higher than in men, regardless of ethnic background [27,30].
The protective effects of β-estradiol on hypertensive women are based on the acute and long-term vasodilator effects of estradiol mediated in part via generation of endothelium-derived nitric oxide (NO), and they are attenuated by NO inhibitors [31,32]. β-estradiol induces an increase in intracellular free calcium concentration in endothelial cells [33,34], which could contribute to the increase in endothelial-derived NO. Since inhibition of NO synthesis promotes arterial hypertension [35], it is conceivable that estradiol protects against hypertension by increasing NO synthesis.
Regarding β-estradiol in our women, this ovarian steroid hormone decreased, reaching statistical significance at the 6th month in the SG and returning to the initial values three months after the supplementation period. This decrease was accompanied by a significant decrease in total testosterone, its precursor. We think, as is currently recognized, that n-3 PUFA supplementation can raise NO synthesis by endothelial cells, improving the activity of the cell membrane (calcium channels and the cells' hormone receptors) [17], and that n-3 PUFAs can act as endogenous enhancers of parasympathetic tone, suppressing inflammatory events and inhibiting sympathetic overactivity; they can also block β-receptor action. Endothelial cells with good levels of n-3 PUFAs may need lower levels of β-estradiol to produce good control of BP. On the other hand, in Table 4 we can see an increase in aromatase activity, due above all to the low levels of total testosterone, its precursor, indicating greater availability and anti-hypertensive activity of β-estradiol in the cells.
Dehydroepiandrosterone (DHEA), an adrenocortical hormone and possible precursor in the synthesis of total testosterone, presented levels above the normal values in the two groups. The SG showed a significant decline throughout the study, at the 3rd and 6th month (p<0.001). According to Cleare, this hormone has a positive correlation with blood pressure, meaning that higher DHEA levels are associated with higher BP levels [36]. This relationship would be due to the structural similarity that the mineralocorticoid aldosterone has with DHEA. This could be another mechanism by which fish oil intake could lower blood pressure, mainly SBP, which is the most sensitive to the fluid increases associated with rises in this hormone. On the other hand, blood pressure can be modulated indirectly by the action of the enzyme steroid sulfatase (STS), which catalyzes the conversion of estrone sulfate, DHEA sulfate, testosterone sulfate and other forms of conjugated steroids to the free form and to active steroid hormones. When this enzyme acts, large amounts of active steroids (estrogens, DHEA, testosterone and mineralocorticoids) are released; because of this, an increase in BP can be observed, due to an increased extracellular volume. When an inhibitor of this enzyme is given, the release of the free hormones, and thus their activity, is inhibited, and an increase in BP is thereby prevented [37]. On the other hand, administration of DHEA and testosterone in normotensive rats produced renal hypertension [38]. Because of this, the decrease in DHEA and total testosterone from taking fish oil would lead to a decrease in BP by directly decreasing their action on the kidney. Schunkert et al. [39] also showed a positive association between endogenous DHEA and SBP that is independent of the other adrenal steroids. In this way, lower levels of DHEA, as in our study, lower BP levels. Barna et al. [40] found a highly positive correlation (p<0.001) between levels of DHEA sulfate and BP. Therefore, we believe that the decrease in serum DHEA levels would be largely responsible for the BP decreases.
On the other hand, some of the advantageous effects of testosterone observed in males (a decrease of BP) may be due to its conversion to estradiol and estradiol metabolites that can induce an increase in the synthesis of NO [41]. This hypothesis is supported by the finding that the inhibitory effects of dehydroepiandrosterone, a precursor of androstenedione, on atherosclerosis are blocked by the aromatase inhibitor fadrozole [41]. This finding suggests that the sequential conversion of several androgens to estradiol is responsible for their anti-atherosclerotic actions, but whether estradiol is the ultimate mediator remains unclear.
Total testosterone, a steroid produced in the ovaries of women, showed a highly significant decrease throughout the study. This hormone, like estrogen, has actions similar to those of mineralocorticoids owing to its structural similarity, producing hypertension; its decline would therefore lead to decreases in BP, especially systolic pressure. Reckelhoff et al. [15] showed that androgens in general may raise BP by increasing fluid reabsorption in the proximal tubule or by activating the renin-angiotensin system, meaning therefore that the decline in testosterone levels in our women would, by this mechanism, decrease BP. For the foregoing reasons, we believe that the decreases in estradiol, total testosterone and DHEA observed in our study could be due to an inhibition of the enzyme steroid sulfatase (STS) by the fish oil intake, and this would lead to the BP decrease shown in our study.
Both estradiol and testosterone are present in both sexes, in different concentrations and ratios. Endogenous androgens (dehydroepiandrosterone, androstenedione, and testosterone) are readily converted to estradiol by the sequential actions of 17β-hydroxysteroid dehydrogenase (17β-HSD) and aromatase [14]. In our study, the aromatase activity in the SG actually increased at the 3rd (p<0.01) and 6th month (p<0.01), a fact that can explain the decrease experienced by DHEA and TT in our women supplemented with n-3 PUFAs.
For all these reasons, we conclude that taking a low dose of fish oil (1.5 g/day), as adjunctive therapy in hypertensive postmenopausal women, can produce decreases in both systolic and diastolic BP; in the case of SBP, these decreases appear by the third month of intake, whereas more months of supplementation are required to produce similar changes in DBP. These decreases in BP could be due to changes in the levels of sex hormones such as DHEA, β-estradiol, total testosterone or free testosterone. They could also be due to changes in the cellular receptors of these hormones induced by the fish oil supplementation. More studies are needed along this line to clarify the influence these fatty acids may have on sex hormones. | 2019-03-17T13:09:07.770Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "119226369cae9fdf02ccff96c50b66fced859f41",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2473-6449.1000129",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a486d6a68e958f6eeee876aa741bbd064ec7b26d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13751880 | pes2o/s2orc | v3-fos-license | Fish learn collectively, but groups with differing personalities are slower to decide and more likely to split
ABSTRACT We tested zebrafish shoals to examine whether groups exhibit collective spatial learning and whether this relates to the personality of group members. To do this we trained shoals to associate a collective spatial decision with a reward and tested whether shoals could reorient to the learned location from a new starting point. There were strong indications of collective learning and collective reorienting, most likely by memorising distal cues, but these processes were unrelated to personality differences within shoals. However, there was evidence that group decisions require agreement between differing personalities. Notably, shoals with more boldness variation were more likely to split during training trials and took longer to reach a collective decision. Thus cognitive tasks, such as learning and cue memorisation, may be exhibited collectively, but the ability to reach collective decisions is affected by the personality composition of the group. A likely outcome of the splitting of groups with very disparate personalities is the formation of groups with members more similar in their personality.
INTRODUCTION
Organised groups are characterised by cooperative and synchronised behaviour, which allows for better resource acquisition and risk avoidance (Pitcher and Parrish, 1993). However, collective behaviour varies depending on external and internal conditions, e.g. environmental risk levels and inter-group dynamics (Hoare et al., 2004;Sumpter, 2006). On some occasions, such as during foraging, this may require that information about current local conditions is disseminated between individuals within the group and presumably processed collectively by the group (Laland and Williams, 1997). The collaborative use of shared information to solve problems and make decisions is called collective cognition (Couzin, 2009). Although collective cognition may be utilised for various group functions, it is particularly useful for adjusting group behaviour in spatial contexts such as food location or route choice (de Perera and Guilford, 1999;Conradt and Roper, 2005;Couzin et al., 2005). Indeed, group living has been proposed to enhance navigation performance via information-sharing (Simons, 2004). Navigation relies on several behavioural and cognitive processes, such as exploration/sampling effort, decision-making, learning and cue memorisation (Brown et al., 2006). The use of these processes by a group may be limited by the extent to which cognitive or behavioural similarities between individuals facilitate collective responses.
Most studies on group navigation have focused on collective decision-making as a means of choosing between routes while maintaining group structure (Couzin, 2009;Couzin et al., 2005;Conradt and Roper, 2005). Yet individual variation has been noted in important cognitive processes: some individuals may be better at memorising information from their environment (Croston et al., 2016), faster or more successful in their decisions (Chittka et al., 2009) or faster learners (Trompf and Brown, 2014). Interestingly, individual variation in many of these processes has been linked to animal personality (Griffin et al., 2015;Guillette et al., 2016). Animal personality is often described by behavioural traits exhibiting consistent inter-individual differences and intraindividual repeatability (Wolf and Weissing, 2012). A wellstudied trait, boldness, is indicated by exploration tendencies and feeding motivation (Toms et al., 2010), making it a regular predictor of spatial associative learning (e.g. Trompf and Brown, 2014;Mamuneas et al., 2015). Although a prominent hypothesis is that bolder animals are faster but less accurate in their decisions (Chittka et al., 2009), often effects manifest independently of these tradeoffs. For example, bolder fish may be faster at choosing between locations and faster learning rewarded responses, but not less accurate in their choices than more timid animals (Trompf and Brown, 2014;Mamuneas et al., 2015;Kareklas et al., 2017). Regardless of these trade-offs, the effects of personality on cognitive performance may also influence how animals work collectively. In particular, personality differences between individuals may predict how they tackle cognitive tasks collectively; the exploration tendencies and reward motivation of group members could affect how they coordinate responses, how they decide, and how they organise, share and utilise information when learning (Couzin, 2009).
To examine whether collective processes of decision-making and learning are affected by the composition of groups, in terms of the individual boldness of their members, we studied the zebrafish Danio rerio. Fish were first tested as individuals to determine their levels of boldness ( Fig. 1) and were then trained as groups of five, referred to here as shoals, in a spatial-associative learning task. During training, only spatial decisions made by all individuals by reaching a location together were reinforced (reward or punishment), to determine learning specific to a collective response. After reaching a learning criterion, we tested the ability of shoals to reorient, examining their ability to memorise distal cues during training. Animals may simply rely on the memorisation of a response, such as a turning direction, or also on the memorisation of the relative positions of distal cues (Tolman et al., 1946;Burgess, 2006). Associating a memorised response to a rewarded location relies on orienting from a familiar starting point. In contrast, the additional memorisation of distal cues can facilitate reorientation from novel starting points by attending to changes in the relative position of these cues towards the correct location ( place learning; Rodriguez et al., 1994). Therefore, reorienting can identify whether learning relies on composite strategies that utilise the memorisation of the relative position of distal cues or simple associations of location to directional-response.
First, we tested the hypothesis that collective decisions, learning and memorisation are related to mean boldness levels, with shoals of bolder composition differing from those with shier composition. Second, we tested the hypothesis that collective decisions, learning and memorisation are predicted by the variance in boldness among shoal members, because large differences in personality inhibit agreement or cooperation. Based on effects by personality composition on group response time in other shoaling species, we expected decision times to be related to boldness, being generally faster for groups of bolder individuals (Dyer et al., 2009). The learning of a collective response and memorisation strategies, such as place learning, have only recently been experimentally studied in fish groups (McAroe et al., 2017), noting both the facilitation of visual-cue memorisation and faster learning by zebrafish in groups. However, the effects of the personality composition of groups on these group processes have not been examined. We predict that links to personality may be indicated due to either differences between individuals in their response tendency or their performance in particular cognitive tasks, with more variable groups reaching lower agreement and cohesion (Ioannou and Dall, 2016), and overall bolder groups being faster to decide and associate food reward to a location [such as in individuals, e.g. Griffin et al. (2015); Guillette et al. (2016); Kareklas et al. (2017)].
Collective decisions
All shoals reached collective decisions within the time limit (<5 min) in both the initial and probe trial, but some tended to split before reaching a decision (please see the supplemental information). No significant differences were found between the initial trial (before training) and the probe trial (after training) for either decision times (R²=0.017; P>0.05) or the probability of splitting (R²=0.02; P>0.05), suggesting consistency in collective behaviour and limited effects from differing individual learning during training. The mean boldness of shoal members did not significantly contribute to the probability of splitting (R²=0.016; P>0.05; Fig. 2A), and although shoals with members of greater mean boldness exhibited shorter decision times (R²=-0.73; Fig. 2A) the relative effect was not significant (P>0.05). The only significant predictor was variance in shoal-member boldness, which strongly predicted both collective decision times (R²=0.816; F(1,20)=9.19, P=0.008) and the probability of splitting (R²=0.482, χ²(1,20)=13.26, P<0.001). Groups with greater variance in boldness between their members were more likely to split and took longer to collectively reach an arm (Fig. 2B). Further, consistency in splitting across trials was noted for shoals with greater variance in boldness (ANOVA, F(3,10)=15.93, P=0.002, R²=0.820; Fig. 2C) and collective decisions took longer when splitting occurred than when not (Welch's t=4.15, P=0.002; Fig. 2D).
Decision accuracy (number of erroneous decisions during training) was only weakly predicted by the mean of shoal-member boldness (R²=0.127; χ²=8.19, P<0.05), but was not significantly predicted by the probability of splitting (R²<0.04; P>0.05). Contrary to predicted speed-accuracy trade-offs (Chittka et al., 2009), the number of erroneous decisions during training did not significantly correlate with the time shoals needed to decide in either the initial or the probe trial (rs<0.2, P>0.05).
Collective learning
All shoals met the collective learning criterion of all fish being simultaneously in the rewarded location for eight/ten trials over three consecutive days (Fig. 3). The rate of learning (number of days to reach criterion) was negatively related to the number of erroneous choices during training (i.e. choosing the punished arm) (R²=−0.945, χ²(1,10)=3.99, P=0.046; Fig. 3). However, learning rate was not significantly predicted by the variance and the mean of shoal-member boldness, or the likelihood of splitting (R²<0.04; P>0.05).
At probe trials from the new starting point in the top arm, which was blocked during training, all shoals reached one of the arms collectively (i.e. were at the same arm together before the 5 min), but the ability to reorient to the arm rewarded during training was unrelated to the variance and the mean of shoal-member boldness or the likelihood of splitting (R²<0.04; P>0.05). Indeed, the majority of shoals (eight/ten) showed preference for reaching the rewarded arm significantly more than predicted by chance (proportion>0.5: z(10)=1.90, P=0.029; Fig. 3).
DISCUSSION
To collectively reach one of two locations, groups must maintain cohesion and structure. This relies on interactions between the individuals comprising the group, a process known as self-organisation (Sumpter, 2006). The interactions facilitate information sharing (Couzin, 2009; Ward et al., 2011) and in fish this can be in the form of changes in swimming direction, where swimming towards a location by some individuals propagates through the group (Croft et al., 2003). The extent of the propagation is indicated by the time needed by all individuals to change direction together, which can be limited by individuals deciding to act otherwise (Couzin, 2009; Ward et al., 2008). Here, our findings implicate personality differences between group members in this process. Groups with greater variance in boldness between their members were consistently more likely to split and took longer to collectively reach an arm (Fig. 2B,C). Given collective decisions took longer when splitting occurred than when not (Fig. 2D), we conclude that the splitting of groups with members more dissimilar in their boldness results in collective decisions taking longer to be reached. The involvement of personality in collective decision speed may reflect a greater tendency by bolder individuals to reach food-rewarded locations (Kareklas et al., 2017).
The relationship of personality differences with cohesion and collective-decision speed proposes that high-variance groups might be disadvantaged when competing for spatially distributed resources. A study on guppies Poecilia reticulata did not find mixed groups more disadvantaged than bold groups, but faster at reaching food than shy groups (Dyer et al., 2009). Differences in the effects of personality may depend on the species, but the study in guppies also utilised a categorical separation of bold and shy to compose groups. In contrast, here we measured the variance in boldness score within randomly assembled groups. A higher variance in our shoals is most likely due to the presence of extremely shy individuals, according to individual latency distributions (Fig. 1). The direct effects of high variance on splitting are unclear, as we did not track individuals, but they are possibly driven by intra-group differences in exploration and approach tendency between more greatly differing personalities (Toms et al., 2010) and possibly due to related differences in sociality (Ward et al., 2004;McDonald et al., 2016). Another possibility is that differences in boldness correspond to differences in decision-making strategy (Griffin et al., 2015;Kareklas et al., 2017), which again would require identifying consistencies in the position individuals occupy in a shoal. Further, different types of splitting may represent different processes. Lateral fission may reflect individuals being less social and actively seeking to split, but rear fission may be the result of either active splitting or passive restraints (Croft et al., 2003), such as being more fearful and timid (Toms et al., 2010;Kareklas et al., 2017). The splitting of groups with very high variance in personality could possibly lead to the formation of groups with lower variance in personality. While this is yet to be tested, it could be a way for groups to ensure that agreements are reached more easily. Indeed, larger differences in personality can manifest effects on the way fish socialise, cooperate and prioritise reward or risk (Ioannou et al., 2015). Alternatively, splitting might be an effect of hierarchical dynamics, with leader initiations and follower delays relying on similarities in personality aspects such as boldness and flexibility (Ioannou and Dall, 2016).
Contrary to expectations that personality differences have an effect on both speed and accuracy due to trade-offs (Chittka et al., 2009), the number of erroneous decisions during training was independent of how fast fish in a shoal reached a location together. However, shoals that made fewer erroneous collective decisions during training reached the learning criterion faster (Fig. 3). This negative association between erroneous trials and learning rate is consistent with learning by positive reinforcement, given shoals with fewer errors would collectively reach the rewarded arm more frequently during training (Brown et al., 2006), but suggests a low effect from negative reinforcement by the mild punishment of erroneous trials. Interestingly, the majority of shoals (eight/ten) re-oriented at probe trials to the location rewarded during training (Fig. 3). This indicates that most shoals did not simply use a learned response for collectively reaching the rewarded arm, e.g. turn direction, but learned the place of the reward. Place learning is proposed to involve allocentric processes, where positions of distant cues in relation to a target are memorised and reorientation is possible (Tolman et al., 1946; Rodriguez et al., 1996). Although this may involve cognitive mapping (mental representations of space using the relative positions of landmarks), other cue-based strategies are difficult to exclude, e.g. beaconing to large cues near the goal (Bennett, 1996). Most notably, individual zebrafish D. rerio can take longer to learn and do not prefer place over response learning (McAroe et al., 2016). Thus, being in a shoal can facilitate both learning efficiency and the use of learning strategies that rely on the memorisation of cues and not solely of simple directional responses. This has been exemplified recently in a study comparing shoals to individual zebrafish, where only shoals were able to exhibit place learning (McAroe et al., 2017). This is enabled in fish groups by social learning (Laland and Williams, 1997; Trompf and Brown, 2014), cooperative vigilance and information sharing (Pitcher and Parrish, 1993; Miller and Gerlai, 2011).
In contrast to models predicting that cohesion and individual differences in behaviour may affect collective behaviour and learning (Couzin, 2009), we found no strong evidence of personality or splitting having any significant influence on collective learning or accuracy. Decision accuracy and learning may instead be influenced by inter-individual differences in experience, attention, acquisition and cue perception (Couzin, 2009;Kao et al., 2014). Indeed, in the absence of effects from individual behavioural phenotypes, based on personality, differences in individual experience and a balancing between personal and shared information in the group are both very likely alternative factors (Miller et al., 2013). Otherwise, groups may rely on the leadership of more experienced or reward-driven individuals (de Perera and Guilford, 1999;Krause et al., 2000). For memorisation strategies in particular, there is evidence that individuals can use cue and response based strategies together and often animals reverse between strategies over training times (Packard and McGaugh, 1996;Burgess, 2006). These processes could carry over in collective learning and this can be tested by repeated probe trials during collective training.
Fig. 1. Latency distributions on a logarithmic scale for the novel-object and feeding test, as exhibited by individuals (n=50) ranked by their composite boldness score.
Although our study did not include analysis of any kinematic data, recent work has increasingly shown the benefit of identifying behaviour-specific movement bouts (Marques et al., 2018) and for assessing how the solitary movement patterns of group members affect collective swimming patterns (Marras et al., 2015). This would provide more evidence for the individual effects on collective decisions and learning, and could identify the extent to which effects from individual motor behaviour are related to personality [e.g. bouts related to risk response or approach; Marques et al. (2018)] or other phenotypic factors, such as morphology (Conradsen and McGuigan, 2015). While these effects remain to be examined, here we show that zebrafish can learn to reach collective spatial decisions for rewards and utilise place memorisation strategies to do this, but that collective decisions are biased by personality differences.
Animals and housing
Naïve adult male zebrafish D. rerio (n=50) were acquired from a local supplier, Grosvenor Tropicals, Lisburn, Northern Ireland. Given the supplier was not informed on strain variations in their stock, we used only males that show no strain preferences for shoaling (Snekser et al., 2010), which also removed the chance of mating during group living and controlled for sex-related differences in boldness. Fish were housed in tanks (26 cm W×36 cm L×30 cm H; 26±2°C and 7.4±0.4 pH dechlorinated tap water) enriched with fine sediment, plants and plastic pipes. Photoperiods were 12 h long (0700-1900) and feeding was daily (TetraMin® flakes).
Fig. 2. Shoal cohesion (probability of splitting) and consequent effects on collective decision-times were influenced by individual boldness differences, but were not linked to majority averages in boldness. (A) The mean boldness of shoal members (5% trimmed to exclude biases by extremely bold or timid fish) had a negative, nonsignificant, effect on mean decision times between initial and probe trial (black line and marks), but no effect on splitting probability (grey curve and marks) as indicated by regression models (decision times: linear, probability of splitting: binomial). (B) In contrast, the variance in boldness within shoals (mean average deviation of all fish) positively predicted the probability of splitting at probe and initial trials (grey curve and marks) and the mean decision times between initial and probe trial (black line and marks). (C) The level of consistency in splitting between initial and probe trials was greater for shoals with higher variance in boldness.
Behavioural tests for boldness
Following a week-long acclimation to individual housing (tanks filled to 15 L with view of neighbours to reduce isolation effects), the boldness of each fish was assessed in their housing tank by measuring consistency in their approach latency towards novelty between two contexts often used to test differences in boldness [see review by Toms et al. (2010)]. First, novel-object inspection was tested by the time fish took to reach ∼1.5 body-length distance from a 7 cm toy after it was lowered by a pulley system to the bottom of the tank, as estimated by viewing through a screen with a grid from above. Second, feeding motivation towards an unusual food was tested by recording the time fish needed to initiate feeding on chironomid larvae (released by forceps), which had not been previously offered to the fish in the laboratory. Opaque sheets visually separated each group from the others and shielded the observer during tests. Observations were made via a Sony HDR CX190E handycam video camera. Fish had not been fed for ∼24 h prior to testing. Both tests were 5 min in duration, carried out at 11:00-13:00, with a 48 h interval between them and in the same order for all fish to control for carry-over effects [see Kareklas et al. (2017)]. As would be expected for the expression of personality traits, like boldness (Toms et al., 2010; Wolf and Weissing, 2012), latencies were found to be consistent between contexts (Cronbach's α=0.803; Pearson's r=0.844) and used to calculate composite boldness scores. Greater latencies are linked to lower boldness (Toms et al., 2010), thus the standardised sums of latencies from both tests were used as scores (z-values) and inverted in sign (positive or negative) to rank by increasing boldness (Fig. 1).
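As a rough illustration of the composite scoring described above, the sketch below standardises two hypothetical latency vectors, sums them and inverts the sign so that higher scores indicate bolder fish; the latency values are invented, and the exact order of standardisation and summation used in the original analysis is an assumption here.

```python
# Minimal sketch of the composite boldness score; data are hypothetical, not from the study.
import numpy as np

novel_object_latency = np.array([12.0, 45.0, 300.0, 8.0, 150.0])  # seconds to approach novel object
feeding_latency = np.array([20.0, 60.0, 280.0, 5.0, 170.0])       # seconds to start feeding

def zscore(x):
    return (x - x.mean()) / x.std(ddof=0)

# Standardise each context, sum, and invert the sign so that shorter latencies (bolder fish)
# receive higher composite scores.
boldness_score = -(zscore(novel_object_latency) + zscore(feeding_latency))

# Consistency between the two contexts (Pearson's r), analogous to the check reported above.
r = np.corrcoef(novel_object_latency, feeding_latency)[0, 1]
print(np.round(boldness_score, 2), round(r, 3))
```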
Collective tests for learning
Following individual behavioural tests, fish were randomly sorted in shoals of five (n=10) and housed together (tanks filled to 25 L) for a further week and then trained in a plus maze (four-arm maze constructed from acrylic sheets; each arm measuring 15 cm W×30 cm L). During training internal landmarks were unavailable, but visual cues were available outside the maze, including white paper sheets on a distant wall, adjacent tank tops and the camera arm above the tank. To control for inter-shoal differences by differing information, these external cues and their locations were kept constant during trials and for all shoals. Shoal trials started in the bottom arm and the top arm was blocked during training. Trials commenced by removing an opaque divider that kept shoals constrained in the starting arm for 2 min. Shoals were then presented with the two remaining arms, left or right, with 5 min to choose between them. A collective decision was indicated by all individuals being in the same arm at the same time, training them to associate a collective decision towards one arm with a reward and towards the other arm with a mild punishment. The choice of direction, left or right arm, for the rewarded and mildly-punished arm was randomised across shoals. When reaching the arm randomly assigned to be food rewarded, shoals were blocked in until each fish received 1-2 chironomid larvae (individual feeding latency was <5 s). However, in the unrewarded arm they were blocked in for 2 min and not fed [mild punishment;McAroe et al. (2016); Kareklas et al. (2017)]. Following their choice, fish were gently guided by a net to the starting arm. After each trial, the tank water was disturbed to minimise use of olfactory cues. Shoals had ten such trials daily until reaching a learning criterion of a minimum of eight/ten correct trials (i.e. collectively choosing the rewarded arm) on three consecutive days. The learning criterion corresponds to a learning plateaux and success rates exceeding 24/30 correct trials, which differ from chance (15/30) at the 0.1% level. Shoals were given a single probe trial 24 h after reaching the learning criterion, which started from the previously blocked top arm. This tested if fish were able to collectively reorient to the rewarded arm from a novel starting point, via the memorisation of the relative positions of the distal cues during training (Rodriguez et al., 1994). The probe trial was unrewarded to control for the use of olfactory cues.
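A quick, hedged check of the chance comparison quoted above (success rates exceeding 24/30 correct trials versus a chance expectation of 15/30) can be made with an exact one-sided binomial tail; the snippet below is purely illustrative and reproduces the stated 0.1% level.

```python
# Exact one-sided binomial tail for 24 or more correct collective choices out of 30
# training trials under chance (p = 0.5).
from math import comb

n, k, p = 30, 24, 0.5
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(X >= {k} | n={n}, p={p}) = {p_value:.5f}")  # ~0.0007, i.e. below the 0.1% level
```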
Reaching the correct arm during probe trials showed the ability to reorient by using distal landmarks, i.e. place learning. By contrast, a failure to reach the goal arm in the probe trial was considered the result of learning to go left or right during training, i.e. response learning (McAroe et al., 2016, 2017). Collective decision speed, measured until the last fish of the group passed the mark to either arm (given all other fish were already in the same arm to designate a collective choice), was recorded only for the first training trial (novel task) and the probe trial (novel starting point). Decision times were used only from these two trials because their novelty controlled possible effects of familiarity and experience of making a particular decision; decisions from other trials during training could be biased by reinforcement from previous trials and thus not be representative of a novel decision. In addition, measuring times at two relatively novel trials, one before and one after training, allowed us to examine if novel decisions are affected by the experience of training as a group. Comparisons before and after training further enabled us to test consistency in the effects of intra-group boldness on decision-making and to test for effects by individual-level learning. Before reaching collective decisions in these trials, some shoals exhibited splitting: individuals either stayed behind in the starting arm while others had chosen between left or right (rear fission) or went in a different direction, reaching the opposite arm from the rest (lateral fission) (Croft et al., 2003). The distance needed to travel between arms (centre to centre) was ∼27 cm or five zebrafish body-lengths (4-6 cm), and was thus considered sufficient to indicate splitting. We recorded the occurrence of any type of splitting as an inverse measure of cohesion. If fish reached an arm together within the 5 min recording time, any splitting was noted and the collective decision was recorded as either correct (rewarded arm) or erroneous (unrewarded arm). Alternatively, if no choice was reached, any splitting was again recorded, but we did not count the trial as either correct or erroneous. Decision accuracy was measured by the total number of erroneous trials throughout training, because the number of correct trials can also be influenced by fish not choosing. The number of training days to reach criterion indicated learning rate.
Fig. 3. Shoals that made more erroneous trials during training (black bars) also took more days to learn (grey bars), but a greater-than-chance majority of shoals was able to memorise place. Inset: proportion of shoals reorienting at probe trial, showing place learning. Shoals (n=10) are ordered by increasing number of error counts and marked (cross) if they showed place learning (*P<0.05, binomial test).
Analysis
Calculations, analyses and graphical representations were all carried out in the Minitab® statistics software (version 17; Minitab Inc., State College, USA). The proportion of shoals reorienting at the probe trial was first tested against chance levels (0.5) by a binomial-proportion test. Speed-accuracy trade-offs were tested by rank correlations between time to decide and the number of erroneous trials during training (Spearman's rs) (Chittka et al., 2009). Decision times from initial and probe trials were found to be normally distributed. Comparisons between trials where any splitting occurred and trials where no splitting occurred were tested by Welch's t-test, which does not assume equal variance and sample size. Individuals could not be identified during collective tests because the week-long group acclimation period prevented us from continuously tracking them, and methods of tagging were unavailable. As a result, we could not identify particular individuals with a known boldness score, but we could compare groups of differing composition in terms of individual member boldness. Therefore, regression models (linear for decision times, Poisson for number of days to learn and number of erroneous trials during training, and binary logistic for splitting probability) tested whether each measure was predicted by the mean (5% trimmed to limit bias by minority fish with extreme phenotypes) or the mean absolute deviation of shoal-member boldness (variance across all fish). Individuals with personality tendencies on the extreme ends of our distribution, mostly very shy individuals (Fig. 1), can skew both the mean and variance, making it impossible to assess them as having a different effect, i.e. effects by the slowest individual would appear both in the mean and variance. However, by removing the extreme ends of the group (5% trimmed) we extracted mean values for shoals that represent the majority of their members and are not biased by a single very timid fish. Conversely, the variance measure includes these extreme personalities. This enabled differentiation between effects by the majority average (trimmed mean) and the extremes (variance). Models testing decision speed and splitting additionally tested differences between initial and probe trial (categorical predictor; effect of learning) and included shoal number as a random effects term to avoid pseudoreplication. Post-hoc comparisons of consistency in splitting were carried out for boldness measures that were found related to splitting, using a one-way ANOVA to test if shoals which had split in one, two or zero trials differed in boldness measures.
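The sketch below illustrates, with hypothetical boldness scores, how the two shoal-level predictors described above (the 5% trimmed mean and the mean absolute deviation of member boldness) could be computed in Python; the original calculations were run in Minitab, and how a 5% trim maps onto a five-member shoal is an assumption here.

```python
# Hedged sketch of the shoal-level predictors; the shoal compositions below are hypothetical.
import numpy as np
from scipy import stats

# Composite boldness scores of the five members of each shoal (invented values).
shoals = {
    "shoal_1": np.array([0.8, 0.5, 0.3, -0.1, -2.4]),   # one extremely shy member
    "shoal_2": np.array([0.2, 0.1, 0.0, -0.2, -0.3]),   # homogeneous shoal
}

for name, scores in shoals.items():
    # 5% trimmed mean of member boldness (with only five members, a 5% cut trims no fish;
    # the exact trimming rule used in the paper is an assumption here).
    trimmed_mean = stats.trim_mean(scores, proportiontocut=0.05)
    # Mean absolute deviation across all members: the within-shoal boldness variance measure.
    mad = np.mean(np.abs(scores - scores.mean()))
    print(f"{name}: trimmed mean = {trimmed_mean:.2f}, boldness MAD = {mad:.2f}")
```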
Ethical note
All applicable animal-welfare guidelines were followed (ASAB, 2016). Veterinary inspections by DHSSPS, Northern Ireland, deemed no need for licensing. Following the conclusion of the study, animals were kept for separate non-invasive tests. | 2018-05-08T05:00:07.461Z | 2018-05-01T00:00:00.000 | {
"year": 2018,
"sha1": "320f4a75c8c150c4a8b565e0a40221da76ce01a2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1242/bio.033613",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "869d293d5f9b11f4ec0fbebcf650aa694d412ba3",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
216270894 | pes2o/s2orc | v3-fos-license | A Comprehensive Insight into Game Theory in relevance to Cyber Security
ABSTRACT
INTRODUCTION
Modern-day communication technology, as well as information, are progressing rapidly in terms of diversity and sophistication level. The growing connectivity, pervasiveness and complexity of current information systems pose novel challenges to security, with cyberspace becoming a playground for people with varying skill levels and intentions, positive or negative. Thus, the protection of assets, identities and information is gaining more and more importance, since constant connectivity has become an essential part of people's daily lives [1]. Besides, the social well-being and economic progress of a nation are increasingly becoming reliant on cyberspace. The increasing interconnectivity, as well as the rise in the computational resources available to attackers, offers them provision for sophisticated, unpredictable and distributed attacks [2]. Attackers can, therefore, cause disruptions to critical infrastructures, including financial networks, telecommunications, energy pipelines, electrical power, refineries, etc. [3]. Recent events reveal the damage cyber-attacks can cause to private enterprises, governments and the public in general, in terms of reputation, data confidentiality and money [4]. For more than twenty years, the research community has been paying attention to the field of cyberspace security. Nevertheless, the cybersecurity issue has still not been solved completely. In this paper, the applicability of game-theoretic methods in solving cybersecurity issues has been explored.
Continuous advancements have been accomplished by traditional security methods in the protection of well-defined goals, viz. confidentiality, integrity and availability (CIA). Cryptography is one such strong theoretic security foundation that depends on the secrecy of cryptographic keys. But in social engineering attacks or Advanced Persistent Threats (APTs), where attackers steal whole cryptographic keys, the assumption of key confidentiality is violated, leading to the penetration of the systems [1]. A novel theoretical foundation and an innovative perspective are required for capturing the scenarios where attackers can compromise systems thoroughly, and defenders can secure the systems without the fundamental key secrecy assumption.
The limitation of conventional security solutions is the non-existence of a quantitative decision framework. As such, many research groups have begun encouraging the employment of game-theoretic methods. Since game theory handles the problems where several players with opposing goals compete with one another, it can offer a mathematical framework for the modelling and analysis of network security issues.
The models based on game theory are natural frameworks for capturing the defensive as well as adversarial interaction among players [5,6]. Game theory can offer a quantitative measure of the security quality provided using the Nash equilibrium concept where defender, as well as attacker, look for optimal strategies and none has the incentive for unilateral deviation from their equilibrium strategy regardless of their opposing security goals. This equilibrium concept further, quantitatively, predicts the security outcome of the scenario being captured by the game model. Game theory, thus, provides manageable security with its quantitative security measure, unlike the qualitative measure assured by cryptographic security. Furthermore, the game-theoretic approach can be extended to mechanism designing, allowing the system designers to shift the equilibrium as well as the predicted outcome in favour of the defender utilizing an intricate game design.
The interest in the field of game and decision theory has grown for more than a decade, and it has become a well-proven, systematic, strong theoretical foundation of the present-day security research. Game theory espouses a distinct and economic perspective of security, not the same as the standard definition, i.e., security is not the nonexistence of threats, but the stage where attacking a system is more expensive than not attacking. Therefore, beginning from the game-theoretic base attains the most sophisticated self-enforcing protection by evaluating and generating incentives for encouraging honest behaviour instead of thwarting maliciousness. Simultaneously, the economic approach to security is essential as well since it is analogous to the progression of attackers in the present day. Cybercrime has developed into a fully-featured economy with the maintenance of supply chains, black marketing and mostly resembles an illicit counterpart of the legal software market [1]. Although conventional security forms a significant base for dealing with the problem from below, game theory provides a top-down approach by the adoption of strategic and economic perspectives of the attackers as well and thus complements technological security methods. The ideal stage is attained when the two routes taken up converge towards the middle, and this is the goal of game theory.
The rest of the paper is organised into various sections with Section 2 discussing the cybersecurity in detail, followed by the discussion on game theory aspects in Section 3. The relation between game theory and cybersecurity has been presented in Section 4, and the various categories of games have been elucidated in Section 5. Section 6 comprises of the illustration of varied game models that are applied in cybersecurity and Section 7 discusses the bridging of game theory and cryptography. Finally, the future research directions and concluding remarks have been given in Section 8 and 9, respectively.
CYBER SECURITY
Before defining cybersecurity, cyberspace needs to be determined. As per National Security Presidential Directive 54, "Cyberspace is the interdependent network of information technology infrastructures including the Internet, embedded processors, computer systems, controllers in critical industries and telecommunication networks" [52]. The social well-being and economic development of a nation are now becoming reliant on cyberspace.
Furthermore, the term 'cyber' is also linked with various other genres such as cybergoth is associated with music; cyberpunk is a kind of novel based on fiction; cybercrime involves crimes done using computers, and cyberbullying is bullying anyone on social media or internet [53].
Cybersecurity has been defined in the Oxford English Dictionary [54] as, "The state of being protected against the criminal or unauthorized use of electronic data, or the measures taken to achieve this". Practically, it implies that any criminal-based or unauthorized use of electronic devices or data is regarded as a cyber threat. Manipulation of physical assets is also considered a threat to cybersecurity. Nevertheless, the line between information security and cybersecurity is very thin, since the issues in cybersecurity can be transformed into those of information security, and vice versa, in several instances. Some public sources even consider both to be synonymous terms. However, cybersecurity involves human factors, such as people as cyber-attack targets, unlike information security [55].
European Union Agency for Network and Information Security (ENISA) considers cybersecurity as a collective term of various realms that include communication security, operation security, information security, military security and physical security [56]. The relation between the domains is shown in Figure 1.
[Figure 1. Cybersecurity as the umbrella term covering communication security, operation security, information security, military security and physical security]
The function of each of these domains is the security of its respective area. Table 1 discusses the various domains, along with their objectives, in a detailed manner.
Table 1. Security domains and their objectives:
Communication Security: Security against a threat that attempts to affect the technical setup and influence specific values in a way not anticipated by the designer or owner.
Operation Security: Securing against threats that attempt to influence workflows or processes into undesirable outcomes.
Information Security: Securing data saved in cyber systems against risks of deletion, manipulation or theft.
Physical Security: Security against illicit use, and securing physical assets of cyber systems like network components, storages or servers.
Military Security: Protection against threats to physical assets that have strategic, military or political flavours.
As is evident from Table 1 and Figure 1, each security domain focusses on its forte; nevertheless, in the end, all of them are linked to cybersecurity. From this perspective, cybersecurity can be understood as the umbrella term for all the security domains.
The definition of cybersecurity given in Table 1 depicts the relationship between various components. Notably, cybersecurity is not limited to technical security of the environment, but it also includes potential threat sources, assets and the fundamental elements associated with organizations such as the CIA.
Confidentiality, Integrity and Availability, or CIA, is a significant concept associated with cybersecurity. While defining cybersecurity policies and information security on an organizational level, the CIA is considered a fundamental element [53].
Figure 2. Interdependence of cyber network and physical systems.
Figure 2 shows how cyber networks are interconnected with physical systems that comprise crucial infrastructures like the communication network, subway and power grid. The physical system components such as subway stations can function properly only if the cross-layer nodes (like surveillance cameras and power substations) and other subway stations work correctly. This interdependent nature of the cyber-physical infrastructure paves the way for coordinated attack exploitation, leveraging the susceptibilities in cyber networks and physical systems and increasing the attack probability as well as the infrastructure failure rate [57]. For instance, cyber-attacks can be used by a terrorist for compromising surveillance cameras of a government building, public place or an airport and planting a bomb stealthily without being sensed physically. The physical loss inflicted on the infrastructure systems may also aid the attackers in intruding into cyber systems like control rooms and data centres. Therefore, both physical as well as cyber-infrastructure failures might lead to detrimental outcomes. Furthermore, the physical, logical and cyber connectivity of the setups leads to interdependencies and dependencies between components and nodes within and across the infrastructures. Consequently, the failure of an element might cause cascading failures in several arrangements. For mitigating these cyber-physical threats, designing effective defence mechanisms is essential for hardening the physical security and cybersecurity at the infrastructure nodes to secure them from failures.
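To make the cascading-failure idea concrete, the hedged sketch below propagates a single failure through a small, entirely hypothetical cyber-physical dependency graph; the node names and dependency edges are invented for illustration and are not taken from any cited model.

```python
# Hedged illustration: breadth-first propagation of one failure through a hypothetical
# dependency graph of cyber and physical infrastructure nodes.
from collections import deque

# supports[x] -> nodes that fail if x fails (i.e. nodes that x supports).
supports = {
    "power_substation": ["surveillance_camera", "subway_station_A"],
    "surveillance_camera": ["control_room_feed"],
    "subway_station_A": ["subway_station_B"],
    "control_room_feed": [],
    "subway_station_B": [],
}

def cascade(initial_failure):
    """Return the set of nodes taken down by a single initial failure."""
    failed, queue = {initial_failure}, deque([initial_failure])
    while queue:
        node = queue.popleft()
        for dependent in supports.get(node, []):
            if dependent not in failed:
                failed.add(dependent)
                queue.append(dependent)
    return failed

print(cascade("power_substation"))
# e.g. a compromised substation takes down cameras, subway stations and the control-room feed
```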
Together with the development in cyberinfrastructure, cyber risks are growing at a rapid pace. The conventional cybersecurity technologies are focussed on common threats but are not meant for the infrastructures with heavy traffic. Game theory provides a better approach for dealing with cybersecurity issues than traditional security solutions.
ASPECTS OF GAME THEORY
Nowadays, the research community is making all the efforts possible to bring effective network security solutions to organizations; and one such resolution is the utilization of theories suitable for real-life situations for developing mitigation methods. The game theory falls in the category of one such approach that is being explored by the researchers. Over the years, researchers have been investigating the applicability of game-theoretic methods for dealing with cybersecurity; and many of those methods have been successful. Game theory helps in understanding scenarios where there is an interaction among the decision-makers in some way. In the regular sense, a game can be considered as a competitive activity in which the players compete with one another based on a distinct rule-set or the moves that have already been laid down [58].
With growing distributed and infrastructure-less systems, game theory has also found applicability in the security of decentralized communication systems viz. wireless sensor networks (WSN) [59]. Such a security scenario, involving the interaction of attacker and defender, can be precisely mapped to a game among the players where every player tries their best to increase their profit. Most importantly, game theory is perfectly suitable for such a security model since the action taken by the attacker or defender relies on the behaviour of the opposite party [60][61][62].
Lately, the extensive application of game theory in the field of security is categorized into security games. Besides cybersecurity, game theory has applications in many other spheres such as politics, sciences, economics, auction, finance, etc. This paper reviews the game theory applications in cybersecurity.
Definition of a game
Applying mathematical analysis of cooperative and/or individual behaviours among players selecting a specific action/strategy to fulfil their self-interests can be understood as game theory [63].
The definitions of the basic parameters of a game have been given below:
1. A game is defined as the strategic interaction among cooperating or opposing interests, taking into account the payoffs for the actions taken by the players and the constraints, without revealing anything about the actual steps carried out [58].
2. A player is the fundamental game entity. A game involves a finite player set (depicted by N), and every player i ∈ N takes logical actions from an action set (illustrated by A_i). A player can be a machine, a group of people or an individual in the game.
3. The payoff/utility is the negative or positive reward given to players for their actions in the game. It is signified as u_i : A → ℝ, which measures the result for player i based on the actions of all players, where A = ×_{i∈N} A_i, ℝ is the set of real numbers, and × indicates the Cartesian product.
4. A strategy implies the plan of action that can be adopted by a player in the game; the game can be represented in the strategic/extensive form as
Game = 〈N, (A_i), (u_i)〉 (1)
Game theory is a description of a multi-person decision scenario as a game where every player selects an action that leads to the best reward for themselves while expecting logical actions from the rest of the players. A typical game, when applied to game theory, is characterized by four fundamental features, which are: a) two or more players, b) competing nature, c) rules governing each game, and d) payoffs for every player.
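As a minimal sketch of the strategic-form tuple in Eq. (1), the snippet below encodes a two-player game with a player set, action sets and a payoff function; the defender/attacker labels and the payoff values are hypothetical and only illustrate the data structure.

```python
# Minimal sketch (not from the paper) of Game = <N, (A_i), (u_i)> as Python data structures.
players = ["defender", "attacker"]                      # N
actions = {"defender": ["patch", "ignore"],             # A_defender
           "attacker": ["exploit", "refrain"]}          # A_attacker

# Payoff function: maps an action profile to each player's reward (hypothetical values).
payoffs = {
    ("patch",  "exploit"): {"defender": -1, "attacker": -2},
    ("patch",  "refrain"): {"defender": -1, "attacker":  0},
    ("ignore", "exploit"): {"defender": -5, "attacker":  3},
    ("ignore", "refrain"): {"defender":  0, "attacker":  0},
}

def u(profile, player):
    """Utility u_player(profile) for an action profile (defender_action, attacker_action)."""
    return payoffs[profile][player]

print(u(("ignore", "exploit"), "defender"))  # -5: an unpatched system that gets exploited
```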
Nash equilibrium
An important game-theoretic concept is the Nash Equilibrium that is defined as the intersection point of best responses, i.e., every player plays their best response against the actions of the rest of the players [58]. In general, Nash equilibrium is an intelligent solution to social problems that have become a favourable concept for wireless sensor network security [64,65].
Nash equilibrium is a solution concept describing the steady-state game condition; no player would desire to modify its strategy since it may decrease their payoffs provided the rest of the players are following the stipulated policy.
However, this solution concept indicates the steady-state in the game without specifying how to reach such a state. Although various other solution concepts are utilized occasionally, Nash equilibrium is the most prominent. This information shall be employed for defining games having relevant characteristics to represent network security issues [58].
Let us assume the strategy profile for an n-player game is
s* = (s_1*, s_2*, s_3*, … , s_n*) (2)
where s* denotes the Nash equilibrium. If every player i has a payoff function u_i, then
u_i(s_i*, s_{-i}*) ≥ u_i(s_i, s_{-i}*) (3)
must be valid for every player i, with s_i* denoting the equilibrium action of player i and s_{-i}* indicating the equilibrium actions of the rest of the players.
For each game, two significant concepts hold, viz. common knowledge and rationality. Common knowledge comprises of the earliest comprehension of results as well as the mutual knowledge of every player about the results. Rationality, on the other hand, specifies the consistency in decisions without taking into account the likes/dislikes of the players within the game [66].
Take into consideration a simple game, the Prisoner's Dilemma, which involves only two players. The matrix representation of the game is given in Figure 3. The police have arrested two people who have been accused of the possession of guns. They are suspected of having committed a crime; if neither of them confesses, they both shall be put behind bars for a one-year term. But if only one of them confesses, he/she shall be freed, and the other one shall be imprisoned for nine years. Now, if both of them confess, each of them shall be jailed for six years. In such a game, the Nash equilibrium is for both players to confess, shown in Figure 3 as (-6, -6).
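The sketch below applies the Nash condition of Eq. (3) by brute force to the Prisoner's Dilemma payoffs described above (years in prison written as negative payoffs); it is an illustrative check, not part of the original paper.

```python
# Brute-force search for pure-strategy Nash equilibria of the Prisoner's Dilemma.
from itertools import product

actions = ["confess", "stay_silent"]
# payoff[(a1, a2)] = (u1, u2), payoffs given as negative years in prison.
payoff = {
    ("stay_silent", "stay_silent"): (-1, -1),
    ("stay_silent", "confess"):     (-9,  0),
    ("confess",     "stay_silent"): ( 0, -9),
    ("confess",     "confess"):     (-6, -6),
}

def is_nash(profile):
    """Check inequality (3): no player gains by a unilateral deviation."""
    for player in (0, 1):
        for deviation in actions:
            alt = list(profile)
            alt[player] = deviation
            if payoff[tuple(alt)][player] > payoff[profile][player]:
                return False
    return True

equilibria = [p for p in product(actions, repeat=2) if is_nash(p)]
print(equilibria)  # [('confess', 'confess')], i.e. the (-6, -6) outcome
```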
GAME THEORY AND CYBER SECURITY
The recent proliferation of cyber-attacks, as well as identity thefts, has turned the Internet into a daunting place. Cyber-attacks have generated a global threat to the security of global as well as local networks [4]. They also threaten society, since communication and economic infrastructures primarily rely on information technology and computer networks [63].
Therefore, more effective defence strategies are a must for countering the threats caused by the rising cybersecurity concerns. In cybersecurity, game theory can help in knowing the response of the defender to the attacker and vice versa. A two-player game can be used to capture the strategic interaction between the attacker and defender in which both players try to maximize their interests. The strategy of the attacker is determined by the actions of the defender and vice versa. Therefore, a defence mechanism can be said to be valid on the basis of the strategic behaviours of the attacker and the defender. A tactical analysis can be performed by employing game theory for investigating the attacks from multiple nodes or a single node. As a result, game theory is essential for studying the strategic decision-making scenarios of defenders and analysing the attackers' incentives.
There are several aspects in which game-theoretic methods are better than the traditional approaches to cybersecurity and privacy, which have been discussed below [63]: (1) Well-timed action: Owing to the absence of incentives for the participants, traditional security solutions are adopted rather slowly. However, game-theoretic methods back the defenders by the employment of fundamental incentive mechanisms for allocating restricted resources to even out the risks perceived.
(2) Proven mathematics: Majority of the traditional security approaches that are implemented in reactive devices like anti-virus programs or preventive tools like firewalls are dependent only on heuristics. Nonetheless, game-theoretic approaches analyse the security decisions methodically with proven mathematics.
(3) Distributed solution:
The decision-making in conventional security solutions is centralized and not distributed (or individualized) in nature. Because of the absence of coordinators in autonomous systems, the centralized decision-making process is almost impossible in network security games. Thus, security solutions can be realized in a distributed way by making use of game theory.
(4) Reliable defence: The researchers can design defence strategies for robust and dependable cybersecurity systems against attacks (or selfish behaviours) based on the analytical results provided from game theory.
All the reasons as mentioned above, make game theory a suitable solution for cybersecurity problems. Nevertheless, the following issues need to be kept in mind while using game-theoretical methods for implementation in cyber systems: (1) Multi-layer protection: In the previous works reviewed, it is observed that the defender targets a particular defence mechanism and attempts to maximize its utility by adjusting suitable parameters. Nevertheless, the presence of multiple layers of defenders providing security against the attack, that is frequently realized in the current cyber systems, is overlooked. Thus, a fitting game-theoretic approach is needed to resolve how multiple-layer defenders can offer security against attacks while implementing the defending layers simultaneously and how the other layers can be enhanced.
(2) Rationality: Most of the game-theoretic methods utilized in cybersecurity emphasize on equilibrium strategies in the action profiles of the attackers and defenders. But it is not easy for the attacker or the defender to provide the best-response actions owing to constrained rationality (in terms of restricted resources or information) in real cyber systems [67]. Thus, suitable models like Quantal Response Equilibrium and Prospect Theory are essential for predicting the behaviour of the players [68,69]. Moreover, in the scenarios with multiple equilibria, it is not clear what the players shall select or if at all they agree to choose one.
(3) Implementation: When the defenders and attackers make decisions in real cyber systems, they take into account several factors that are real but uncertain like signal-to-noise ratio, the traffic produced in a typical network and/or the node power in wireless networks. Nevertheless, the defenders may not perceive the entire information accurately in realistic situations. Consequently, they should be able to study the environment and understand it. Besides, there are several game-theoretic methods that model security games as two-player games with multiple defenders or attackers being taken as a single entity. Two-player games are realistic models if multiple defenders or attackers have the same payoffs and strategies but might not be reasonable in a real system owing to the variety in the payoffs and strategies of defenders and attackers.
Game theory thus plays an integral part in acquiring an equilibrium strategy for surviving unpredicted interruptions and attacks arising from the interaction among users in cyber communication. Furthermore, in connection with cyber privacy, game theory also finds applications in information sharing, anonymity, confidentiality and cryptography.
CLASSIFICATION OF GAMES WITH THEIR APPLICATIONS IN NETWORK SECURITY
On the basis of perspectives, games can be categorized into several classes. The varied types of games have been discussed in this section below: Figure 4 represents the classification of various game models plus the security issues each class of games handles [70]. Co-operative games: In a cooperative game, every player enforces cooperative behaviour. Such games are between the coalitions of players rather than between two players only. Cooperative games thus accentuate group efficiency, equity and rationality [71].
Non Co-operative games: In non-cooperative games, every player exhibits selfish behaviour without taking into consideration the opponents. The chief goal of every player is gain in payoffs. Non-cooperative games accentuate individual optimal decisions and individual rationality. This game type is the research goal of contemporary game theory; notably, several cybersecurity issues are non-cooperative games [71]. Extensive/Dynamic Games: In dynamic games, every player has some knowledge about the behaviours of the rest of the players, unlike static games; besides, these are multi-stage games. The players in this game make decisions on the basis of the opponent's behaviour. Such games are sequential structures of the decision-making problems that the players of static games come across. The game sequences can be finite or infinite [58].
Complete Information Games:
The games in which every player within the game has comprehensive knowledge about the opponents' behaviour are said to be complete information games. Every player is completely aware of all the opponents' strategies.
Incomplete Information Games:
These are games in which at least one of the players has no information about the opponent players. As a result, the players may not be able to make perfect strategies for winning the game.
Perfect Information Games:
In this type of game, every player has the perfect knowledge of all the previous actions of the opponent player before making a move. Chess, go, and tic-tac-toe are examples of perfect information games [58].
Imperfect Information Games:
In such games, at least one player does not know the past actions of the opponent, which makes it difficult for that player to choose a move. Cybersecurity problems typically fall into this category.
Bayesian Game: In recreational games, the value of every outcome to every player is known because the game rules specify it. By contrast, when real-world strategic scenarios are modelled, each player may be uncertain about the value of various outcomes to the other players. Such uncertainty is naturally modelled using Bayesian games, in which every player is assigned a 'type' drawn from a probability distribution. Bayesian games can be transformed into standard form, but at the cost of an exponential growth in size [72]. This class of game is called Bayesian owing to the use of Bayesian analysis in anticipating the outcome [73].
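As a toy illustration of this idea (not taken from this article), the Python sketch below shows a defender who knows only a prior over two hypothetical attacker types and picks the defensive action with the highest expected payoff; all names and payoff numbers are invented.

```python
# Illustrative Bayesian-game sketch: the defender does not know the attacker's
# type, only a prior over types, and picks the action with the best expected payoff.
# All payoff numbers below are hypothetical.

prior = {"script_kiddie": 0.7, "apt": 0.3}            # prior over attacker types

# Defender payoff table: payoff[defender_action][attacker_type]
payoff = {
    "patch_only":     {"script_kiddie": 5, "apt": -10},
    "full_hardening": {"script_kiddie": 2, "apt":   4},
}

def expected_payoffs(payoff, prior):
    """Expected defender payoff of each action under the type prior."""
    return {
        action: sum(prior[t] * u for t, u in row.items())
        for action, row in payoff.items()
    }

if __name__ == "__main__":
    ev = expected_payoffs(payoff, prior)
    print(ev)                                   # {'patch_only': 0.5, 'full_hardening': 2.6}
    print("Best response to the prior:", max(ev, key=ev.get))
```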
Stochastic Game: These games are multi-player generalizations of Markov decision processes. In every state, each player chooses an action, and the resulting action profile determines both the instant reward to all players and the probabilistic transition to the next state [72]. Such games therefore progress as a series of states beginning from a start state: the players receive a payoff after selecting their actions in the current state, and the game then moves to a different state with a probability determined by the current state and the players' actions [74]. Stochastic games with just one state are the repeated games, in which the same normal-form game is played repeatedly.
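As an illustration only (the states, actions, rewards and transition probabilities below are hypothetical), the following sketch estimates the defender's discounted payoff in a tiny two-state security-flavoured stochastic game under fixed stationary strategies, using Monte Carlo simulation rather than an equilibrium computation.

```python
import random

# Illustrative two-state stochastic security game (all numbers hypothetical).
# In each state both players act simultaneously; the joint action gives the
# defender an instant reward and a probability of ending up "compromised".

# model[state][(defender_action, attacker_action)] = (defender_reward, P(next state is "compromised"))
model = {
    "secure": {
        ("monitor", "attack"): (-1, 0.2),
        ("monitor", "wait"):   ( 1, 0.0),
        ("idle",    "attack"): (-2, 0.6),
        ("idle",    "wait"):   ( 2, 0.0),
    },
    "compromised": {
        ("recover", "attack"): (-4, 0.7),
        ("recover", "wait"):   (-2, 0.3),
        ("idle",    "attack"): (-6, 1.0),
        ("idle",    "wait"):   (-5, 0.9),
    },
}

# Fixed stationary (state-dependent) strategies for both players.
defender_policy = {"secure": "monitor", "compromised": "recover"}
attacker_policy = {"secure": "attack", "compromised": "attack"}

def simulate(episodes=2000, horizon=50, gamma=0.95, seed=0):
    """Monte Carlo estimate of the defender's discounted payoff from the 'secure' start state."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        state, ret, discount = "secure", 0.0, 1.0
        for _ in range(horizon):
            actions = (defender_policy[state], attacker_policy[state])
            reward, p_comp = model[state][actions]
            ret += discount * reward
            discount *= gamma
            state = "compromised" if rng.random() < p_comp else "secure"
        total += ret
    return total / episodes

if __name__ == "__main__":
    print("Estimated defender payoff:", round(simulate(), 2))
```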
Repeated Game: As defined above, repeated games are an interaction in which the players play the same game repeatedly [64]. Also known as iterative games, a repeated game comprises many repeated stages, and at every stage the current actions influence the players' subsequent actions. Repeated games can be either finitely or infinitely repeated. A finitely repeated game has a fixed and known duration, which tends to make the players behave selfishly, and the equilibrium payoff collapses to the minmax payoff. The more widely studied type is the infinitely repeated game, in which the interaction continues for an indefinite period [59].
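The sketch below illustrates the repeated-game idea with the textbook iterated prisoner's dilemma payoffs (values are the standard textbook ones, not taken from this article), pitting a tit-for-tat strategy against unconditional defection; only when both players condition on history is mutual cooperation sustained.

```python
# Illustrative repeated game: an iterated prisoner's-dilemma-style interaction,
# with tit-for-tat facing unconditional defection. C = cooperate, D = defect.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history_self, history_other):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not history_other else history_other[-1]

def always_defect(history_self, history_other):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a / rounds, score_b / rounds   # average payoff per round

if __name__ == "__main__":
    print(play(tit_for_tat, always_defect))     # roughly (0.99, 1.04)
    print(play(tit_for_tat, tit_for_tat))       # (3.0, 3.0)
```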
Zero-sum Game:
It is a kind of non-cooperative game played between two players. One player is the maximizer, attempting to maximize its gain, and the other is the minimizer, attempting to keep its losses to a minimum. A zero-sum game can therefore be viewed as a one-side-win or two-side-conflict game in which the total payoff/utility of the players stays constant throughout the game, whatever the strategy profile [59].
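Finite zero-sum games can be solved exactly by linear programming, the same approach highlighted in the conclusion of this paper. The sketch below uses a hypothetical 2x2 payoff matrix and computes the row player's (defender's) maximin mixed strategy and the game value with SciPy.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative zero-sum matrix game solved by linear programming (hypothetical
# payoff matrix, not taken from the article). Rows = defender actions, columns =
# attacker actions; entries are the defender's payoff (the attacker gets the negative).
A = np.array([
    [ 2.0, -1.0],
    [-1.0,  1.0],
])
m, n = A.shape

# Variables: defender mixed strategy x (m entries) and game value v.
# Maximize v  <=>  minimize -v, subject to  (A^T x)_j >= v for all columns j,
# sum(x) = 1, and x >= 0.
c = np.concatenate([np.zeros(m), [-1.0]])
A_ub = np.hstack([-A.T, np.ones((n, 1))])        # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print("Defender's maximin strategy:", np.round(x, 3))   # ~[0.4, 0.6]
print("Game value:", round(v, 3))                       # ~0.2
```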
Non-zero-sum Game: This type of game can be played by multiple players, and the sum of the players' payoffs does not remain constant during the game [59]. All players may be maximizers or minimizers, without the constraint on the overall payoff that holds in zero-sum games; consequently, all the players can lose or gain together [64].
Evolutionary Game: Evolutionary games are primarily applied to biological networks, in which players may combine pure and mixed strategies in order to enhance the characteristics of the population [64]. Notably, evolutionary games have been used to model various wireless sensor network applications [59].
Stackelberg Game: Stackelberg games model two competing players in which one player, the leader, initiates the game by choosing an action from a set S1, and the other player, the follower, observes the leader's choice and then selects a move from a different set S2 [59]. This situation is typical when protecting wireless sensor networks, where the attacker acts as the follower and the defender plays the role of the leader [75,76].
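The sketch below illustrates the leader-follower structure on a hypothetical two-target security game: the defender (leader) commits to randomized coverage, the attacker (follower) best-responds, and the commitment is chosen here by a simple grid search (practical security-game solvers usually rely on LP/MILP formulations instead). All payoff values are invented.

```python
# Illustrative Stackelberg security game (all values hypothetical): one patrol
# resource is split probabilistically between two targets; the attacker observes
# the coverage and attacks the target maximizing its own expected payoff.

# Per-target payoffs:
# (defender if covered, defender if uncovered, attacker if covered, attacker if uncovered)
targets = {
    "web_server": (1, -5, -2, 4),
    "database":   (2, -8, -3, 6),
}

def follower_best_response(coverage):
    """Attacker picks the target with the highest expected attacker payoff."""
    def attacker_value(t):
        _, _, att_covered, att_uncovered = targets[t]
        c = coverage[t]
        return c * att_covered + (1 - c) * att_uncovered
    return max(targets, key=attacker_value)

def leader_value(coverage):
    t = follower_best_response(coverage)
    def_covered, def_uncovered, _, _ = targets[t]
    return coverage[t] * def_covered + (1 - coverage[t]) * def_uncovered

best = None
for k in range(101):                            # grid over coverage splits
    cov = {"web_server": k / 100, "database": 1 - k / 100}
    val = leader_value(cov)
    if best is None or val > best[0]:
        best = (val, cov)

print("Leader payoff:", round(best[0], 2), "with coverage", best[1])
```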
A distinctive category of games, called security games, analyses the interaction between defenders and malicious attackers. These games and their solutions form the basis for algorithm development and formal decision-making, as well as for predicting the behaviour of attackers. Security games find wide application to security issues such as intrusion detection and privacy in computer, wireless and vehicular networks [77].
GAME-THEORETICAL METHODS FOR CYBER SECURITY APPLICATIONS
From the perspective of game-theoretic applications in cybersecurity, there are six categories of security applications:
Physical layer security
It is an evolving area of security. Eavesdropping and jamming attacks are commonplace on the communication channels of networks [70], and these attacks are more alarming for wireless networks than for wired networks.
In this regard, the authors in [78] introduced a game-theoretic model for examining the interaction between a source transmitting valuable data and several available jammers who aid the source in confusing the eavesdroppers. The jammers charge the source for their service, so there is a price trade-off. The authors formulate a Stackelberg game and propose a distributed algorithm for computing the outcome of the game, demonstrating the effectiveness of the price trade-off and the friendly jamming service.
Self-organized network security
A specific application of game theory is the design of security protocols for self-organized networks such as mobile ad hoc networks (MANETs), wireless sensor networks and vehicular networks. Owing to the relatively static configuration and homogeneous architecture of self-organized networks, the behaviour of the network tends to resemble that of a rational decision-maker, making it consistent with the assumptions of game theory.
Several earlier works consider only two players in their game models when applying game theory to security, which may not be practical for MANETs, which have no centralized administration. In scenarios with multiple players, mean-field game theory is a robust mathematical tool. In [79], the authors employ recent developments in mean-field game theory to propose a new distributed, game-theory-based approach to MANET security with multiple players. This approach allows an individual MANET node to make strategic defence decisions, and each node needs only knowledge of its own state together with the aggregate effect of the remaining nodes in the MANET.
Intrusion detection and prevention
Intrusion detection is one of the security research areas where game theory has been applied most broadly, owing to its inherent attack-and-defence character [70]. By studying game models, it is possible to optimize the distributed design as well as the security configuration of intrusion detection systems. In this regard, puzzle-based defence strategies have been put forth against flooding attacks. An automated intrusion response engine, referred to as the Response and Recovery Engine (RRE), has been introduced by the authors in [80]. The engine uses game-theory-based response strategies against intruders, modelled as opponents in a two-player stochastic Stackelberg game in which the RRE and the attacker each attempt to maximize their own benefit by anticipating the response actions of the other. Many other works use game theory for intrusion detection and prevention: a collaborative, incentive-based game-theoretic method for intrusion detection is designed in [81]; a game-theoretic model for detecting cooperative intrusions spread over multiple packets is proposed in [82]; a protracted Dirichlet-based collaborative IDS built on game theory is given in [83]; and the authors in [84] present a game-theoretic approach for dynamically configuring large-scale intrusion detection signatures.
Privacy preservation and anonymity
From the perspective of game theory, users can evaluate their privacy and inspect various strategies for their desired privacy-level setting. Game theory can be beneficial for analysing privacy preservation economically and for finding the best compromise between performance and privacy. Many effective location-based services bring convenience to users, but at the cost of their privacy. The authors in [85] were the first to put forward an optimal mechanism that provides location-based service while preserving user privacy. Stackelberg Bayesian games are used to model the joint optimization of localization accuracy versus location privacy, and the methodology is shown to outperform a direct obfuscation mechanism. Furthermore, the interaction among data collectors, data users and data providers has been modelled in [86] using a game, together with a general methodology for finding the Nash equilibrium.
Economics of cyber security
Since game theory was initially established within the theoretical framework of economics, it applies naturally to the economics of cybersecurity. Various standard economic models and theories apply to the economic perspective of security, such as security policy making, security incentives and security investment. Securing the network infrastructure against attacks is essential, because attacks on high-speed data links can cause delays or large-scale data loss.
In [87,88], the authors study incentive mechanisms in network security by investigating the network externality produced by the price of anarchy (PoA) and selfish investment behaviour. They show that network security can be improved dramatically by enhancing the incentive mechanisms for cybersecurity investment rather than by improving security preservation methods alone.
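To make the externality and price-of-anarchy ideas concrete, the sketch below analyses a hypothetical two-firm interdependent-security game (all costs and probabilities are invented): it enumerates the pure Nash equilibria and compares the worst equilibrium's total cost with the socially optimal cost.

```python
from itertools import product

# Illustrative interdependent-security game: each of two firms decides whether to
# invest in security; a firm's breach probability depends on both decisions, so one
# firm's underinvestment imposes a negative externality on the other.

COST_INVEST = 3.0
LOSS = 10.0
# Breach probability given (own investment, other's investment)
P_BREACH = {(1, 1): 0.1, (1, 0): 0.6, (0, 1): 0.5, (0, 0): 0.8}

def player_cost(own, other):
    return COST_INVEST * own + LOSS * P_BREACH[(own, other)]

def is_nash(profile):
    """Pure Nash equilibrium: no player can lower its own cost by deviating alone."""
    a, b = profile
    return (player_cost(a, b) <= player_cost(1 - a, b) and
            player_cost(b, a) <= player_cost(1 - b, a))

profiles = list(product((0, 1), repeat=2))
total_cost = {p: player_cost(p[0], p[1]) + player_cost(p[1], p[0]) for p in profiles}
equilibria = [p for p in profiles if is_nash(p)]

social_opt = min(total_cost.values())
worst_equilibrium = max(total_cost[p] for p in equilibria)
print("Nash equilibria:", equilibria)                    # [(0, 0), (1, 1)]
print("Price of anarchy:", worst_equilibrium / social_opt)  # 2.0
```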
Cloud computing security
Cloud computing is a thriving industry and a well-known paradigm of information processing. Still, its security problem is complicated by the use of varied infrastructure elements in every service model [70]. Conventional security mechanisms are not well suited to cloud computing because of novel cloud concepts such as shared infrastructure: different public cloud users share a common platform, for example the hypervisor. This shared platform intensifies the well-known problem of cybersecurity interdependency, in which a user who does not invest in cybersecurity imposes a negative externality on others. This is one of the reasons that many large organizations with sensitive information have been reluctant to join a public cloud.
A framework known as the FlipIt game has been put forward by the authors in [10] for cloud security; it provides guidance on when devices should trust the cloud hypervisor's commands, modelling this interaction as a game whose players are the device, the attacker and the defender. A scalable game-theoretic security risk assessment model for cloud computing is proposed in [89], which assesses whether system risks should be fixed by the tenant or by the cloud provider. The authors in [90] model the problem of cloud security transparency as a non-cooperative dynamic game in which the client and the provider are the players, and present a theoretical analysis allowing the client or provider to compute the best strategy for reaching the Nash equilibrium.
BLENDING GAME THEORY AND CRYPTOGRAPHY
Both game theory and cryptographic protocols deal with the analysis of interactions among mutually distrustful parties. Historically, the two fields evolved independently within different research communities and therefore have different flavours. Nevertheless, there has been growing interest in merging their approaches and techniques, inspired by the desire to develop more pragmatic protocols and models of such interactions. Current research at the intersection of cryptography and game theory falls into two categories: applying game-theoretic models to cryptographic protocol design, and applying cryptographic protocols to game-theoretic problems.
Conventionally, cryptographic protocols are designed under the assumption that some participants are honest and follow the protocol faithfully while others are malicious and may act arbitrarily. The game-theoretic view, by contrast, is that every participant is rational and acts in its own best interest. From this perspective the protocol need not guard against irrational behaviour, although no participant can be trusted to follow the protocol unless doing so is in its own best interest.
In general, cryptography focuses on guaranteeing that the parties continue to use the authorized service, and the game-theoretic approach shares this goal: game-theoretic methods are used to devise incentive mechanisms that deter deviation. An important motive for applying game-theoretic methods in cryptography is modelling the malicious behaviour of users; malicious actions are not only harder to control than rational actions, it is also common and realistic for some parties to deviate from the cryptographic protocol. To enhance the security and efficiency of cryptographic protocols, researchers have proposed many new notions and approaches, for instance socio-rational secret sharing, the use of Perfect Bayesian Equilibrium (PBE), and the use of point-to-point channels instead of trusted mediators. Game-theoretic methods have also been applied in steganography, where they have proved ideal for assessing design choices such as the distribution of payload in batch steganography or distortion functions in adaptive steganography. All of these approaches demonstrate the substantial benefit that blending cryptography and game theory brings to the design of defence mechanisms.
FUTURE RESEARCH DIRECTIONS
Possible future research directions of game-theoretic approaches for cybersecurity and privacy may consist of several emerging areas as follows:
Social media
In recent times, social media sites such as Twitter and Facebook have emerged as important channels of communication. These sites can be exploited by attackers as a new medium for conducting insidious attacks and, owing to the massive amounts of centralized user data, increased attention should be paid to privacy protection. Applying game theory to social media can help identify the objectives of social media users and how they pursue them. Based on formalized utilities of security policies and security rules, the choice of security policies for content access can be described as a game between the content provider and the content requester.
Cloud computing
Debatably, cloud computing is one of the most important technological shifts of recent times. Nevertheless, moving data to the cloud poses various challenges to security and privacy. Game theory is a promising mathematical framework for analysing the causes and effects of privacy and integrity issues in cloud computing [89]. The authors in [91] put forward methods for reacting automatically to an opponent's behaviour to secure the system through Q-learning. After comparing Q-learning with the conventional stochastic game, they present simulation results confirming naive Q-learning as a potential approach to confrontations with limited information about adversaries.
Bitcoin
Bitcoin is a decentralized electronic currency implemented through peer-to-peer technology and cryptography. Designing secure and effective mining methods is one of the challenges Bitcoin faces, and mining can be modelled as a game among all users through game theory.
Embedded security
Malware attacks can occur at a single point in the hardware, which is the most advantageous entity to compromise since it provides the ability to manipulate a computing system. Attackers secretly and deliberately modify electronic devices such as integrated circuits to create hardware Trojans. Game theory, as the mathematical treatment of conflict, can therefore be used to strategically guide microcircuit testing to balance the risk posed by hardware Trojans.
Cyber-insurance
Risk management techniques for improving cybersecurity are promising solutions with economic benefits for security software vendors, users and policymakers. Game-theoretic approaches have been employed in various cyber-insurance research works to model the interactions among the players of the cyber-insurance market, where the purchase of cyber-insurance is treated as a defence mechanism. Although game theory can assist in designing mechanisms that incentivize insurers, more methods are required to answer the questions cyber-insurance raises for improving cybersecurity and privacy.
Internet-of-Things (IoT)
The IoT seamlessly connects numerous smart devices integrated in networks to offer services in every aspect of human life. Because it is susceptible to varied attacks, the research community needs to emphasize both the privacy and the security of IoT applications. With the rapid development of the technologies making up the IoT, the new and significant security issues arising in IoT are promising topics for game-theoretic methods.
Device-to-Device communications
Device-to-Device (D2D) communication is emerging as a new trend in both industrial and academic communities owing to its low energy consumption and high throughput, and it is expected to be a key feature of next-generation cellular networks. D2D can extend cellular coverage, allowing users to communicate when the telecommunications infrastructure is highly congested or absent. Various topics relevant to D2D communication, such as secure transmission, improved quality of service and energy efficiency, can be addressed using utility-maximization game frameworks.
CONCLUSION
Game theory has been found to play a vital role in numerous security situations and has been extensively applied in cybersecurity. Recent research demonstrates the effective use of game theory in web security, network security and related areas. Game theory allows the defender to assess security quantitatively and predict security outcomes, in addition to offering a mechanism-design tool that enables security-by-design and reverses the advantage of the attacker. Games can be analysed and designed, and players' optimal moves can be used to determine how effectively security can be approached in the cyber world. However, an essential game-theoretic issue is the ability to find a feasible mathematical solution to the game problem. More systematic solutions to cybersecurity problems using game theory, involving practical mathematical solutions, are recommended. One such approach currently being studied by the research community is the use of linear programming; furthermore, integer programming can be explored to offer a more pragmatic solution to distributed denial-of-service attacks in the cyber world.
"year": 2020,
"sha1": "873d02dee109a215dbe67514b945f8cba1bbe990",
"oa_license": "CCBY",
"oa_url": "http://section.iaesonline.com/index.php/IJEEI/article/download/1810/497",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3f4656c1e3cabc5a795d18c2f5f91a234d13f09c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Voluntary Wheel Running Partially Attenuates Early Life Stress-Induced Neuroimmune Measures in the Dura and Evoked Migraine-Like Behaviors in Female Mice
Migraine is a complex neurological disorder that affects three times more women than men and can be triggered by endogenous and exogenous factors. Stress is a common migraine trigger and exposure to early life stress increases the likelihood of developing chronic pain disorders later in life. Here, we used our neonatal maternal separation (NMS) model of early life stress to investigate whether female NMS mice have an increased susceptibility to evoked migraine-like behaviors and the potential therapeutic effect of voluntary wheel running. NMS was performed for 3 h/day during the first 3 weeks of life and initial observations were made at 12 weeks of age after voluntary wheel running (Exercise, -Ex) or sedentary behavior (-Sed) for 4 weeks. Mast cell degranulation rates were significantly higher in dura mater from NMS-Sed mice, compared to either naïve-Sed or NMS-Ex mice. Protease activated receptor 2 (PAR2) protein levels in the dura were significantly increased in NMS mice and a significant interaction of NMS and exercise was observed for transient receptor potential ankyrin 1 (TRPA1) protein levels in the dura. Behavioral assessments were performed on adult (>8 weeks of age) naïve and NMS mice that received free access to a running wheel beginning at 4 weeks of age. Facial grimace, paw mechanical withdrawal threshold, and light aversion were measured following direct application of inflammatory soup (IS) onto the dura or intraperitoneal (IP) nitroglycerin (NTG) injection. Dural IS resulted in a significant decrease in forepaw withdrawal threshold in all groups of mice, while exercise significantly increased grimace score across all groups. NTG significantly increased grimace score, particularly in exercised mice. A significant effect of NMS and a significant interaction effect of exercise and NMS were observed on hindpaw sensitivity following NTG injection. Significant light aversion was observed in NMS mice, regardless of exercise, following NTG. Finally, exercise significantly reduced calcitonin gene-related peptide (CGRP) protein level in the dura of NMS and naïve mice. Taken together, these findings suggest that while voluntary wheel running improved some measures in NMS mice that have been associated with increased migraine susceptibility, behavioral outcomes were not impacted or even worsened by exercise.
INTRODUCTION
Migraine is a neurological disorder that presents as throbbing cranial pain, sensitivity to light (photophobia) and sound (phonophobia), nausea, fatigue, irritability, muscle tenderness, and cutaneous allodynia (Dodick, 2018). Migraine is often triggered or exacerbated by stress (Spierings et al., 2001; Nash and Thebarge, 2006) and stress experienced early in life is associated with increased susceptibility to developing migraine in adulthood (Felitti et al., 1998; Anda et al., 2010; Brennenstuhl and Fuller-Thomson, 2015). As such, abnormal activity of the hypothalamic-pituitary-adrenal (HPA) axis, the main stress response system of the body, has been detected in chronic migraine patients (Patacchioli et al., 2006; Rainero et al., 2006; Juhasz et al., 2007). Administration of nitroglycerin (NTG), a known migraine trigger, significantly increased plasma cortisol levels in migraineurs, compared to healthy controls (Juhasz et al., 2007). Mast cells, which are innate immune cells found in close approximation to sensory nerve endings in the dura, express receptors for corticotropin-releasing factor (CRF), the main stress hormone released by the hypothalamus. Activation of mast cells and their subsequent release of histamine, tryptase, and cytokines can sensitize dural nociceptors and evoke their release of vasoactive neuropeptides, such as calcitonin gene-related peptide (CGRP), which has been shown to contribute to migraine pathology (Theoharides et al., 2005).
Animal studies have shown evidence of stress worsening migraine-like behavior and outcomes. Chronic restraint stress induced thermal hyperalgesia that was further exacerbated by NTG injection (Costa et al., 2005), while a 14-day social defeat stress (SDS) paradigm or a 40-day chronic variable stress (CVS) paradigm had similar impacts on increasing hindpaw mechanical allodynia following NTG injection (Kaufmann and Brennan, 2018). Repeated restraint stress induced facial mechanical allodynia and increased grimace scores in rats and also resulted in transient hyperalgesic priming (Avona et al., 2020). Pretreatment with polyclonal antiserum to CRF significantly reduced mast cell degranulation following restraint stress (Theoharides et al., 1995) and pretreatment with a glucocorticoid receptor antagonist blocked an increase in cortical spreading depression in a transgenic mouse model of familial hemiplegic migraine following treatment with corticosterone (Shyti et al., 2015). A model of secondary traumatic stress during the neonatal period is the only published early life stress model used to study migraine, which has shown increased expression of CGRP, signal transduction proteins, and glial fibrillary acidic protein in the spinal trigeminal nucleus (Hawkins et al., 2018) and increased facial allodynia following exposure to a pungent odor (Peterson et al., 2020).
Although strenuous physical activity is a known migraine trigger, being physically inactive is associated with migraine, and submaximal aerobic exercise can reduce the frequency of episodes and improve quality of life in migraineurs (Daenen et al., 2015). Exercise-based improvements in migraine are generally associated with increased serotonin and endogenous opioid levels; however, exercise-induced improvements in common comorbid mood disorders, such as anxiety and depression, can be, at least partially, attributed to normalizing output from the HPA axis (Hearing et al., 2016). Our model of early life stress in mice, using neonatal maternal separation (NMS), demonstrates urogenital hypersensitivity, increased mast cell (MC) degranulation in the affected organs, and reduced expression of stress-related regulatory genes in the hypothalamus and hippocampus, which is a major inhibitory regulator of the HPA axis (Pierce et al., 2014, 2016; Fuentes et al., 2017). Voluntary wheel running attenuated many of these NMS-related outcomes and increased brain-derived neurotrophic factor (BDNF) expression and neurogenesis in the hippocampus (Pierce et al., 2018; Fuentes et al., 2020). Here, we are using our NMS model to determine if early life stress exposure in mice can increase susceptibility to evoked migraine-like behaviors. We are also testing the impact of voluntary wheel running on molecular and behavioral outcomes related to migraine. Although migraine can affect both sexes, we carried out these studies in female mice as women comprise the majority of migraineurs (Burch et al., 2018).
Animals
All experiments were performed on female C57Bl/6 mice (Charles River, Wilmington, MA, United States) born and housed in the Research Support Facility at the University of Kansas Medical Center. Mice were housed at 22°C on a 12-h light cycle (600-1800 h) and received water and food ad libitum. All research was approved by the University of Kansas Medical Center Institutional Animal Care and Use Committee in compliance with the National Institute of Health Guide for the Care and Use of Laboratory Animals. No attempts were made to control or track the estrus cycle of the mice to avoid potentially confounding stressors and our previous studies have shown no impact of cycle stage on other outcomes related to NMS exposure (Pierce et al., 2014, 2016, 2018).
Neonatal Maternal Separation
Pregnant C57Bl/6 dams at 14-16 days gestation were ordered from Charles River and housed at the Department of Laboratory Animal Resources at the University of Kansas Medical Center. Litters were divided equally into NMS and naïve groups. NMS pups were removed as whole litters from their home cage for 180 min (11 am-2 pm) daily beginning at postnatal day 1 (P1) until P21. During separation, pups were placed in a clean glass beaker with bedding from their home cage. The beaker was placed in an incubator maintained at 33°C and 50% humidity. Naïve mice remained undisturbed in their home cage except for normal animal husbandry. Entire litters were designated as naïve or NMS to avoid excess stress exposure to the naïve mice. All mice were weaned on P22 and housed 2-5/cage with same sex litter mates and ad libitum access to food and water. All litters also contained male pups, which were similarly handled, but not investigated in this study. Three different cohorts of only female mice were used in the following experiments and are depicted on a timeline in Figure 1.
Exercise
At either 4 or 8 weeks of age, NMS and naïve mice were equally divided into exercised (Ex) and sedentary (Sed) groups. Mice at 4 weeks of age were pair-housed and mice at 8 weeks of age were singly housed in a cage with free access to a stainless-steel running wheel (STARR Life Sciences Corp., Oakmont, PA, United States). Pair-housed mice had constant access to the running wheel and were observed running on the wheel simultaneously. Sed mice remained pair- or group-housed (2-5/cage) in their home cage with no access to a running wheel. Distance run was recorded by STARR Life Sciences VitalView Activity Software version 1.1.
Dural Mast Cell Staining
Mice were overdosed with inhaled isoflurane (<5%) and intracardially perfused with ice-cold 4% paraformaldehyde (PFA). Dura was removed from the skull and post-fixed in 4% PFA at 4°C for 1 h and cryopreserved in 30% sucrose in phosphate buffered saline (PBS). Dura was whole mounted on a glass microscope slide and stained for 10 min with 1% toluidine blue (TB) solution acidified with 1 M HCl.
Slides were allowed to dry for 2 h in a 37°C oven, washed in 95% then 100% EtOH, fixed in xylene, and cover slipped. Using a light microscope (Nikon Eclipse 90i, Nikon Instruments, Inc., Melville, NY, United States), 10 non-adjacent images of each dura sample were taken (QIClick digital CCD camera, QImaging, Surrey, BC, Canada). The total number and the number of degranulated mast cells were counted in each field (800 µm² per field) per tissue. The percent of degranulated mast cells was quantified using the following equation: (degranulated mast cells/total mast cells) × 100.
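A minimal sketch of this calculation (not the authors' analysis code) is given below; it assumes that per-field counts are pooled within a sample before applying the percentage formula, and the counts shown are placeholders.

```python
# Sketch of the degranulation calculation described above: per-field counts are
# pooled per dura sample before applying (degranulated / total) x 100.

def percent_degranulated(fields):
    """fields: list of (degranulated, total) mast cell counts, one pair per image field."""
    degranulated = sum(d for d, _ in fields)
    total = sum(t for _, t in fields)
    return 100.0 * degranulated / total if total else float("nan")

example_fields = [(3, 12), (5, 15), (2, 9)]   # placeholder counts; 10 fields were used per sample
print(round(percent_degranulated(example_fields), 1))   # 27.8
```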
Western Blots
Mice were overdosed with inhaled isoflurane (<5%) and dura was removed and flash frozen in liquid nitrogen before storage at −80°C. Dural protein was isolated using Cell Extraction Buffer containing Halt protease and phosphatase inhibitors (Thermo-Fisher Scientific, Waltham, MA, United States) and Na3VO4. Protein concentrations were determined using a Pierce BCA assay (Thermo-Fisher Scientific, Waltham, MA, United States). Samples were reduced by heating to 95°C in the presence of 2-mercaptoethanol, subjected to SDS-PAGE (Criterion 4-12% Bis-Tris gels; Bio-Rad Laboratories), and transferred to nitrocellulose membrane (Whatman GmbH, Dassel, Germany) by Criterion Blotter wet transfer (Bio-Rad).

FIGURE 1 | Representative timeline of evoked migraine experiments and outcome measures. All groups consisted of mice that underwent neonatal maternal separation (NMS) from postnatal day (P) 1 to 21 (depicted as the red shaded period) and non-separated, naïve mice. Mice in Cohort A were singly-housed with running wheels at 8 weeks of age and sacrificed at 12 weeks of age. Dura mater was collected to evaluate mast cell (MC) characteristics and measure protein level using Western Blot. Mice in Cohort B were pair-housed with running wheels at 4 weeks of age. At 8 weeks of age, they received either inflammatory soup (IS) or saline applied to the dura mater and, 1 h later, mouse grimace score (MGS) was measured over 1 h. Forepaw mechanical withdrawal threshold was measured immediately after cessation of MGS. At 40 weeks of age, Cohort B mice received a single intraperitoneal (IP) injection of nitroglycerin (NTG) and 30 min later were assessed for MGS over 1 h, immediately followed by hindpaw mechanical withdrawal threshold. Mice in Cohort C were also pair-housed with running wheels at 4 weeks of age and assessed for MGS, without dural stimulation of NTG, at 12 weeks of age. The mice were given an IP NTG injection at 40 weeks, followed 30 min later by photophobia testing and sacrifice. The level of corticosterone was measured in the serum and the level of calcitonin gene-related peptide (CGRP) was measured in the serum, dura, and trigeminal ganglia (TG).
Dural Injection
Under inhaled isoflurane, a modified cannula injector (Plastics One) was inserted at the lambdoidal suture without penetrating the dura (Burgos-Vega et al., 2019). About 10 µl of inflammatory soup (IS; 1 mM histamine, 1 mM 5-hydroxytryptamine, 1 mM bradykinin, and 0.1 mM prostaglandin E2 in PBS) or saline was slowly dispersed onto the dura and the injector was removed.
NTG Injection
Mice received an intraperitoneal (IP) NTG or saline injection at 10 mg/kg.
Mouse Grimace Scale
The mouse grimace scale (MGS) is a measure of facial expressions indicating spontaneous pain-like behavior in mice. It is a set of five behaviors: orbital tightening, nose bulge, cheek bulge, ear position, and whisker change. Each facial expression is rated as "not present (scored 0), moderate (scored 1), or severe (scored 2)" and then an overall score is assigned (Langford et al., 2010). Mice were placed on top of a wire mesh screen elevated 55 cm above a table and enclosed under an overturned 500 ml glass beaker. Behavior was video recorded for 1 h. Facial screen shots were then taken from the videos every 5 min. Photographs of each mouse were randomized and grimace score was assigned to each picture according to a modified version of the MGS established by Langford et al. (2010). Only orbital tightening and ear position were scored due to the difficulty of scoring the other features (nose bulge, cheek bulge, and whisker change) on a C57Bl/6 mouse. An average grimace score from two blinded investigators was quantified for each mouse.
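The sketch below shows one plausible way to reduce the two raters' frame-by-frame scores to a single grimace score per mouse, assuming the per-frame score is the mean of the two scored features; it is illustrative only (not the authors' code) and the frame values are invented.

```python
# Hypothetical grimace-scoring arithmetic: two blinded raters score orbital
# tightening and ear position (each 0-2) for every 5-min screenshot, and a single
# average grimace score is produced per mouse.

def mouse_grimace_score(rater1_frames, rater2_frames):
    """Each argument: list of (orbital, ear) score tuples, one per screenshot."""
    def rater_mean(frames):
        per_frame = [(orbital + ear) / 2 for orbital, ear in frames]
        return sum(per_frame) / len(per_frame)
    return (rater_mean(rater1_frames) + rater_mean(rater2_frames)) / 2

r1 = [(1, 0), (1, 1), (2, 1)]   # invented example frames
r2 = [(1, 1), (0, 1), (2, 2)]
print(round(mouse_grimace_score(r1, r2), 2))   # 1.08
```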
Paw Sensitivity
Before testing for paw mechanical sensitivity, mice were acclimated to a sound-proof room and the testing table for 2 days before the day of testing. This acclimation consisted of 30 min within the sound-proof room followed by 30 min inside a clear plastic chamber (11cm × 5cm × 3.5 cm) on a wire mesh screen elevated 55 cm above a table. Paw mechanical withdrawal threshold was measured using a standard set of graded von Frey monofilaments (1.65, 2.36, 3.22, 3.61, 4.08, 4.31, 4.74 g; Stoelting, Wood Dale, IL, United States) following the up-down method. The 3.22 g monofilament was used to apply force to one paw. If there was no response, the next larger grade of monofilament was used on the next round of application.
If there was a positive response (e.g., raising of the paw from the table, licking paw), the next smaller grade of monofilament was used on the next round of application. After the first positive response, the up-down method was continued on alternating paws for four more applications with a minimum of five or a maximum of nine applications. The withdrawal threshold of each mouse was then quantified as a 50% g threshold for each mouse (Chaplan et al., 1994).
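The filament-selection logic described above can be sketched as follows (an illustration, not the authors' code); note that the final 50% g threshold additionally requires the tabulated k values of the Dixon/Chaplan up-down method and is not computed here.

```python
# Sketch of the up-down testing sequence described above: testing starts at the
# 3.22 g filament, steps up after a negative response and down after a positive
# one, continues for four applications past the first positive response, with a
# minimum of five and a maximum of nine applications overall.

FILAMENTS = [1.65, 2.36, 3.22, 3.61, 4.08, 4.31, 4.74]

def up_down_sequence(responds):
    """responds(filament) -> True on paw withdrawal. Returns applied filaments and responses."""
    idx = FILAMENTS.index(3.22)
    applied, responses = [], []
    first_positive = None
    while True:
        filament = FILAMENTS[idx]
        positive = responds(filament)
        applied.append(filament)
        responses.append(positive)
        if positive and first_positive is None:
            first_positive = len(applied)          # position of the first withdrawal
        done = first_positive is not None and len(applied) >= first_positive + 4
        if len(applied) >= 9 or (done and len(applied) >= 5):
            break
        idx = max(idx - 1, 0) if positive else min(idx + 1, len(FILAMENTS) - 1)
    return applied, responses

# Example with a deterministic "mouse" that withdraws to 3.61 g and above.
applied, responses = up_down_sequence(lambda f: f >= 3.61)
print(list(zip(applied, responses)))
```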
Light Aversion Behavior
A modified force plate actimeter was used to assess light aversion behavior. An opaque insert with an opening in the middle was placed in the center of the box on the actimeter. This allowed the mice to move freely between both sides. The light side of the chamber was equipped with lights affixed to the ceiling and controlled by a single dimmer with low, medium, and high settings. The high setting was used and the intensity on the light side was 950 +/− 20 lux, while the intensity of the light on the dark side was less than 5 lux. The light level in the home cage was 170 +/− 20 lux. Two mice could be tested simultaneously in a 20 min session with a maximum of 15 mice tested in 1 day between 0800 and 1200 h, during the light cycle. Mice were tested 6 and 3 days before the day of NTG treatment to establish a baseline. Mice were always acclimated to the room for 1 h before testing. During the testing period, mice were placed in the lit compartment facing away from the opening and allowed to freely move between the light and dark compartments for 20 min. Movement was recorded using FPARun software (Bioanalytical Systems Inc. West Layfette, IN, United States).
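A hedged sketch of how the two reported measures, percent time in the light and distance traveled, could be derived from a sampled position trace is shown below; the sampling rate and the trace itself are invented, and this is not the authors' analysis code.

```python
# Illustrative calculation of the light/dark box measures from a timestamped trace
# of which compartment the mouse occupied and the per-sample path length.

def light_dark_summary(samples, dt=0.1):
    """samples: list of (in_light: bool, step_distance_cm) records taken every dt seconds."""
    time_in_light = sum(dt for in_light, _ in samples if in_light)
    total_time = dt * len(samples)
    distance = sum(step for _, step in samples)
    return 100.0 * time_in_light / total_time, distance

# 20 min of fake data: the mouse spends the first quarter of samples in the light.
n = 12000                                   # 20 min at 10 samples/s
trace = [(i < n // 4, 0.5) for i in range(n)]
pct_light, dist_cm = light_dark_summary(trace)
print(round(pct_light, 1), "% time in light,", dist_cm / 100, "m traveled")
```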
Enzyme-Linked Immunosorbent Assay
Immediately after light aversion testing, mice were overdosed with inhaled isoflurane (>5%) and trunk blood, dura, and TG were collected. Blood was allowed to clot for 1 h on ice and centrifuged at 10,000 rpm for 10 min. Serum was then collected and stored at −20°C until analysis. Dura and TG were immediately frozen in liquid nitrogen and stored at −80°C until total protein was isolated using sonication in RIPA buffer containing protease and phosphatase inhibitors. Serum corticosterone was quantified using an ELISA kit according to the manufacturer's instructions (ALPCO, Salem, NH, United States). Serum, dura, and TG CGRP were also quantified using an ELISA kit according to the manufacturer's instructions (MyBioSource, San Diego, CA, United States).
Statistical Analysis
Calculations of the measurements described above were made in Excel (Microsoft, Redmond, WA, United States) and statistical analyses were performed using GraphPad Prism 8 (GraphPad, La Jolla, CA) or IBM SPSS Statistics 26 (IBM Corporation, Armonk, NY, United States). Differences between groups were determined by non-repeated or repeated-measure mixed-model ANOVA and Fisher's least significant difference (LSD) posttest, as indicated in the figure legends. Statistical significance was set at p < 0.05.
Dural Mast Cell Characteristics
Initial observations were made in naïve and NMS mice that were caged under sedentary conditions (-Sed) or received free access to a running wheel from 8 to 12 weeks of age (-Ex). During this time, naïve and NMS mice ran similar distances per week (naïve: 7.4 km ± 0.93; NMS: 8.0 km ± 0.72; p = 0.77, mixed-effects model). Mast cells in the dura were visualized using toluidine blue, and the total number and the percent that were degranulated were calculated (Figure 2). There was a non-significant increase in the number of mast cells counted in the NMS-Sed dura (Figure 2A). However, a significant overall NMS/exercise interaction effect was observed on dural mast cell degranulation, such that NMS-Sed mice had a significantly higher degranulation rate compared to either naïve-Sed or NMS-Ex mice (Figure 2B).
Protease-Activated Receptor 2 and Transient Receptor Potential Ankyrin 1 Protein Levels in the Dura
The protein levels of PAR2 and TRPA1 in the dura from sedentary and exercised naïve and NMS mice were measured using Western blot (Figure 3; Supplementary Figure 1). NMS significantly increased PAR2 protein levels in the dura ( Figure 3A) with no significant impact of exercise. A significant interaction effect of NMS and exercise was observed for TRPA1 protein levels in the dura with NMS-Ex mice having significantly lower TRPA1 levels compared to naïve-Ex mice ( Figure 3B).
Mouse Grimace Score and Forepaw Mechanical Withdrawal Thresholds After Direct Dural Application of Inflammatory Soup
Pair-housed mice that had access to running wheels beginning at 4 weeks of age were used for the remainder of the study.
As previously reported (Pierce et al., 2018;Fuentes et al., 2020), we did observe a slight, but non-significant decrease in weekly running distance in NMS mice compared to naïve (naïve: 7.2 km ± 0.49; NMS: 6.1 km ± 0.43, p = 0.13, two-way RM ANOVA). IS or saline was applied directly to the dura, via a modified canula, in sedentary and exercised naïve and NMS mice. Around 1 h later, mice were evaluated for mouse grimace score (MGS), followed by forepaw mechanical sensitivity (Figure 4). An overall significant effect of exercise was observed on MGS score, such that the MGS score in naïve-Ex-Saline, NMS-Ex-Saline, and NMS-Ex-IS mice was significantly higher compared to their sedentary counterparts (Figure 4B). Dural application of IS significantly lowered forepaw withdrawal thresholds across all groups (Figure 4C).
Mouse Grimace Score and Hindpaw Mechanical Withdrawal Threshold Following Nitroglycerin Injection
Sedentary and exercised naïve and NMS mice received an intraperitoneal (i.p.) injection of either saline or NTG (10 mg/kg) and were assessed for MGS 30 min later, followed by hindpaw mechanical sensitivity (Figure 5). NTG significantly increased MGS across all groups, specifically in naïve-Ex-NTG mice, which had a significantly higher MGS score compared to naïve-Sed-NTG, naïve-Ex-saline, and NMS-Ex-NTG mice ( Figure 5B). NMS-Ex-NTG mice also had a significantly higher MGS score compared to NMS-Ex-saline mice ( Figure 5B). A significant overall effect of NMS and a NMS/exercise interaction was observed on hindpaw withdrawal thresholds ( Figure 5C). Naïve-Sed-saline mice had a significantly higher withdrawal threshold than naïve-Ex-saline and NMS-Sed-saline mice ( Figure 5C). Although there were no significant differences between the NTG groups, the NMS-Sed-NTG mice had a lower withdrawal threshold than naïve-Sed-NTG (p = 0.067) and NMS-Ex-NTG mice (p = 0.071). It should also be noted that our lack of significance could be due to the low number of animals per group in this set of behavioral experiments.
Mouse Grimace Score in Mice With No Dural Stimulation
Due to the consistent observation that exercised mice had increased MGS scores after dural application of either IS or saline and after NTG, the next group of mice was assessed for MGS following light isoflurane anesthesia. In the absence of dural stimulation or NTG administration, a significant overall effect of exercise on MGS score was observed (Figure 6). Naïve-Ex mice, in particular, had a significantly higher MGS score compared to naïve-Sed mice and NMS-Ex mice trended toward an increase in MGS score compared to NMS-Sed mice (p = 0.0601).
Photophobia-Like Behavior Following Nitroglycerin Injection
Photophobia-like behaviors were measured over 20 min, while mice were in a light/dark box on a force plate actimeter. Measurements took place 6 days [baseline day 1 (BL1)] and 3 days [baseline day 2 (BL2)] prior to the assessment that followed treatment with NTG (Figure 7). Time spent in the light and distance traveled were quantified at each time point. Naïve-Sed, NMS-Sed, and NMS-Ex mice all spent significantly less time in the light on the NTG treatment day compared to BL1 ( Figure 7B). Both NMS-Sed and NMS-Ex mice spent significantly less time in the light on the NTG treatment day compared to BL2 ( Figure 7B). Naïve-Sed and NMS-Ex mice traveled significantly farther distances on BL2 compared to either BL1 or NTG (Figure 7C). Following NTG treatment, NMS significantly decreased the percent of time spent in the light, particularly in NMS-Sed mice, which spent significantly less time in the light compared to naïve-Sed mice ( Figure 7D). Finally, NMS also significantly increased the change from BL2 in time spent in the light ( Figure 7E).
Calcitonin Gene-Related Peptide and Corticosterone Levels Following Nitroglycerin Injection
The protein levels of CGRP and corticosterone were measured in serum, and CGRP levels were also assessed in the dura and trigeminal ganglia (TG), 2 h after NTG injection (Figure 8).
Exercise had a significant impact on decreasing CGRP levels in the dura, as both naïve-Ex and NMS-Ex mice had significantly reduced CGRP protein levels compared to their sedentary counterparts ( Figure 8A). No significant differences between groups were detected for CGRP levels in the TG or serum (Figures 8B,C). Serum corticosterone levels were also not significantly different between groups ( Figure 8D).
DISCUSSION
Migraine is a debilitating neurological disorder that affects 9.7% of males and 20.7% of females in the United States (Burch et al., 2018). It is a complicated condition with many symptoms including headache, photophobia, phonophobia, widespread allodynia, and nausea. Migraine can be triggered by both endogenous and exogenous factors (Johnson and Krenger, 1992; Anand et al., 2012), making it difficult to understand and treat. One trigger of migraine is stress, and early life stress is associated with the development of migraine in adulthood (Felitti et al., 1998; Anda et al., 2010; Brennenstuhl and Fuller-Thomson, 2015). Previous preclinical and clinical research on migraine has led to discoveries of pharmacological migraine treatments that show some success, but there are often harmful off-target side effects associated with these drugs. Therefore, it is important to develop safer therapeutic interventions for migraine. Several groups have studied exercise intervention in migraineurs and found positive benefits such as increased quality of life, less frequent migraine attacks, and lower symptom intensity (Varkey et al., 2009, 2011; Darabaneanu et al., 2011).
FIGURE 5 | Mouse grimace score and hindpaw mechanical withdrawal threshold were measured in naïve and NMS mice that were Sed or Ex following an intraperitoneal injection of saline or NTG. (A) MGS was measured every 5 min over 1 h, beginning 30 min after NTG injection. Hindpaw mechanical thresholds were measured immediately afterward. (B) A significant overall effect of NTG was observed on MGS. Naïve-Ex-NTG mice had a significantly higher MGS than naïve-Sed-NTG and naïve-Sed-Saline mice. NMS-Ex-NTG mice had a significantly lower MGS compared to naïve-Ex-NTG mice and a significantly higher MGS than NMS-Ex-Saline mice. (C) A significant overall effect of NMS and a NMS/exercise interaction was observed on hindpaw mechanical withdrawal threshold. NMS-Sed-Saline mice and naïve-Ex-Saline mice had significantly lower hindpaw withdrawal thresholds compared to naïve-Sed-Saline mice. A trend toward a decreased withdrawal threshold was observed in NMS-Sed-NTG mice compared to naïve-Sed-NTG (p = 0.067) and NMS-Ex-NTG mice (p = 0.071). Bracket indicates a significant effect of NTG (δδδδ p < 0.0001), NMS (§ p < 0.05), or a NMS/exercise interaction (+ p < 0.05), three-way ANOVA; # p < 0.05 vs. sedentary, &, &&&& p < 0.05, 0.0001 vs. saline, * p < 0.05 vs. naïve, Fisher's LSD posttest. n = 3-5.
FIGURE 6 | Mouse grimace score was measured in Sed and Ex naïve and NMS mice that were briefly anesthetized and allowed to recover for 1 h. There was a significant overall effect of exercise ( εεε p < 0.001; two-way ANOVA). Naïve-Ex had a significantly greater MGS compared to naïve-Sed mice ( ## p < 0.01; Fisher's LSD posttest). n = 4-5.
In the present study, we used a mouse model of early life stress, NMS, to determine if NMS mice displayed molecular and behavioral evidence of increased susceptibility to evoked migraine. We also used voluntary wheel running as a non-pharmacological intervention to study if exercise had any influence on these measures. Although the underlying pathophysiology of migraine has yet to be established, recent research has focused on the prominent role the trigeminovascular system plays in this neurological disorder (Iyengar et al., 2019). This system includes neuronal cell bodies that reside in the TG and project peripherally through the trigeminal nerve, to tissues including the dura mater, and centrally to the spinal trigeminal nucleus (Noseda and Burstein, 2013). The dura is highly innervated with nociceptive unmyelinated C-fibers and thinly myelinated Aδ fibers projecting from the ophthalmic division of the trigeminal nerve and contains many mast cells (Uddman et al., 1985; Moskowitz et al., 1988; Liu et al., 2004). Dural mast cells can become activated by several molecules including CGRP and substance P (Johnson and Krenger, 1992; Anand et al., 2012). These cells are also highly responsive to activation of the HPA axis, as they express five isoforms of the CRF1 receptor, a single isoform of the CRF2 receptor, and contain one of the largest peripheral stores of CRF (Theoharides and Cochrane, 2004). This could explain why stress is a common trigger of migraine; increased peripheral CRF release during a stressful event results in dural mast cell activation and the subsequent release of inflammatory cytokines, causing a hypersensitivity reaction (Johnson and Krenger, 1992; Anand et al., 2012) that leads to migraine pain. In support of this, we found that NMS-Sed mice displayed a significant increase in the percent of degranulated mast cells in the dura compared to naïve-Sed mice.

FIGURE 7 (continued) | (C) Total distance traveled was measured during the testing period and there was a significant overall effect of time across all groups (p < 0.0001). Naïve-Sed and NMS-Ex mice displayed significant changes in their distance traveled from BL1 to BL2 and from BL2 to NTG. On testing day, naïve-Ex mice traveled a significantly farther distance compared to either naïve-Sed or NMS-Ex mice. (D) Comparisons between groups revealed a significant impact of NMS on time in the light side following NTG treatment. NMS-Sed mice spent significantly less time in the light compared to naïve-Sed mice. (E) Likewise, when calculated as a percent of BL2 time spent in the light, there was a significant overall effect of NMS. Brackets indicate significant within-group differences between time points (three-way RM ANOVA, α naïve-Sed, β NMS-Sed, γ naïve-Ex, p < 0.05, 0.01, 0.0001, Fisher's LSD posttest, A,B) or NMS (§ p < 0.05) two-way ANOVA. * p < 0.05 vs. naïve, # p < 0.05 vs. sedentary, Fisher's LSD posttest. n = 10.
Similarly, Theoharides et al. (1995) showed rats subjected to restraint stress had increased dural mast cell degranulation, but treatment with polyclonal antiserum to CRF reduced this effect. We also found that exercise normalized the NMS-induced increase in mast cell degranulation. This effect could be due to exercise lowering susceptibility to stress, which has been demonstrated in a preclinical rodent model of uncontrollable tail shock (Greenwood and Fleshner, 2011) and is also seen in clinical research focused on exercise interventions in migraineurs (Varkey et al., 2009; Darabaneanu et al., 2011). In addition to evaluating mast cell degranulation in the dura of our mice, we also measured dural PAR2 and TRPA1 protein levels, which are receptors that are both expressed on C- and Aδ-nociceptors (Julius and Basbaum, 2001). These receptors respond to noxious thermal, chemical, and mechanical stimuli and are often co-expressed on the same neurons (Dai et al., 2007). TRPA1 has been implicated in acute and chronic pain (Andrade et al., 2012) as well as the maintenance of hypersensitive conditions (Nassini et al., 2014). Environmental irritants that activate TRPA1 have been shown to trigger migraine headaches in susceptible individuals (Kelman, 2007). PAR2 is activated by mast cell tryptase (Ossovskaya and Bunnett, 2004) and is associated with the sensitization of TRPA1 channels through several cellular mechanisms (Dai et al., 2007; Chen et al., 2011). Activation of either receptor has also been shown to evoke migraine-like behaviors in rats (Edelmayer et al., 2012; Hassler et al., 2019). We found that there was a significant overall effect of NMS increasing dural PAR2 compared to naïve mice and a NMS/exercise interaction on the level of TRPA1 such that NMS-Ex mice had significantly less dural TRPA1 than NMS-Sed mice. We have previously observed increased PAR2 and TRPA1 protein levels in the bladder of female NMS mice, which was further increased by adult stress exposure (Pierce et al., 2016), indicating that tissue-specific expression levels of these receptors may associate with increased sensitivity. While protein levels do not reflect sensitization, it will be important to understand whether the tryptase released by degranulated mast cells is directly activating PAR2 receptors on nearby sensory nerve endings, potentially sensitizing TRPA1 receptors and increasing neuronal activity, and how voluntary wheel running selectively decreases TRPA1 levels in NMS mice.
Facial and extracephalic hypersensitivity are commonly seen during a migraine attack in migraineurs (Burstein et al., 2000, 2010; Cuadrado et al., 2008; Edelmayer et al., 2009) and after migraine-relevant stimuli in rodents (Edelmayer et al., 2009; Wieseler et al., 2010; Stucky et al., 2011). Facial allodynia is likely caused by sensitization of second order trigeminovascular neurons found in the spinal trigeminal nucleus that receive input from the dura and the periorbital skin (Burstein et al., 1998), while extracephalic allodynia is likely caused by central sensitization (Cuadrado et al., 2008; Edelmayer et al., 2009; Burstein et al., 2010). We hypothesized that NMS mice would display paw hypersensitivity after evoked migraine compared to naïve mice and that exercise would attenuate this effect. Indeed, application of dural IS significantly lowered forepaw withdrawal thresholds across all groups. Although NTG treatment did not have a significant impact on hindpaw withdrawal thresholds, there was a significant overall effect of NMS on hindpaw sensitivity, and NMS-Sed mice had a lower withdrawal threshold compared to naïve-Sed mice; however, this did not reach significance. Similar to these findings, Kaufmann and Brennan (2018) found that both stressed and non-stressed rats developed hindpaw mechanical hypersensitivity after NTG injection. These results suggest that stress may not be a factor in the development of IS- or NTG-evoked widespread hypersensitivity. However, we only measured paw sensitivity at one time point. Future work could measure withdrawal thresholds over time in NMS and naïve mice in these evoked migraine paradigms. It is possible that NMS mice could develop IS- or NTG-evoked mechanical hypersensitivity quicker than naïve mice or that it takes longer for NMS mice to return to baseline measurements. By only measuring withdrawal threshold at one time point, we could have missed significant differences throughout the time course of the IS and NTG effects.
We also assessed grimace in our mice, which is an accepted measure of spontaneous pain-like behavior (Langford et al., 2010). We hypothesized that NMS-Sed mice would have a higher MGS score compared to naïve or exercised mice. Interestingly, there was consistently an increase in MGS score in both naïve-Ex and NMS-Ex groups after dural IS and saline application, as well as in non-injected mice that were subject to isoflurane anesthesia. This finding was surprising because in our previous studies, we found that exercise mitigated chronic urogenital pain symptoms (Pierce et al., 2018; Fuentes et al., 2020), while a higher MGS is thought to demonstrate more pain-like behavior. It is unknown what impact voluntary wheel running has on mouse facial features and how this may impact scoring, particularly in terms of eye squint and ear position. In some instances, exercise has been shown to evoke migraine in humans. However, this is usually after strenuous exercise (Massey, 1982) or if exercise is novel, and it is therefore suggested that migraineurs be slowly habituated to exercise (Goadsby and Silberstein, 2013). Our exercised mice had access to running wheels for an extended amount of time before grimace was evaluated, and therefore exercise novelty likely did not evoke a migraine attack. One factor that could have influenced MGS is the isoflurane anesthesia that was administered prior to MGS testing in this group of mice. Miller et al. (2015) measured MGS in DBA/2 mice before and 30 min after isoflurane anesthesia and found a significant increase in MGS following anesthesia. Although we measured MGS 1 h after isoflurane administration, it is possible that Ex mice were not able to recover as quickly from the isoflurane and therefore exhibited a higher MGS. Further support for isoflurane influencing the MGS of exercised mice is that exercise did not have a significant effect on MGS in our NTG experiment, in which mice were not anesthetized before IP administration.
Mechanical hypersensitivity and MGS are not behaviors specific to migraine; therefore, to follow up these initial studies, we measured light aversion, a symptom included in the diagnostic criteria for migraine in the International Classification of Headache Disorders (2018). Using a light/dark box placed on a force plate actimeter, similar to the method published by Rossi et al. (2016), the location of the mouse was continuously tracked during each 20 min testing period in order to quantify distance traveled and the percent time spent in the light. All groups, with the exception of naïve-Ex, significantly decreased their time spent on the light side of the box from the first baseline day to the treatment day. Both NMS groups, regardless of exercise, significantly decreased their time in the light from the second baseline day to the treatment day, and there was an overall significant effect of NMS on reducing the time spent in light following NTG. These data suggest that NMS increases sensitivity to light following NTG injection. Total distance traveled during the two baseline measurements and on the treatment day was also evaluated. Interestingly, naïve-Sed and NMS-Ex mice were the only groups to show a significant increase in activity at BL2 compared to either BL1 or post-NTG. Other groups have evaluated photophobia-like behavior in rodent migraine models (Recober et al., 2010; Markovics et al., 2012; Mason et al., 2017); however, to our knowledge this is the first study to combine a stress and exercise component.
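For readers unfamiliar with this kind of readout, the sketch below shows one way the two summary measures named above (percent time in light and distance traveled) can be computed from timestamped position samples. It is a minimal, hypothetical illustration: the data format, sampling rate and function names are our own assumptions, not the output or analysis code of the actimeter used in this study.

```python
# Minimal sketch: summarising one light/dark box session from position samples.
# Assumes time-ordered (t_seconds, x_cm, y_cm, in_light) tuples; names and format
# are illustrative assumptions, not the actimeter's actual data structure.
import math

def summarise_session(samples):
    if len(samples) < 2:
        raise ValueError("need at least two samples")
    total_time = samples[-1][0] - samples[0][0]
    time_in_light = 0.0
    distance = 0.0
    for (t0, x0, y0, light0), (t1, x1, y1, _) in zip(samples, samples[1:]):
        if light0:                        # credit each interval to the side occupied at its start
            time_in_light += t1 - t0
        distance += math.hypot(x1 - x0, y1 - y0)  # path length between consecutive positions
    return {"percent_time_in_light": 100.0 * time_in_light / total_time,
            "total_distance_cm": distance}

# Fabricated 20 min (1200 s) session sampled once per second, purely for demonstration.
fake = [(t, 5.0 * math.sin(t / 30.0), 2.0, t % 90 < 30) for t in range(1201)]
print(summarise_session(fake))
```

Comparing per-session summaries of this kind across baseline and treatment days, as described above, is what yields the changes analysed here.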
There is strong evidence that CGRP is important in the initiation and maintenance of migraine pain (Wattiez et al., 2020). CGRP is expressed in many of the unmyelinated C-fibers that innervate the dura (Eftekhari et al., 2013) as well as in TG neurons (Eftekhari et al., 2010). In addition, the CGRP receptor is expressed on the Aδ fibers that innervate the dura (Eftekhari et al., 2013) and on a portion of TG neurons (Eftekhari et al., 2010). Circulating CGRP has also been shown to be elevated during migraine attacks (Goadsby and Edvinsson, 1993). We measured CGRP levels in the serum, dura, and TG of our mice following NTG injection with the hypothesis that NMS mice would display higher CGRP levels, which could explain their increased photophobia-like behavior. Surprisingly, we did not find any differences in CGRP levels between our NMS and naïve groups in any tissue that we analyzed. These results imply that increased CGRP is not the cause of the increased susceptibility to the migraine-like phenotype observed in our NMS-Sed mice. However, we did observe that exercise significantly decreased CGRP levels in the dura of both groups. To our knowledge, we are the first to measure dural CGRP in stressed and exercised mice following NTG administration. Similar to our findings, however, Nees et al. (2016) found that treadmill exercise decreased CGRP in the spinal cord and decreased mechanical allodynia in spinally injured mice. A limitation of our study is that we only measured dural CGRP in mice that received an NTG injection; it is possible that exercise decreases dural CGRP regardless of NTG injection. These data could explain the positive benefits of exercise seen in clinical migraine studies (Varkey et al., 2009, 2011; Darabaneanu et al., 2011) and why our NMS-Ex mice were less susceptible to dural mast cell degranulation and NTG-induced photophobia-like behavior.
We also measured serum corticosterone following IP NTG injection and light aversion assessment. Human plasma cortisol levels are increased in migraineurs after NTG injection, and this increase significantly correlated with the development of migraine (Juhasz et al., 2007). Because NMS and exercise have both been shown to influence HPA axis activity, we hypothesized that the combination of NMS and NTG would evoke an increase in circulating corticosterone compared to naïve mice, and that exercise would prevent this effect. However, there was no significant effect of NMS or exercise on serum corticosterone level. Although this result implies that corticosterone may not have an effect on evoked migraine-like behaviors, corticosterone was only measured at one time point. Circulating corticosterone follows a circadian rhythm, which in laboratory rodents peaks in the early evening just before the dark cycle and is lowest at the beginning of the light cycle (Barriga et al., 2001). This is opposite to the circadian rhythm of cortisol in humans, which peaks in the early hours of the morning and decreases throughout the day (Kirschbaum and Hellhammer, 1989). Migraineurs have been shown to have a greater peak cortisol level compared to control patients (Patacchioli et al., 2006). In this study, serum corticosterone was measured at a point when it should have been near its trough; measurements taken closer to the circadian peak might reveal group differences that were not apparent here.
We acknowledge several limitations in our study that may impact the interpretability of our outcomes. Cage density varied between our Ex and Sed groups, which has been shown to impact mouse physiology and behavior, such as sleep and activity levels (Toth, 2015). We chose to maintain our Sed mice in group housing conditions to avoid the additional stress caused by single-caging. Our group numbers were also small for some experiments, and increasing the number of mice may have strengthened the statistical outcomes. Finally, we did not consider sex as a biological variable in this study; however, we have observed significant impacts of NMS and voluntary wheel running on urogenital sensitivity in male mice (Fuentes et al., 2020), similar to outcomes observed in female NMS mice (Pierce et al., 2018), suggesting we may see similar outcomes in both sexes.
In conclusion, we found that NMS-Sed mice had increased dural mast cell degranulation compared to naïve mice, which was attenuated by voluntary wheel running. NMS-Sed mice also appeared to be more susceptible to NTG-induced photophobia-like behavior compared to naïve mice, although exercise had no impact on these behaviors. These are important findings because they highlight the usefulness of NMS as a model for studying stress-induced migraine and suggest that voluntary wheel running may be of limited effectiveness as a therapeutic intervention. Future work is needed to determine the underlying mechanisms of NMS and exercise in this context. Our results imply that increased CGRP is likely not involved, as CGRP levels did not differ between naïve and NMS groups. Voluntary wheel running decreased CGRP levels and mast cell degranulation rates in the dura; however, exercise was also associated with increased MGS in the groups that were anesthetized with isoflurane. This observation calls into question the use of MGS in previously anesthetized and exercised animals.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee at the University of Kansas Medical Center.
AUTHOR CONTRIBUTIONS
OE, AP, GD, and JC designed the research study. OE, XY, IF, AP, BJ, AB, and RW performed the experiments. OE, XY, AP, BJ, and JC analyzed the data. OE, GD, and JC wrote the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by NIH grants R01 DK099611 (JC), R01 DK103872 (JC), Center of Biomedical Research Excellence (COBRE) grant P20 GM104936 (JC), T32 HD057850 (OE, IF, BJ, and AB), start-up funds and core support from the Kansas Institutional Development Award (IDeA) P20 GM103418, core support from the Kansas IDDRC P30 HD002528, and The Madison and Lila Self Fellowship Program (AP).
ACKNOWLEDGMENTS
We would like to thank Drs. Carolina Burgos-Vega, Andrea Chadwick, Paige Geiger, Kenneth McCarson, and Doug Wright for expert advice and feedback on experimental design, methodology, and interpretation. We acknowledge that part of this work appears in dissertation form.
225315303 | pes2o/s2orc | v3-fos-license | Beyond here and there: (re)conceptualising migrant journeys and the ‘in-between’
ABSTRACT
Journeys of refugees and other migrants are typically represented as linear movements between two places with the academic and policy gaze directed primarily towards the places people leave and what is assumed to be their final destination. This linear representation presupposes that people have a specific country in mind when they depart and that everything 'in-between' is simply a 'stepping stone'. This article explores the journeys of Syrians, Nigerians and Afghans drawing on empirical data gathered in Turkey, Greece and Italy during 2015. Our evidence suggests that, even for those who eventually arrived in Europe, the places to which people initially travelled were often destinations rather than 'transit countries'. It was only when life became untenable and a decision was made to move that these places took on a state of 'in-betweenness', most commonly as part of a personal narrative mobilised by respondents to make sense of the broader arc of their life experiences. Failure to understand, or even ask questions about, the multiple meanings which places have for people at different points in both their physical and metaphorical (life) journeys, undermines conceptual and empirical analysis of migrant journeys and plays into anti-immigrant discourses prevalent across much of the Global North.
Introduction
The journeys of refugees and other migrants 1 are typically represented as linear movements between two places, here and there, with the academic and policy gaze directed primarily towards the places people leave and those in which they finally arrive (Snel et al., 2021; Kuschminder, 2021). The countries 'in-between' are conceptualised primarily as places of 'transit' in which people are 'stuck' or 'stranded' whilst they explore options for their onward travel (Choplin and Lombard 2013; Askerov, Currle, and Ghazi 2018). Whilst some of the existing literature provides valuable empirical material that allows us to better understand the significance of the places 'in-between' (see, for example, Collyer 2010), much of it reflects and reinforces a particular conceptualisation of migrants as people who are constantly 'on the move', looking for ways to get to the countries of the Global North (Schapendonk 2012).
Nowhere has this representation been more evident than in the narratives associated with Europe's so-called migration crisis, which gave the impression of a linear, uninterrupted flow of people heading towards Europe, most commonly represented by straight arrows on a map linking two distinct areas (Mainwaring and Brigden 2016). Politicians and policy-makers across Europe have largely talked about the arrival of refugees and migrants in 2015 as an unprecedented event, a single coherent flow of people that came 'from nowhere', suddenly and unexpectedly pressing against the continent's southern border. This narrative was echoed by journalists, film-makers and artists who focused on the movement of people, interviewing, photographing and otherwise depicting people on the move. At the same time academics, including the authors of this article, have been funded to document and explain these flows. But this was (and remains) very much a view from Europe, shaped by the politics of the continent and by a particular, narrow, understanding of migration dynamics which ignores journeys as a social process (Mainwaring and Brigden 2016). This linear representation presupposes that people have a specific destination country in mind when they depart from their home countries and that everything 'in-between' is simply a 'stepping stone'. Those living between here and there, 'the in-betweeners', are most commonly represented as having difficult, uncomfortable, liminal and often meaningless lives. Such representations ignore the complex social and economic realities associated with migration decision-making. As much as migrants exert agency over their journeys, they are thwarted or facilitated by a multitude of shifting place-based structural factors (Brigden and Mainwaring 2016). They also misrepresent the scale and direction of migration flows and the fact that millions of people move to other countries without ever intending to move onwards to Europe or elsewhere. For example, whilst around 525,000 Syrians arrived in Europe during 2015 (IOM 2016), a higher number live in Istanbul alone (Kirisci, Brandt, and Murat Erdogan 2018), representing just part of the more than 3.5 million Syrians that have moved to Turkey in search of protection. Similarly, whilst around 210,000 Afghans arrived in Europe in 2015 (IOM 2016), more than 3 million live in Iran. To put this into context, in the same year, over 11 million Europeans moved to live in another EU Member State (EC (2016) 2017). Moreover, whilst some undoubtedly aspire to move onwards, many others have no clear aspirations or intended direction (Crawley and Hagen-Zanker 2019). Others are focused primarily on return, with roughly one in four migration events involving a return to an individual's country of birth (Azose and Raftery 2019), 2 or intend to stay put, at least for the foreseeable future.
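To make the scale comparisons in the preceding paragraph explicit, the short calculation below relates 2015 arrivals in Europe to the populations hosted in neighbouring countries, using only the approximate figures already cited above; it is an illustrative back-of-the-envelope exercise, not an additional data source.

```python
# Back-of-the-envelope comparison of 2015 arrivals in Europe with the populations
# hosted in neighbouring countries, using the approximate figures cited in the text.
figures = {
    "Syrians": {"arrived_europe_2015": 525_000, "hosted_nearby": 3_500_000, "host": "Turkey"},
    "Afghans": {"arrived_europe_2015": 210_000, "hosted_nearby": 3_000_000, "host": "Iran"},
}

for group, f in figures.items():
    ratio = f["arrived_europe_2015"] / f["hosted_nearby"]
    print(f"{group}: arrivals in Europe were equivalent to roughly {ratio:.0%} "
          f"of the number living in {f['host']}")
```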
Drawing on empirical data gathered in Turkey, Greece and Italy during 2015 as part of the ESRC-funded MEDMIG project, 3 this article explores the experiences of Syrians, Nigerians and Afghans who spent considerable periods of time living in Turkey, Libya and Iran before we came into contact with them during the course of our research. All three groups, but particularly those living in Turkey and Libya, were, and continue to be, positioned by both policy-makers and researchers as being 'in transit' to Europe. Because of this, Turkey and Libya have been the recipients of European financial assistance aimed at preventing further onward migration. However, life in these countries constituted, for most, much more than a temporary 'stop over': some of those we interviewed in Turkey had no intention of moving to Europe; others had left their home countries months or even years beforehand, only moving on to Europe because life had become untenable. These places only took on the status of being 'in-between' as part of a personal and/or collective narrative mobilised to make sense of the broader arc of an individual's life experiences to the point at which we met them.
In this context, our article raises new and important questions about our understanding of places for people on the move. In so doing it builds on an emerging literature which unpacks the journeys made by refugees and other migrants (BenEzer and Zetter 2014; Kaytaz 2016; Mainwaring and Brigden 2016). We contribute to a growing literature, reflected in 2020, that views refugees and other migrants as active agents in formulating their own plans, albeit within often constrained opportunities (Schapendonk 2012, 2021; Kuschminder 2021). These places provide not just the physical context within which migrants live but also a range of new opportunities, chance encounters and evolving social relationships, including those that are transnational (Caarls, Bilgili, and Fransen 2021) or influenced by technology (Godin and Dona, 2021), within which lives are lived and decisions about the future taken. We argue for the importance of re-situating place as more than a mere back-drop to physical journeys. Failure to understand the significance of places to people, and of people to places, undermines conceptual and empirical understanding of migration and reduces the analysis of migration journeys to the physical movement itself. This, in turn, silences the experiences of those who never aim for, or reach, other countries (Kaiyoorawongs 2016), and plays into dominant anti-migrant policy and media narratives.
We begin our discussion by reflecting on the politics of the 'in-between' for policymaking and for our own research, as well as the relationship between the two. In particular, we draw attention to the ways in which dominant conceptualisations of the 'in-between' often determine the ways in which research is designed, including the questions that are asked, the journeys that are told and the stories that are (and are not) heard. Our empirical evidence unfolds in the three sections that follow. Firstly, we outline some of the everyday experiences of our respondents in those places typically regarded as the 'in-between', highlighting their relationships to these places and the ways in which these experiences helped them to feel 'in-place'. Secondly, we analyse the explanatory power of these experiences in helping us to understand the reasons for the decision to move on, exploring the ways in which the notion of being 'in-between' came to be assigned to those places retrospectively. Finally, we situate people's experiences of the journey and the meanings ascribed to the 'in-between' within their wider historical and geopolitical contexts.
The politics of the 'in-between'
Before engaging with the experiences and perspectives of those who move, it is important to reflect on the politics of the 'in-between' as seen in policies intended primarily to control and contain migration to Europe. An important starting point for this discussion is the concept of 'transit migration', which was invented in the early 1990s, when the European Union (EU) introduced stricter border controls, and promoted by certain institutions, most notably IOM, ICMPD, the Council of Europe and various UN agencies (Düvell, Molodikova, and Collyer 2014; Zijlstra 2014). Whilst the concept of 'transit migration' is now widely employed by academics and policy-makers alike, its meaning, usefulness and appropriateness remain highly contested (Collyer, Düvell, and de Haas 2011; Düvell 2012; Basok et al. 2015).
Firstly, whilst the idea of 'transit migration' highlights the fact that migration is not, at least for a considerable group of people, a simple movement from A to B, the unidirectional configuration from country of departure → transit country → country of destination does not adequately capture the complex realities of physical migration journeys (Bredeloup 2012; Schapendonk 2012). In particular, by reducing the places in which migrants live to places of 'transit', and their experiences of 'the journey' to the physical process of moving, the social, emotional and economic lives lived and decisions taken in other places become invisible (Brigden and Mainwaring 2016; Kaiyoorawongs 2016). It is important to remember that most migrants do not move on and are therefore never 'in transit' (Kaytaz 2016). Immobility can result from being 'stranded' or 'stuck' (Schapendonk 2012) but it is more often a choice. Indeed, many of those interviewed as part of our research never intended to travel to Europe when they left their countries of origin but were among a minority who, for a variety of reasons, ultimately felt compelled to do so (Crawley and Hagen-Zanker 2019).
A second, related, concern is that 'transit migration' started life as a policy concept and has been mobilised for political purposes (Düvell 2012; Düvell, Molodikova, and Collyer 2014). The notion of 'transit migration' contributes to the idea that millions of people are on the move, currently 'stuck' in other countries but ultimately 'heading to Europe'. Whilst 'transit migration' accounts for a relatively small share of arrivals, the spectre of potential migration haunts Europe and provides a vehicle, and rationale, for policies that aim to manage and control migration flows. This 'myth of invasion' (de Haas 2008), to which the concept of 'transit migration' is central, allows Europe to justify a policy response formulated primarily around two key ideas: the securitisation of migration policies to physically prevent onward migration; and the use of development assistance to encourage and persuade those contemplating onward migration to 'stay put'. Indeed, the fact that the concept of 'transit migration' is used almost exclusively in the context of migration to Europe reinforces concerns that it is primarily a political tool for leveraging policy outcomes rather than a useful analytical or conceptual mechanism for understanding migrant journeys and decision-making (Collyer, Düvell, and de Haas 2011).
Recent EU policy initiatives in both Turkey and Libya illustrate this process. In March 2016, an agreement between the EU and Turkey came into force aimed at stopping the movement of people across the Aegean. According to the EU-Turkey Statement, all new arrivals into Greece were to be returned to Turkey, with equivalent EU resettlement places provided in exchange. In return for bolstering Europe's external border, Turkey would see visa liberalisation for its nationals, acceleration of its accession to the EU and a payment of €3 billion. Three years later, the arrangement has failed, even on its own terms. 4 Whilst the number of people crossing the Aegean has fallen considerably, it has not stopped, and thousands of people are living in dire conditions on the Greek islands and in parts of mainland Greece (Jumber and Tank 2019). Turkey, meanwhile, has seen little to no action on visa liberalisation or accession. At the same time, Libya has become a primary recipient of targeted European efforts to detain and deport people attempting to reach Europe across the Mediterranean. These policies are nothing new: since 2004 the EU has placed Libya at the forefront of its policy to export the management of its borders (Hamood 2008). However, with the so-called migration crisis in Europe, considerable additional funding has been allocated by the EU Trust Fund for Africa and individual European countries, most notably Italy, to train and build the capacity of the Libyan coastguard to intercept and return boats off the Libyan coast, strengthen management of the southern border, expand detention facilities in Libya and, through IOM, facilitate returns. This is despite the chaotic political situation in Libya and evidence of horrific abuse against refugees and other migrants there, to which these policies contribute directly (Human Rights Watch 2019).
In this way, ideas around 'transit migration', associated as they often are with increased securitisation, serve to underplay the responsibility of governments with respect to the rights of migrants living within their territory, many of whom have no intention of moving on (Cherti and Grant 2013). For those countries positioned as 'transit countries', meanwhile, the consequences are double-edged. On the one hand, governments often find themselves subject to growing expectations, and occasional punishments, regarding their role in migration management. At the same time, they are able to leverage the EU's anxieties around migration for their own economic and political gain (de Haas 2008; Bredeloup 2012; Choplin and Lombard 2013). Indeed, these countries may exaggerate or 'play up' their assigned role as a springboard or entry point to Europe. In 2010, for example, Gaddafi warned that Europe ran the risk of turning 'black' unless the EU paid Libya at least €5 billion (£4.1 billion) a year to block the arrival of migrants from Africa who were represented as transiting through Libya. 5 More recently, Turkey threatened to 'let loose' 15,000 Syrians per month and 'blow the mind' of Europe unless it was given considerable financial support and visa-free access to the EU. 6 Hoist by its own petard, the EU has subsequently committed considerable economic and political resources to Libya and Turkey in order to prevent 'transit migrants' arriving on the shores of Europe.
'You see what you look at': methodological reflections on 'the in-between'
This article draws on material gathered through 500 interviews with refugees and other migrants undertaken by a team of researchers between September 2015 and January 2016 in Greece (215 interviews), Italy (205 interviews), Malta (20 interviews) and Turkey (60 interviews). Our aim was to gather robust research evidence on the factors driving the so-called migration crisis as these events unfolded, in order that this evidence could, potentially, feed into policy-making processes. The choice of countries was in response to a funding call which explicitly sought evidence to explain why and how people were crossing the Mediterranean into Europe. In 2015, Greece and Italy were (and remain) the main sites of first arrival of people into the EU (IOM 2016). Turkey was included as a field-site because of its role as neighbour to the EU and primary destination for refugees fleeing the conflict in Syria, as well as those arriving at its southern border from Iran. Malta was selected as a comparison site because it had recently seen a decline in new arrivals since high points in 2013 and 2008. Interviewees were purposefully sampled by nationality and gender to reflect the composition of those then arriving into these countries.
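As a purely illustrative aside, proportional quota targets of this kind can be derived mechanically from composition data. The sketch below uses entirely hypothetical arrival shares (the project's actual composition data are not reproduced here); only the total of 215 interviews for Greece comes from the text above.

```python
# Illustrative only: allocate interview quotas in proportion to arrival composition.
# The nationality shares below are hypothetical placeholders, not the project's data.
def allocate_quotas(total_interviews, arrival_shares):
    """arrival_shares: dict mapping stratum -> share of arrivals (shares sum to ~1)."""
    quotas = {stratum: round(total_interviews * share)
              for stratum, share in arrival_shares.items()}
    # Nudge the largest stratum so that rounding does not change the overall total.
    diff = total_interviews - sum(quotas.values())
    if diff:
        largest = max(quotas, key=quotas.get)
        quotas[largest] += diff
    return quotas

shares = {"Syrian": 0.45, "Afghan": 0.25, "Iraqi": 0.15, "Other": 0.15}  # hypothetical
print(allocate_quotas(215, shares))  # 215 = interviews conducted in Greece
```

In practice, of course, purposive sampling of this kind also took gender into account and is unlikely to have been applied as a strict formula.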
In reflecting on our methodological approach, it is important to acknowledge that the design of the research, the questions asked and the location of the fieldwork can significantly shape understandings of migrant journeys. This, in turn, can inadvertently reinforce and amplify policy narratives about the nature of migration flows to Europe. Although the majority of our fieldwork was undertaken in Europe, we consciously avoided imposing an artificial 'end point' on people's journeys through our own interview process (Mainwaring and Brigden 2016). We constantly reminded ourselves, and others, that where we interviewed people could be (or become) an 'in-between', either now or at some time in the future. Ontologically, we interpreted the 'migration journey' as a social and analytical category rather than only a physical process (Buscher and Urry 2009; McHugh 2000). This meant not only asking people about their migration decisions and journeys but purposefully asking about their experiences in the places where they had previously lived (in addition to their countries of origin) and exploring the meanings of those place(s) in their everyday lives. Semi-structured interviews provided an opportunity for respondents to share as much detail as they wished or felt comfortable in doing, and for us to explore what places meant to people, in the here and now as well as before they came to be in that place (Kaytaz 2016). We asked people to look forwards, to discuss their hopes and aspirations for the future, in addition to looking back. Interviews navigated the 'in-between' in respect of people's daily lives, where immobility, whether chosen or circumstantial, rather than mobility is often the norm (Kaytaz 2016).
It is clear that the national and local context in which people make decisions (whether to stay or to leave, where to live, whether to seek legal protection, how to earn money) matters (Ashutosh and Mountz 2012). We therefore designed the research to collect and analyse data at three scales: the macro (institutional, political, economic); the meso (the everyday interactions with people and organisations which helped facilitate people's journeys and everyday lives); and the micro (individuals' decision-making and life experiences) (Sladkova 2013). We selected more than 40 (anonymised) field-sites in the four countries: in Italy (Sicily, Puglia, Piedmont, Milan and Rome); in Greece (Lesvos and Athens); in Malta (Valletta); and in Turkey (Izmir and Istanbul). As far as was possible, we set up our framework to capture multiple snapshots by locating ourselves across multiple field-sites (Marcus 1995). This provided opportunities for a variety of interactions with respondents, whose physical and metaphorical journeys took many different forms. Research sites included places where people were on the move (train stations), where they were living (asylum reception centres, community centres, shared apartments) and where they gathered (city squares, cafes, churches, mosques, providers of legal support and assistance). Interviews were conducted by bilingual field researchers or with the support of interpreters. Careful attention was paid to ethical issues, including the need for sensitivity in recognising trauma and the assistance which interviewers could provide (BenEzer and Zetter 2014). Informed consent was carefully sought from participants (Hugman, Bartolomei, and Pittaway 2011), with researchers mindful of the importance of anonymity in the context of potentially clandestine journeys (Mainwaring and Brigden 2016).
This article presents findings from interviews with Syrian, Nigerian and Afghan nationals about their experiences in Turkey, Libya and Iran respectively. These three sets of interviews provide an opportunity to delve more deeply into the importance and meaning of places which are conceptualised as places of transit by policy-makers. Forty Syrian refugees living in Izmir were interviewed about their experiences in Turkey after having fled the conflict in Syria. We do not know whether they subsequently decided to move on to Europe or are still living in Turkey; in other words, whether Turkey became an 'in-between' or remained a destination. Afghans and Nigerians, by contrast, spoke about their experiences in places where they had lived previously (Iran and Libya, respectively), and which had, in a physical sense, been an 'in-between' on their journeys. Forty-one Nigerians (28 men and 13 women) were interviewed at field-sites across Italy, having arrived between several days and several years earlier. They had spent between one week and five years in Libya, living in Tripoli, Benghazi and Sabha, having left Nigeria as a result of Boko Haram activities, localised conflict and disputes with the police or gangs (confraternities), family conflict and political activism which had put the interviewee and/or their family in danger. Fifty-six Afghan respondents, almost all male, were interviewed in Greece and Turkey. Nearly half (43%) had left Afghanistan more than 5 years prior to their interview with us and, of these, a significant proportion (39% of the total) had been living outside Afghanistan, mainly in Iran, for more than 10 years. Most had left their homes in Tehran, Isfahan, Shiraz and Qum because the discrimination they faced in Iran had become intolerable and because they feared being forced to return to the ongoing conflict in Afghanistan.
In this article, we analyse what we recognise to be reconstructed memories of living in places typically considered to be 'in-between'. Memories are reconstructed by individuals and narrated within the context of current experiences (Burrell and Panayi 2006). It is possible, if not probable, that the meanings individuals attached to their journeys when we spoke to them differed from those that, consciously or unconsciously, they held at the time (BenEzer and Zetter 2014). Our research therefore provides an opportunity to explore the meanings of the 'in-between' both to people still living in those places and to those who have moved on and are relating their experiences retrospectively. This enables us to compare the multiple meanings which places have for people at different points in both their physical and metaphorical (life) journeys.
Experiences and meanings: everyday life and the 'in-between'
Afghan, Syrian and Nigerian respondents had multi-layered lives in the 'in-between' places, relating tales of family, friends, lovers, schools and work as well as their experiences of daily survival in the face of extreme challenges. These multiple meanings and narratives (BenEzer and Zetter 2014) indicated that they did not spend every day thinking about how to travel onwards to Europe. Indeed for many, and especially the Syrians interviewed in Turkey, this was not even a consideration: 'My plan for the future is to stay here in Turkey and create a new life for myself' (Syrian man, 23). Family life was of paramount importance to many and helped them to feel rooted. Their focus was on earning money and providing housing, education and healthcare for themselves and their children. For example, a Syrian woman who was living in Izmir with her husband and two children told us that her primary concern was ensuring that her children had access to education and that her husband earned enough to provide a life for them in Turkey: 'I hope to live here in peace with my family. My husband is not registered but he works here as a tailor and earns relatively good [money]. We do not want to return to Syria or go to Europe' (Syrian woman, 30, married with three children). For others, Izmir provided the security needed for them to focus on getting married and having children. Wider kinship and friendship networks also mattered and contributed to the meanings which respondents attributed to that place (Boyd 1989). Families, friends and extended kin were sources of friendship as well as of practical support with accommodation and employment (Caarls, Bilgili, and Fransen 2021). They were especially important where refugees lacked financial support or other means.
We found that, for the most part, Syrians were managing to survive and make lives in Izmir even if these lives were sometimes precarious (see also Illcan, Rygiel, and Baban 2018). There were, however, variations in their experiences, including by ethnicity. Syrian Kurds and Syrian Turks were especially likely to say that extended kin and friend networks had played an important role in their reason for moving to Izmir as well as in their ability to find work there: 'My uncle was living in Izmir and he called me to come to Izmir. He knows Izmir very well. I trust him and decided to move Izmir for a better job. My uncle was working in a shoe shop. In Izmir, I settled to Gultepe district with the recommendation of my uncle and also considering that several Turkmen refugees were generally living around Gultepe district' (Syrian man, 29, married with one child). As Kurdish is commonly spoken in Izmir, Syrian Kurds found it easier to find work than others. And as Syrian Turkmen already spoke some Turkish, for them too, building a new life in Izmir was less challenging than for others.
Afghan and Nigerian respondents also attached everyday meanings to their day-by-day lives in Iran and Libya. Whilst the decision to move on dominated many of the interviews, their stories of these places were more complex and nuanced than typically represented, reflecting the notion of the journey as a social as well as physical process (Kaytaz 2016). For Nigerians interviewed in Italy, Libya was a place in which they had experienced significant degrees of violence, fear and trauma, all of which had ultimately contributed to their onward movement. But it was also a place where they had worked, made friends and even fallen in love: even in the midst of conflict and societal breakdown, some Nigerians had been able to live ordinary lives in which friendships played a significant role. Friends helped people find work, lent them money, gave them a place to stay or helped find connections. For those who lived in Tripoli, finding friends who were better connected, who spoke Arabic and knew where to get jobs, was a lifeline for survival: 'My friend's brother speaks Arabic. He speaks the language and helped me. So I didn't look for a job. I worked for him and stay with him. I'm his boy. Very good man. From same region. Good person' (Nigerian man, 32). While friends were usually other Nigerian nationals, with whom they felt safe, they also sought and received assistance from other African nationals, especially other English-speaking West Africans. These friendships sometimes evolved into something more: one man told us that he fell in love with the woman who later became his wife whilst working in a shop in Tripoli. At the time we met them in Italy, the couple and their baby, conceived in Libya, were living separately in asylum centres but looking to find a way to get back together. Their love for one another had survived the difficult journey.
Many of the Nigerians we spoke with had travelled to Libya for work and had found meaningful and fulfilling jobs. For instance, a woman who had fled Nigeria because of threats attracted by her public role as a political activist told us how she had successfully found work on arrival in Libya in 2012: 'My first job was as a cleaner in a hospital, I was there for 8 months. I was always nice to people. … One day I was holding a door open for a visitor and said "good day", and the visitor stopped and had a short conversation with me. He said "your English is very good" and asked if I knew how to teach, I said "yes!" And he replied "well go and get your CV!". He was a very important man in education in Libya and got me a job in a school straight away' (Nigerian widow, 37, with two children still in Nigeria). Male respondents told us that even after the conflict intensified, they had travelled to wash cars, work in construction, clean and beg in Sabha, Tripoli and other towns in Libya, believing that they could avoid the conflict or that it would not be any worse than the situation they had left behind. Some even thought that the conflict would generate additional employment opportunities: 'I learnt to be a muratore (builder) in Libya … it was stable. Everybody is working in Libya because they destroy a house today [in the conflict] and build it again tomorrow' (Nigerian man, 25). Similarly, for many Afghan respondents, Iran had been a place to which they had travelled for work and which came to be regarded as home: it was rarely viewed as a 'transit country'. As one young Afghan man travelling with his wife and child explained: 'I was born in Tehran. I have only been to Afghanistan when I was one or two years old.' Indeed, two thirds (66%) of those we spoke to had either never been to Afghanistan or had not lived there for a considerable period of time: seven respondents had not been to Afghanistan for more than 20 years, and some for as long as 35 years. Almost all of those we interviewed spoke at length about the difficulties they experienced during their lives in Iran: difficulties in accessing papers, work, school and university; threats, racism and discrimination; experiences of violence and abuse. However, whilst Iran was almost always described as a place in which it was very difficult to make a life, it was nevertheless almost always represented as somewhere which had been a home rather than a place 'in-between'.
It is clear that the place in which an individual is interviewed has a significant impact on the narrative that is constructed to make sense of the journey and the decisions taken along the way. Yet regardless of where people were interviewed, the meanings attached to places were often similar. For Syrians who had made their home in Izmir, everyday concerns about family, schools and work predominated. Opportunities to work brought meaning as well as the resources for survival. Even for those interviewed in Europe, for whom conversations centred on decisions to move on, these places were more than just a 'stepping stone', regardless of considerations about their physical journeys.
Becoming the physical 'in-between': the importance of place in moving on
Migrants narrate discrete physical journeys as embedded within a larger story arc of events and experiences which determine how they travel, where they go and why (Mainwaring and Brigden 2016). In addition to providing a place to live, the places 'in-between' were also described in relation to their contribution to the decision to move on. In other words, these were rarely places that were simply 'passed through'.
Nigerian respondents arriving in Libya after mid-2014 described how staying alive came to dominate their everyday lives, and the role this played in their decision to move. For these respondents Libya represented fear, trauma and violence. They recounted how dangerous Tripoli and its environs were for Black Africans during this time, with multiple accounts of randomised violence and robberies, by police, militias and even children: 'It was terrifying in Libya. … It was not peaceful in any way. You could not walk down the street without risk … It was between life and death. Between you and God at any point. Anything can happen there at any time. Because of the crisis. Guns are everywhere. You have to sleep with your eyes open' (Nigerian man in his 20s). People told us that they had witnessed deaths, including those of loved ones. Women reported sexual violence to be common (see also Esposito et al. 2016). The frequency of kidnapping involving detention in prisons increased, with respondents' release contingent on them or their family making substantial payments (see also Human Rights Watch 2019). Those who lacked money were kept and sometimes put to work by those who were responsible for holding them: 'It's a business, they keep you until you pay to get out … if you are Nigerian, it's not going to be small [the amount to pay] because they think all Nigerians have money' (Nigerian man, 25). Several of those we spoke to had been repeatedly detained by those who they understood to be police and militias. For the majority, life in Libya simply became untenable: 'After we finished, they gave us an option: life or money. Guns were drawn. If you meet a nice man, you get paid. If not, then not. There is no way of knowing who is a nice man. Libya is a mafia town. Everyone does anything in a free way. Your territory. Your way. You can do anything' (Nigerian man, 25). Regardless of how long they had been in Libya and their original intentions, travelling to Europe was described as an inevitability and a necessity to stay alive rather than a deliberate choice. Consistently, they told us that going back through the desert was more dangerous, and more traumatising, than boarding a boat across the Mediterranean: 'It is too risky to go back across the desert. It is better to cross and risk your life in the sea than go back. … In Libya, if you stay you know that one day you will die. In the desert you will die. It is better to risk your life in the boat' (Nigerian man in his 20s). Afghans interviewed in Greece similarly described the ways in which life in Iran, and the socio-economic inequalities with which it was associated, had motivated their decision to leave. Their accounts were dominated by a lack of rights accompanied by evidence of severe maltreatment, including summary deportations, physical abuse at the hands of security forces, limited job opportunities outside menial labour, and restricted access to education (Human Rights Watch 2013). As with Syrians living in Izmir, the intersection of place and ethnicity made a difference, but this time it was a hindrance rather than a help. For many Afghans living in Iran, particularly those from the ethnic Hazara minority, experiences of severe discrimination, the absence of citizenship rights and a lack of education for children, combined with anxieties about what would happen to them if they were to return to Afghanistan, influenced the decision to leave. They told us about long and often soul-searching discussions with family members whilst they tried to work out what to do.
One young man, let's call him Khalil, was just five years old when he left Afghanistan with his family. He doesn't remember much about his time there but he recalls all too clearly the difficulties of life in Iran: moving between cities in the search for security and opportunities to rebuild a life, the failed attempts to become an engineer, the harassment and discrimination. Life for the family was hard without papers establishing their right to be in Iran. Khalil met an Iranian and together they opened a garage, but when Khalil's friend left, his business was closed down by a rival garage owner who knew he didn't have a work permit. Khalil took up construction work, the only option available to him, to support the family and in particular his mother, who was suffering from poor health and needed to make frequent, expensive trips to the hospital. 'An Afghan can only become a manual worker in Iran', he told us, 'all the dirty jobs are done by Afghans, and their salaries are much lower than the Iranians'. Worse still, he said, there are no rights, no freedoms: Afghans don't have a right to drive a motorcycle or a car. You cannot buy a SIM card if you are an Afghan in Iran. And you don't have the right to go to the flea market if you are an Afghan in Iran.
And then there was the violence: 'Iranians treat Afghans as if they are animals. I was stabbed twice while working at a construction site in Iran.' But it was only when Khalil talked about his fiancée and their desire to get married that his sense of hopelessness became apparent. 'Our lives', he said, 'slipped through our hands in Iran'. For some, this sense of hopelessness was accompanied by fear: 'In Iran I was afraid to go out. They are treating Afghans as if they are dogs' (Afghan man aged 32 travelling with his sister and her husband). It is clear from these stories that respondents had multi-layered lives, ranging from everyday family and social lives to experiences of violence and insecurity. Feelings about place may relate directly to the decision to move on (as shown here), or may exist independently of decisions about the physical journey (as shown earlier). In either case, it is clear that to fully analyse migration as a social process, we need to engage more meaningfully with experiences in the places typically dismissed as 'in-between'.
More than 'stepping stones': why place matters
Depicting the countries in which refugees and other migrants had lived prior to their arrival in Europe as 'in-between' conceals not just the multi-layered meanings that migrants attach to place. It also obscures the histories, cultures, economies and social lives of places that influence patterns of Afghan, Syrian and Nigerian migration and their experiences. In 2009, two years before the overthrow of the dictator Mu'ammar Gaddafi, Libya hosted on its territory 2.5 million migrants, coming mostly from Africa but also from countries as far away as Bangladesh and the Philippines (Toaldo 2015). In 2015, Turkey and Iran were among those countries hosting the largest numbers of refugees in the world (UNHCR 2016). Yet these factors were largely presented as incidental in Europe's so-called migration crisis.
Turkey, Libya and Iran are historically countries of immigration. For instance, contrary to the way Turkey was perceived by EU policy-makers as a stepping stone to Europe, the previous two decades of economic growth meant that it had become a country of immigration in its own right (Düvell 2014; Içduygu and Yükseker 2012). And while the city of Izmir was portrayed by the media as a smugglers' transit hub (BBC 2016), it was also a thriving tourist destination with a strong local economy which attracted workers from across Turkey as well as regionally, including from Syria. Moreover, since the 1980s, Izmir had offered a home for internally displaced and refugee Kurdish peoples and, by 2015, was home to an estimated 74,000 Syrians (Yildiz and Uzgoren 2016). These points of connection between people and places are reflected in the accounts we heard about how Syrian people came to be in Turkey, and in Izmir, in the first place. It is these factors which underpinned the strong social networks and available jobs that helped Syrian refugees move to and settle in Izmir.
For some, journeys from Syria to Turkey had begun years before the conflict began, illustrating the strength of the historic cross-border ties between the two countries. Many refugees, regardless of their ethnic background, had relatives in Turkey or had previously visited, worked in or owned businesses in Turkey. For this group, the journey to Turkey was a continuation of ongoing cross-border mobility both before and after the onset of conflict in Syria: 'I know Turkey well because I worked in Turkey and before I was often travelling from Kobane to Turkey in the last three years. I worked in Bursa, Istanbul, Malatya and saved some money' (Syrian man, 26). A survey conducted in Turkey found that Syrian refugees felt close to Turks and viewed Turkey as a cultural as well as geographical neighbour (Erdogan 2018). Cultural familiarity was reflected upon by respondents in multiple ways, including in relation to a shared religion: 'Turkey is a Muslim country, so we prefer to stay here' (Syrian man, 43). At the same time, the relative security of Izmir and a shared culture were contrasted with the insecurity, both physical and cultural, that respondents felt about what life would be like in Europe: 'I hope to stay in Turkey … I never thought to go to Europe. They are not close to my culture and life. Moreover, the future there is unclear in Europe. Where will they put us? To house, camp or prison? I feel safe here, I wish to stay in Turkey' (Syrian Kurd, 29). Similarly, Nigerian respondents followed a well-trodden trail to Libya which, until the fall of Gaddafi, had been a thriving oil autocracy dependent on migrant labour (Bredeloup and Pliez 2011). Nigerian labour migrants had long worked alongside Filipinos, Bangladeshis and Brits in construction, hospitality, teaching and healthcare: 'One day a friend said we could go to Libya, that there is a lot of work there. I thought if I make money I can build a house to go back to … So I moved to Libya' (Nigerian man, 26). Iran, meanwhile, is not, and never has been, primarily a 'transit country' for Afghans. In 2015 Iran was the fourth largest refugee-hosting country in the world, with nearly one million registered Afghan refugees and an estimated two million more who are undocumented (UNHCR 2016). Many arrived following the US-led invasion of 2001 but there is a long history of migration to the country (PRIO 2004). Whilst some of our respondents had passed quickly through the country, this was not the case for most.
'In-between' or 'in-place'? Everyday life beyond migration
This article has highlighted the multiple ways in which a neglect of the 'in-between', reflected and reinforced by the concept of 'transit migration', serves to silence the experiences of those living elsewhere, including the ways in which immobility is both enforced and chosen. People living outside their countries of origin are not simply 'passing time' waiting for the opportunity to move on, to Europe or elsewhere. They live, love and work in the places where they reside, in turn contributing to, and in some cases transforming, those spaces and their associated social, economic and political processes (Bredeloup 2012; Choplin and Lombard 2013). The decisions they take about whether or not to move on are embedded in the lives that are lived, the relationships that are formed and the opportunities that arise. Places have multiple meanings for people at different points in both their physical and metaphorical (life) journeys. Whilst we have drawn on the accounts of migrants who had lived in Turkey, Libya and Iran, we could equally have included migrants' narratives about life in other places which formed part of their physical journeys.
The evidence in this article contributes to how we think about migrant journeys in two important ways. Firstly, it challenges us to think very carefully about the people with whom we choose to conduct our research, how we define and categorise our respondents, the places where research is undertaken and the questions asked. Migration research always runs the risk of forgetting about significant immobilities, focusing on the people who move rather than those who stay (Kaytaz 2016; Schapendonk 2012). Focusing exclusively, or primarily, on people who move or move on limits our understanding of the lives of migrants and the physical journeys they take. This, in part, is because the labelling of a place as 'in-between', and of those who leave it as having been 'in transit', only makes sense post-departure. For those who remain, by choice or otherwise, a life is made 'in-place'. Understanding the lives of migrants 'in-place' draws attention to the structural contexts within which migrant (and other) identities are formed and the ways in which refugees and other migrants, as human beings as well as people who have moved, find ways to exert agency and control over their lives even in conditions of hardship and insecurity. At the same time, asking questions only about the physical journey means that researchers miss the multi-layered meanings which respondents attach to place beyond the decision to move or not move. Ultimately, they fail to analyse migration as a social process.
Secondly, the focus on what happens 'in-place' moves the gaze away from countries of origin and (assumed) destination and, in so doing, challenges political and policy assumptions about the linearity of physical migrant journeys. It is clear that the EU policy response to the so-called migration crisis was underpinned by flawed assumptions about the nature of journeys to Europe (Crawley and Hagen-Zanker 2019). Our argument here is that the representation of countries outside Europe as places of 'stuckness' and 'un-being' not only limits our understanding of migrant journeys but feeds into anti-immigration discourses across the countries of the Global North. These discourses are increasingly harnessed by the countries of the Global South, particularly those labelled as 'transit countries', to leverage political and financial power, often at the expense of migrants themselves (Cherti and Grant 2013). Reconceptualising the 'in-between' as a place in which people live, and not merely pass through, opens up the possibility of alternative policy approaches beyond the border control processes on which contemporary policy-making is predominantly focused, for example addressing structural inequalities that undermine life for migrants and their families in the countries to which they move, including access to protection and the right to work. It also enables migrants to construct and narrate their own meanings and experiences of their own journeys (Kaiyoorawongs 2016; BenEzer and Zetter 2014).
ORCID
Heaven Crawley http://orcid.org/0000-0003-2437-5889
"year": 2021,
"sha1": "d60bc06e092ed95291fe11acd153221eadc39a1c",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/1369183X.2020.1804190?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "f9f64b110ccc488eb7f07d9f0872fdcaf6d8c0cb",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
255987172 | pes2o/s2orc | v3-fos-license | High expression of Collagen Triple Helix Repeat Containing 1 (CTHRC1) facilitates progression of oesophageal squamous cell carcinoma through MAPK/MEK/ERK/FRA-1 activation
Oesophageal cancer is one of the most common malignancies worldwide, and oesophageal squamous cell carcinoma (ESCC) is the predominant histological type both globally and in China. Collagen triple helix repeat containing 1 (CTHRC1) has been found to be upregulated in ESCC. However, its role in tumourigenesis and progression of ESCC remains unclear. Using our previous ESCC mRNA profiling data, we screened upregulated genes to identify those required for proliferation. Immunohistochemistry was performed to determine the level of CTHRC1 protein expression in 204 ESCC patients. Correlations between CTHRC1 expression and clinicopathological characteristics were assessed. In addition, pyrosequencing and 5-aza-dC treatment were performed to evaluate the methylation status of the CTHRC1 promoter. In vitro and in vivo analyses were also conducted to determine the role of CTHRC1 in ESCC cell proliferation, migration and invasion, and RNA sequencing and molecular experiments were performed to study the underlying mechanisms. Based on mRNA profiling data, CTHRC1 was identified as one of the most significantly upregulated genes in ESCC tissues (n = 119, fold change = 20.5, P = 2.12E-66). RNA interference screening also showed that CTHRC1 was required for cell proliferation. Immunohistochemistry confirmed markedly high CTHRC1 protein expression in tumour tissues, and high CTHRC1 expression was positively correlated with advanced T stage (P = 0.043), lymph node metastasis (P = 0.023), TNM stage (P = 0.024) and poor overall survival (P = 0.020). Promoter hypomethylation at cg07757887 may contribute to increased CTHRC1 expression in ESCC cells and tumours. Forced overexpression of CTHRC1 significantly enhanced cell proliferation, migration and invasion, whereas depletion of CTHRC1 suppressed these cellular functions in three ESCC cell lines and xenografts. CTHRC1 was found to activate FRA-1 (Fos-related antigen 1, also known as FOSL1) through the MAPK/MEK/ERK cascade, which led to upregulation of cyclin D1 and thus promoted cell proliferation. FRA-1 also induced snail1-mediated MMP14 (matrix metallopeptidase 14, also known as MT1-MMP) expression to facilitate ESCC cell invasion, migration, and metastasis. Our data suggest that CTHRC1 may act as an oncogenic driver in progression and metastasis of ESCC, and may serve as a potential biomarker for prognosis and personalized therapy.
Background
With an estimated 455,800 new cases and 400,200 deaths each year, oesophageal cancer is the sixth leading cause of cancer death and the eighth most common cancer worldwide [1]. Oesophageal squamous cell carcinoma (ESCC) is the predominant histological type both in China and around the world. Despite advancements in population screening and standardized multidisciplinary treatment over the last four decades [2], our previous report showed that ESCC remains the fourth leading cause of cancer-related death in China [3], with a dismal 5-year survival rate of only 20.9% [4]. The poor outcome of patients is primarily attributed to the high rate of ESCC metastasis, including both regional lymph node and further distant metastases [5]. Therefore, it is vital to identify the underlying molecular mechanisms that drive progression, and especially metastasis, of ESCC in order to predict patients' prognosis and improve the rational design of personalized medicine.
Collagen triple helix repeat containing 1 (CTHRC1) is a secreted glycoprotein that can reduce collagen matrix deposition and promote the mobility of fibroblasts and smooth muscle cells [6][7][8]. Indeed, overexpression of CTHRC1, which has been reported in various malignancies, is suggested to serve as an independent prognostic factor [9][10][11][12]. Recently, a germline mutation in CTHRC1 gene was identified to be associated with Barrett's oesophagus and oesophageal adenocarcinoma [13], and high CTHRC1 expression in ESCC was revealed by expression profiling studies involving a small number of cases [14,15]. However, these findings should be confirmed in studies including larger groups and the cellular function and clinical implications of CTHRC1 in ESCC need to be resolved.
In this study, we sought to confirm in a much larger cohort aberrant elevated expression of CTHRC1 and to investigate its association with clinicopathological characteristics in ESCC. To assess the effect of CTHRC1 on malignant phenotypes of ESCC cells in vitro and in vivo, we then established multiple cell lines with stable depletion or overexpression of CTHRC1. Furthermore, we defined the underlying signalling pathways and transcription factors that depend on CTHRC1 activation and are responsible for ESCC progression.
Patients and tissue specimens
The study design and use of clinical samples were approved by the Ethics Committee of Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College. A total of 204 formalin-fixed and paraffin-embedded (FFPE) ESCC tissue samples were obtained with informed consent and agreement from the biobank of Cancer Hospital of Chinese Academy of Medical Sciences. From 2000 to 2008, specimens were surgically resected from patients with stage I-III ESCC and who did not receive preoperative treatment. The clinicopathological characteristics of these patients are summarized in Table 1. Five tissue microarrays (TMAs) were constructed by incorporating one representative core of each tissue. The microarrays contained 204 primary ESCC tumour tissues, 169 of which were accompanied by adjacent non-tumour epithelial tissues.
Cell culture
All cell lines used in this study were regularly authenticated by short tandem repeat (STR) profiling. KYSE510, KYSE30, KYSE450, KYSE180 and KYSE70 cells were cultured in RPMI 1640 medium supplemented with 10% foetal bovine serum, 100 UI/ml penicillin and 100 UI/ml streptomycin (Gibco, USA). Het1a, a non-malignant immortalized human oesophageal squamous cell line, was cultured in BEGM (Bronchial Epithelial Cell Growth) medium (Lonza, USA). All cell lines were maintained in a humidified incubator at 37°C and 5%CO2.
Real-time PCR (RT-PCR)
RT-PCR was performed as previously described [17]. The primers used are listed in Additional file 1: Table S1.
Western blot
Whole cell lysates were prepared using RIPA buffer supplemented with protease and phosphatase inhibitor cocktail (Thermo, USA) and culture supernatants were concentrated using Microcon centrifugal filters (Millipore, USA). Western blot was performed as previously described [17].
Cell proliferation and colony formation assays
Cell proliferation and colony formation assays were performed as previously described [18]. Cell proliferation was assessed using Cell Counting Kit-8 (CCK8). Images of the colony formation assay results were scanned and the clone number was determined using GeneSys software (Genecompany, China).
Boyden chamber Transwell assay
For invasion and migration assays, we used 24-well Boyden chambers precoated with or without Matrigel matrix (Corning, USA), respectively. The experiments were performed as previously described [19].
Xenograft model and lung metastasis model
All mice used in this study received humane care, and all animal experiments were performed in accordance with the guidelines approved by the Institutional Animal Care and Use Committee of Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College. BALB/c-nu mice and non-obese diabetic (NOD)-SCID mice (female, 4-5 weeks old) were purchased from Huafukang (Beijing, China). For the xenograft model, shRNA-or vector-transfected KYSE510 cells and CTHRC1-or vector-transfected KYSE450 cells were injected into the right dorsal flanks of BALB/c-nu mice (5 × 10^6 cells per animal, 8 mice per group). Tumour formation was monitored every 5 days by measuring tumour size with a calliper. The tumour volume was calculated using the formula: V = (L × W^2)/2. After 4 weeks, all mice were sacrificed, and the tumours were excised and weighed. For the lung metastasis model, shRNA-or vector-transfected KYSE510 cells and CTHRC1-or vector-transfected KYSE450 cells were injected into NOD-SCID mice through the tail vein (1 × 10^6 cells per animal, 8 mice per group). Ten weeks later, the mice were sacrificed, and the lungs were excised and fixed with Bouin's solution followed by embedding in paraffin for haematoxylin and eosin (H&E) staining. The number of lung surface metastatic nodules was evaluated by gross and microscopic examination.
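As a simple illustration of the calliper-based volume estimate used above, the following Python snippet (a hypothetical helper, not part of the original study code) evaluates V = (L × W^2)/2 for a measured xenograft.

def tumour_volume(length_mm, width_mm):
    """Ellipsoid-type approximation used for calliper measurements: V = (L x W^2) / 2."""
    return length_mm * width_mm ** 2 / 2.0

# Example: a 12 mm x 8 mm xenograft gives 384 mm^3
print(tumour_volume(12.0, 8.0))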
RNA sequencing
RNA sequencing was performed using KYSE510-shCTHRC1 and KYSE510-vector cells. Total RNA extraction, quality analysis, cDNA library preparation and sequencing were performed at Novogene (Beijing, China). Raw RNA sequences were mapped to the GRCh37.hg19 genome based on TopHat and assembled using Cufflinks. Relative transcript levels are expressed as "fragments per kilobase of transcript per million mapped" (FPKM). Differentially expressed genes (DEGs) were identified using Cuffdiff. To verify the RNA sequencing data, we assessed the transcriptional level of twenty genes using RT-PCR (Additional file 2: Table S2).
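The FPKM normalisation mentioned above can be written compactly; the sketch below is an illustrative Python implementation of the standard FPKM definition (fragment counts scaled by transcript length and library size), not the authors' TopHat/Cufflinks pipeline, and the example numbers are invented.

def fpkm(fragments, transcript_length_bp, total_mapped_fragments):
    """Fragments Per Kilobase of transcript per Million mapped fragments."""
    return fragments * 1e9 / (transcript_length_bp * total_mapped_fragments)

# Example: 500 fragments on a 2,000 bp transcript in a library of 20 million mapped fragments -> 12.5
print(fpkm(500, 2000, 20000000))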
Statistical analysis
Statistical analyses were performed using GraphPad Prism version 6.0 (GraphPad Software Inc., San Diego, USA). Correlations between mRNA expression levels were analysed using Pearson's correlation coefficient. A chi-square test was performed to determine the association between clinicopathological variables and CTHRC1 expression. Survival analysis was carried out using a log-rank test. A Cox proportional hazards model was used to identify independent prognostic factors. The significance of differences between groups was analysed using two-tailed Student's t-test and the results are expressed as the mean ± SD. Differences were considered significant when P < 0.05. *P < 0.05 and **P < 0.01.
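For readers who want to reproduce the basic statistical comparisons (chi-square test for categorical associations, two-tailed t-test for group means, Pearson correlation for expression levels), a minimal Python/SciPy sketch is given below; the contingency table and measurement values are hypothetical placeholders, not data from this study.

import numpy as np
from scipy import stats

# Hypothetical 2x2 table: CTHRC1 expression (low/high) vs. lymph node metastasis (absent/present)
table = np.array([[60, 40],
                  [45, 59]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Two-group comparison of a continuous readout (e.g. colony counts), reported as mean +/- SD
control = np.array([102.0, 98.0, 110.0])
knockdown = np.array([61.0, 70.0, 66.0])
t_stat, p_t = stats.ttest_ind(control, knockdown)

# Pearson correlation between two mRNA expression vectors (placeholder values)
x = np.array([1.2, 2.4, 3.1, 4.0, 5.2])
y = np.array([0.9, 2.0, 2.8, 4.4, 5.0])
r, p_r = stats.pearsonr(x, y)

print(p_chi2, p_t, r)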
Endogenous expression levels of CTHRC1 in the ESCC cell lines were higher than in an immortalized oesophageal epithelium cell line (Additional file 4: Figure S2). IHC showed slight cytoplasmic staining of CTHRC1 protein in normal oesophageal epithelial cells, whereas moderate to strong staining in the cytoplasm and extracellular space was observed in most ESCC tumour tissues (Fig. 1b). Compared to matched nontumor tissues, 94% (158/169) of tumour tissues exhibited stronger staining of CTHRC1 (Fig. 1c). Therefore, we focused on the role and mechanism of CTHRC1 in ESCC progression in this study.
High expression of CTHRC1 in ESCC tumour tissue predicts poor prognosis
As CTHRC1 is almost universally overexpressed in tumour tissue compared to normal oesophageal epithelial tissue, we divided the sample set into two groups based on the CTHRC1 expression level (low or high) in tumour tissues and examined significant differences in clinicopathological characteristics between these two groups (Table 1). Notably, higher expression of CTHRC1 was significantly associated with advanced T stage (P = 0.043, chi square test), lymph node metastasis (P = 0.023, chi square test) and TNM stage (P = 0.024, chi square test). Patients exhibiting a high level of CTHRC1 expression had shorter overall survival than those with low CTHRC1 expression according to both Kaplan-Meier analysis (P = 0.020, log-rank test; Fig. 1d) and univariate Cox regression analysis (Table 2). However, CTHRC1 expression was not independently associated with overall survival by multivariate Cox regression analysis in this cohort after adjustment for age, histology grade, lymph node metastasis and TNM stage ( Table 2).
Promoter hypomethylation may participate in upregulation of CTHRC1
Previous genomic studies of ESCC did not show significant evidence of CTHRC1 gene amplification [21]. We therefore assessed whether promoter hypomethylation contributes to the elevated expression of CTHRC1 in ESCC.
Methylation array profiling of paired ESCC tissues revealed significantly lower methylation of cg07757887 (−1220 bp in the CTHRC1 genomic region) in ESCC tumour tissues compared with non-tumour tissues (n = 67, Δβ = −0.19; FDR = 1.34E-23, unpublished data). Pyrosequencing was performed for further validation, and as expected, cg07757887 methylation was significantly lower in ESCC tumour tissues compared with corresponding non-tumour tissues (n = 50, P < 0.0001, t-test, Fig. 2a), with 92% (46/50) of tumour tissues showing CTHRC1 hypomethylation (Fig. 2b). With the exception of KYSE510 cells, treatment with the DNA methyltransferase inhibitor 5-aza-dC resulted in dramatically increased CTHRC1 mRNA expression and protein production in five ESCC cell lines (Fig. 2c, d). Moreover, pyrosequencing confirmed distinctly increased methylation of cg07757887 in these cell lines except for KYSE510 cells (Fig. 2e), supporting the notion that CTHRC1 expression may be closely associated with promoter methylation in ESCC.
CTHRC1 promotes ESCC cell proliferation and tumour growth in vitro and in vivo
To investigate the effect of CTHRC1 on the malignant phenotypes of ESCC cells, we established cell models with CTHRC1 depletion or overexpression using three ESCC cell lines, and verified changes in expression by RT-PCR and western blot analyses (Fig. 3a, b). CTHRC1 depletion significantly attenuated cell proliferation and colony formation in KYSE510 and KYSE30 cells. Consistently, KYSE450 cells overexpressing CTHRC1 exhibited a significantly higher proliferation rate and colony formation capacity compared with KYSE450 cells transfected with the empty vector (Fig. 3c-e).
In agreement with the in vitro data, tumour size and weight were markedly reduced in the KYSE510-ShCTHRC1 group compared with the vector group (P < 0.0001, t-test, Fig. 3f), and KYSE450 cells with enhanced CTHRC1 expression formed significantly larger and heavier tumour xenografts compared to vector cells (P = 0.0002, t-test, Fig. 3g). Taken together, these results support the notion that CTHRC1 expression is critical for cell proliferation and tumour growth both in vitro and in vivo.
Fig. 2 Promoter methylation is involved in regulating CTHRC1 expression in ESCC. a Promoter methylation of CTHRC1 in paired tissue specimens from 50 ESCC patients was detected by pyrosequencing assay. P < 0.0001, paired two-tailed Student's t-test. b The number and percent of patients with higher or lower promoter methylation of CTHRC1 in ESCC tumour tissues compared with non-tumour tissues. T: tumour tissue; N: non-tumour tissue. c RT-PCR was performed to analyse the mRNA level of CTHRC1 in ESCC cells treated or not with the demethylating agent 5-aza-dC (10 μM) for 72 h. The significance of difference between groups was analysed using two-tailed Student's t-test. Data are presented as the mean (n = 3) ± SD. *P < 0.05 and **P < 0.01. d Western blot was performed to detect the protein level of CTHRC1 in culture supernatants of cells with or without 5-aza-dC treatment. e Promoter methylation of CTHRC1 in ESCC cells with and without 5-aza-dC treatment was detected using a pyrosequencing assay. The results are shown as the mean ± SD. The significance of the difference between groups was analysed using two-tailed Student's t-test. *P < 0.05 and **P < 0.01
CTHRC1 promotes migration and invasion of ESCC cells in vitro and in vivo
To investigate the effect of CTHRC1 on the migration and invasion of ESCC cells, we conducted Boyden chamber Transwell assays. Overall, compared to cells transfected with the empty vector, migratory and invasive capacities were significantly suppressed in KYSE510 and KYSE30 cells with CTHRC1 knockdown (Fig. 4a, b) and remarkably enhanced in KYSE450 cells with CTHRC1 overexpression (Fig. 4c). A lung metastasis model in NOD-SCID mice constructed via tail vein injection of cells showed a significantly lower incidence of and fewer pulmonary metastasis nodules in mice with KYSE510-ShCTHRC1 cell injection than in the control group (Fig. 4d). On the other hand, the incidence and number of pulmonary metastatic nodules in mice with KYSE450-CTHRC1 cell injection were higher than in the control group (Fig. 4e).
CTHRC1 facilitates ESCC cells aggressiveness primarily via activation of the MAPK/MEK/ERK pathway
We next explored the downstream signalling pathways responsible for CTHRC1-mediated ESCC cell aggressiveness using RNA sequencing with KYSE510 cells carrying ShCTHRC1 or the empty vector. A total of 3430 significantly upregulated (more than 2-fold) and 4377 downregulated (less than 50%) genes were selected for Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis [22]. The results indicated the PI3K-Akt and MAPK pathways as the top two pathways most significantly affected by CTHRC1 knockdown (Fig. 5a). Western blot verified that Akt phosphorylation was decreased in CTHRC1-depleted KYSE510 and KYSE30 cells, and increased in CTHRC1-overexpressing KYSE450 cells, but the changes were relatively minor (Additional file 5: Figure S3). We observed phosphorylation of core members of the classical MAPK pathway, c-raf, MEK1/2 and ERK1/2, to be strongly decreased in KYSE510-ShCTHRC1 cells and KYSE30-ShCTHRC1 cells, and increased in KYSE450-CTHRC1-overexpressed cells compared with their control cells (Fig. 5b, c). Furthermore, treatment with a MEK1/2 inhibitor (U0126 at 10 μM) significantly reversed CTHRC1-induced proliferation, migration and invasion of KYSE450 cells (Fig. 5d-f), indicating that MAPK/MEK/ERK activation may underlie the phenotypes induced by CTHRC1 in ESCC cells.
FRA-1 is the principal effector mediating activation of MAPK/MEK/ERK by CTHRC1 and upregulation of cyclin D1 and snail1/MMP14 expression
RNA sequencing analysis revealed FOSL1 (Fos-related antigen 1, also known as FRA-1), an extensively studied MAP kinase target [23][24][25], to be among the most significantly downregulated genes (reduced by approximately 80%); CCND1 (cyclin D1) (reduced by ~90%), SNAI1 (snail1) (reduced by ~60%) and a known target of snail1, MMP14 (matrix metallopeptidase 14, also known as MT1-MMP) (reduced by ~90%) [26][27][28], were also significantly downregulated. Their dependency on CTHRC1 was repeatedly confirmed by RT-PCR and western blot in KYSE510 and KYSE30 cells with depleted CTHRC1 expression and in KYSE450 cells with CTHRC1 overexpression (Fig. 6a-c). Furthermore, administration of the MEK1/2 inhibitor U0126 abolished the increased phosphorylation of FRA-1 and the increased protein levels of FRA-1, cyclin D1, snail1, and MMP14 induced by enhanced expression of CTHRC1 in KYSE450 cells (Fig. 6c).
Fig. 5 a Top-ranked KEGG pathway terms using DAVID. b Western blot was conducted to detect the protein levels of CTHRC1, c-raf, MEK1/2, ERK1/2 and phosphorylation of c-raf, MEK1/2, ERK1/2 in CTHRC1-knockdown KYSE510 and KYSE30 cells and corresponding vector control cells. c Western blot was conducted to detect the protein levels of CTHRC1, c-raf, MEK1/2, ERK1/2 and phosphorylation of c-raf, MEK1/2, ERK1/2 in CTHRC1-overexpressing KYSE450 cells and vector control cells. d-f KYSE450 with enhanced CTHRC1 expression and vector control cells were treated with the MEK inhibitor U0126 (10 μM) or dimethyl sulfoxide (DMSO). d Cell viability was measured using the CCK8 assay. e Colony formation assays were performed to measure the clonogenic capacity of cells. f Migration and invasion of cells were investigated using transwell assays. The results are shown as the mean ± SD. The difference between groups was analysed using two-tailed Student's t-test. *P < 0.05 and **P < 0.01
Moreover, knockdown of FRA-1 using siRNA reversed the promotion of proliferation, migration and invasion by CTHRC1 (Fig. 6d, e), and western blot showed that knockdown of FRA-1 attenuated the upregulation of cyclin D1, snail1, and MMP14 in KYSE450 cells induced by CTHRC1 overexpression. Additionally, knockdown of snail1 reversed the increased expression of MMP14 induced by CTHRC1 (Fig. 6f ), indicating that cyclin D1 and snail1 were downstream effectors of FRA-1; in turn, snail1 induced high expression of MMP14 in CTHRC1overexpressing ESCC cells (Fig. 6g).
Consistent with the above in vitro studies, the level of CTHRC1 mRNA was significantly positively correlated with those of CCND1 and SNAI1 in ESCC tumour tissues (n = 119, r = 0.227, P = 0.013; r = 0.550, P < 0.0001, Pearson's correlation coefficient; Fig. 7a, b). A significant positive correlation between SNAI1 and MMP14 mRNA levels was also found (r = 0.318, P = 0.0004, Pearson's correlation coefficient; Fig. 7c). Moreover, expression of CTHRC1 was positively associated with that of cyclin D1, as well as MMP14, at the protein level (n = 204, P = 0.018, chi-square test; P = 0.022, chi-square test; Table 1). In addition, Kaplan-Meier analysis revealed a significant association between shorter overall survival in ESCC patients and high expression of cyclin D1 (n = 204, P = 0.049, log-rank test, Fig. 7d) or MMP14 (n = 196, P = 0.0062, log-rank test; Fig. 7e).
Fig. 6 (legend, in part) The protein levels of CTHRC1, p-FRA-1, FRA-1, cyclin D1, snail1 and MMP14 were detected by western blot. d, e CTHRC1-overexpressing KYSE450 cells and control cells were transfected with FRA-1 siRNA or negative control (NC) siRNA. d Cell viability was measured using the CCK8 assay. e Migration and invasion of cells were investigated using transwell assays. The results are shown as the mean ± SD. The difference between groups was analysed using two-tailed Student's t-test. *P < 0.05 and **P < 0.01. f CTHRC1-overexpressing KYSE450 cells were transfected with FRA-1 siRNA or snail1 siRNA. The protein levels of CTHRC1, FRA-1, cyclin D1, snail1 and MMP14 in KYSE450 cells with different treatment were detected. g Schematic diagram illustrating the proposed CTHRC1-mediated activation of ERK1/2 signalling pathway and its role in ESCC cells
Discussion
This is the first study to present a comprehensive set of clinical and experimental evidence establishing CTHRC1 as an oncogenic factor that facilitates ESCC tumour progression and metastasis, resulting in poor prognosis.
These data indicate that CTHRC1 may serve as a potential prognostic biomarker and treatment target in ESCC.
We also investigated the possible regulation mechanism of CTHRC1 in ESCC. Treatment with a demethylation agent (5-aza-dC) markedly elevated CTHRC1 expression in most ESCC cell lines, which was in agreement with previous reports [16,29,30]. A pyrosequencing assay revealed a CpG site (cg07757887, -1220 bp in the CTHRC1 genomic region) hypomethylated in ESCC tumour tissues, which has not been previously reported as being related to cancer. Although demethylation of the CTHRC1 genomic region (-391 to +4 bp) in gastric cancer cells [30], in the first exon in colon cancer [16], and at -628 to -269 of the promoter region in hepatocellular carcinoma [29] has been reported, we did not find significant demethylation at those CpG sites in ESCC tumour tissues. Therefore, methylation of the CpG site involved in regulating CTHRC1 may vary in different types of cancer.
Fig. 7 mRNA and protein level of CTHRC1, FRA-1, cyclin D1, snail1 and MMP14 in ESCC tissues. a Correlation between CTHRC1 and CCND1 according to transcriptome-wide microarray profiling data (n = 119). Pearson's correlation coefficient. b Correlation between CTHRC1 and SNAI1 according to transcriptome-wide microarray profiling data (n = 119). Pearson's correlation coefficient. c Correlation between SNAI1 and MMP14 according to transcriptome-wide microarray profiling data (n = 119). Pearson's correlation coefficient. d Representative IHC images of FRA-1, snail1, cyclin D1 and MMP14 staining in ESCC tumour tissues. e, f Overall survival analysis based on the level of cyclin D1 and MMP14 expression, as measured by IHC, in ESCC patients. Survival rates were determined using Kaplan-Meier survival analysis
Previous reports have suggested that other mechanisms may be involved in regulation of CTHRC1, such as TGF-β and Wnt3a pathway activation in gastric and oral squamous cell carcinoma, respectively [30,31]. In addition, CTHRC1 was reported to be regulated by microRNA and long noncoding RNAs, such as let-7b and MALAT-1 [32,33], which might explain the oncogenic role of MALAT-1 in ESCC [34]. Evidence to date supports the hypothesis that CTHRC1 integrates multiple pro-aggressiveness signalling pathways.
We also reveal for the first time that CTHRC1 exerts its effect on ESCC progression mainly through the Raf/MEK/ERK pathway, with dependence on the induction and activation of FRA-1, a FOS family transcription factor that binds to JUN-family proteins to form the AP-1 complex [35]. Transcriptional induction and post-translational stabilization of FRA-1 via MEK/ERK signalling increases the abundance of FRA-1, which has been causally linked to more aggressive behaviours of multiple cancer cell types [36][37][38][39][40], although not previously through a CTHRC1-dependent pathway.
There has been accumulating evidence for the significant role of the MEK/ERK pathway in cancer development [41][42][43][44]. In accordance with the results of our study, the MEK/ERK pathway has been related to CTHRC1 in pancreatic cancer, without identification of any downstream effectors [45]. Another study suggested that CTHRC1 upregulated MMP9 via ERK activation in colorectal cancer [16]; however, alteration in MMP9 expression was not indicated in our RNA sequencing data. Through transcriptome sequencing and extensive step-by-step in vitro analyses, we identified cyclin D1 and snail1 as major downstream effectors of FRA-1, accounting for the CTHRC1-mediated regulation of proliferation and motility in ESCC cells. The most prominent function of snail1 in cancer cells is to induce the epithelial-mesenchymal transition (EMT) [46,47], and it was recently reported that CTHRC1 upregulated snail1 to induce EMT by activating the Wnt/β-catenin signalling pathway in epithelial ovarian cancer [48]. Interestingly, we did not observe any meaningful alteration in β-catenin expression or in hallmarks of EMT [49], namely E-cadherin and vimentin, after CTHRC1 knockdown in ESCC cell lines (Additional file 5: Figure S3), suggesting that an alternative hypothesis is needed to explain the findings for ESCC. Indeed, a few recent studies invoked other possible mechanisms by which snail1 could regulate cell migration and invasion, such as MMP14-mediated pro-invasive and metastatic activities [26][27][28].
However, the respective upstream mechanisms were not elucidated. Here, we not only show that MMP14 can be upregulated by snail1 activation, but also demonstrate it under regulation of CTHRC1/MAPK/MEK/ERK/FRA-1 signalling in ESCC.
It should be acknowledged that there was one limitation related to this study: we did not clarify how CTHRC1 activates the MAPK/MEK/ERK pathway. It was recently demonstrated that EGFR inhibitors attenuated the promoting effect of CTHRC1 on epithelial ovarian cancer invasion and that phosphorylation of EGFR and ERK1/2 was reduced in CTHRC1-silenced ovarian cancer cells [50]. Since CTHRC1 is a secreted protein, it is worth investigating in future studies whether CTHRC1 acts as a ligand of EGFR to activate the MAPK/MEK/ERK pathway in ESCC.
Conclusions
In summary, our findings reveal that CTHRC1 plays a pivotal oncogenic role in ESCC proliferation, invasion, and metastasis by upregulating cyclin D1, snail1 and MMP14 through the Raf/MEK/ERK/FRA-1 pathway. Patients with high expression of CTHRC1 are possible candidates for biologic agents that affect the oncogenic circuit we found in ESCC, such as MEK1/2 inhibitors and CDK inhibitors. Additionally, the newly elucidated clinical implications of CTHRC1 in our cohort support its use as a potential prognostic marker for ESCC patients. | 2023-01-19T21:27:30.562Z | 2017-06-23T00:00:00.000 | {
"year": 2017,
"sha1": "c09f14a4c3c71e25f64491fcd66a4b98c28e9507",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13046-017-0555-8",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c09f14a4c3c71e25f64491fcd66a4b98c28e9507",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
33152730 | pes2o/s2orc | v3-fos-license | Mitigation of nonlinear transmission effects for OFDM 16-QAM optical signal using adaptive modulation
The impact of the fiber Kerr effect on error statistics in the nonlinear (high power) transmission of the OFDM 16-QAM signal over a 2000 km EDFA-based link is examined. We observed and quantified the difference in the error statistics for constellation points located at three power-defined rings. Theoretical analysis of a trade-off between redundancy and error rate reduction using probabilistic coding of three constellation power rings decreasing the symbol-error rate of OFDM 16-QAM signal is presented. Based on this analysis, we propose to mitigate the nonlinear impairments using the adaptive modulation technique applied to the OFDM 16-QAM signal. We demonstrate through numerical modelling the system performance improvement by the adaptive modulation for the large number of OFDM subcarriers (more than 100). We also show that a similar technique can be applied to single carrier transmission. © 2017 Optical Society of America OCIS codes: (060.4370) Nonlinear optics, fibers; (060.2330) Fiber optics communications; (060.4080) Modulation. References and links 1. C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J. 27, 379–423, 623–656 (1948). 2. A. Splett, C. Kurtzke, and K. Petermann, “Ultimate transmission capacity of amplified fiber communication systems taking into account fiber nonlinearities,” Proc. of 19th European Conference on Optical Communication (ECOC), MoC2.4 (1993). 3. A. D. Ellis, Z. Jian, and D. Cotter, “Approaching the non-linear Shannon limit,” J. Lightw. Technol. 28(4), 423–433 (2010). 4. D. J. Richardson, “Filing the Light Pipe,” Science 330(6002), 327–328 (2010). 5. E. Temprana, E. Myslivets, B. P.-P. Kuo, V. Ataie, N. Alic, and S. Radic, “Overcoming Kerr-induced capacity limit in optical fiber transmission,” Science 348(6242), 1445–1448 (2015). 6. P. J. Winzer, “Scaling Optical Fiber Networks: Challenges and Solutions,” Opt. Photon. News 26, 28–35 (2015). 7. R. Essiambre, G. Kramer, P. J. Winzer, G. J. Foschini, and B. Goebel, “Capacity limits of optical fiber networks,” J. Lightw. Technol. 28(4), 662–701 (2010). 8. E. Agrell, G. Durisi, and P. Johannisson, "Information-theory-friendly models for fiberoptic channels: A primer". IEEE Information Theory Workshop (2015). 9. E. Agrell, A. Alvarado, G. Durisi, and M. Karlsson "Capacity of a nonlinear optical channel with finite memory," J. Lightwave Technol. 16, 2862-2876 (2014). 10. B. P. Smith and F. R. Kschischang, “A pragmatic coded modulation scheme for high-spectral-efficiency fiber-optic communications,” J. Lightw. Technol. 30(13), 2047–2053 (2012). 11. L. Beygi, E. Agrell, J. M. Kahn, and M. Karlsson, “Rate-adaptive coded modulation for fiber-optic communications,” J. Lightw. Technol. 32(2), pp. 333–343 (2014). 12. M. P. Yankov, D. Zibar, K. J. Larsen, L. P. B. Christensen, “Constellation shaping for fiber-optic channels with QAM and high spectral efficiency,” IEEE Photon. Technol. Lett. 26(23), 2407–2410 (2014). 13. T. Fehenberger, G. Böcherer, A. Alvarado, and N. Hanik, “LDPC coded modulation with probabilistic shaping for optical fiber systems,” Proc. of Optical Fiber Communication Conference (OFC), Th.2.A.23 (2015). 14. T. Fehenberger, D. Lavery, R. Maher, A. Alvarado, P. Bayvel, N. Hanik, “Sensitivity gains by mismatched probabilistic shaping for optical communication systems,” IEEE Photon. Technol. Lett. 28(7), 786–789 (2016). 15. F. Buchali, F. Steiner, G. Bocherer, L. Schmalen, P. Schulte, and W. 
Idler, “Rate adaptation and reach increase by probabilistically shaped 64QAM: An experimental demonstration,” Journal of Lightwave Technology, 34(7), 1599–1609 (2016). 16. C. Diniz, J. H. Junior, A. Souza, T. Lima, R. Lopes, S. Rossi, M. Garrich, J. D. Reis, D. Arantes, J. Oliveira, and D. A. Mello, “Network cost savings enabled by probabilistic shaping in DP-16QAM 200-Gb/s systems,” Proc. Optical Fiber Communication Conference (OFC), Tu3F.7, (2016). 17. C. Pan and F. R. Kschischang, “Probabilistic 16-QAM Shaping in WDM Systems,” J. Lightw. Technol. 34(18), 4285 – 4292 (2016). 18. A. Shafarenko, A. Skidin, and S. K. Turitsyn, “Weakly-constrained codes for suppression of patterning effects in digital communications,” IEEE Trans. Commun. 58(10), 2845–2854 (2010). 19. A. Shafarenko, K. S. Turitsyn, S. K. Turitsyn, “Information-theory analysis of skewed coding for suppression of pattern-dependent errors in digital communications,” IEEE Trans. Commun. 55(2), 237–241 (2007). 20. A. Alvarado, E. Agrell, D. Lavery, R. Maher, and P. Bayvel, “Replacing the Soft-Decision FEC Limit Paradigm in the Design of Optical Communication Systems,” J. Lightw. Technol. 33(20), 4338–4352 (2015). 21. B. Djordjevic, and B. Vasic, “Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems,” Opt. Express 14, 4625-4635 (2006). 22. N. Kashyap, P. H. Siegel, and A. Vardy, “Coding for the optical channel: the ghost-pulse constraint,” IEEE Trans. Inf. Theory 52(1), 64–77 (2006). 23. S. K. Turitsyn, M. P. Fedoruk, O. V. Shtyrina, A. V. Yakasov, A. Shafarenko, S. R. Desbruslais, K. Reynolds, and R. Webb, “Patterning effects in a WDM RZ-DBPSK SMF/DCF optical transmission at 40Gbit/s channel rate,” Opt. Commun. 277(2), 264–268 (2007). 24. B. Slater, S. Boscolo, A. Shafarenko, and S. K. Turitsyn, “Mitigation of patterning effects at 40 Gbits/s by skewed channel pre-encoding,” J. Opt. Netw. 6(8), 984–990 (2007). 25. S. T. Le, M. E. McCarthy, S. K. Turitsyn, “Optimized hybrid QPSK/8QAM for CO-OFDM transmissions,” Proc. of 9th International Symposium on Communication Systems, Networks & Digital Signal Processing (CSNDSP), 763–766 (2014). 26. X. Zhou, L. E. Nelson, P. Magill, R. Isaac, B. Zhu, D. W. Peckham, P. I. Borel, and K. Carlson, “High Spectral Efficiency 400 Gb/s Transmission Using PDM Time-Domain Hybrid 32-64 QAM and Training-Assisted Carrier Recovery,” J. Lightw. Technol. 31(7), 999–1005 (2013). 27. S. O. Zafra, X. Pang, G. Jacobsen, S. Popov, S. Sergeyev, “Phase noise tolerance study in coherent optical circular QAM transmissions with Viterbi-Viterbi carrier phase estimation,” Opt. Express 22(25), 30579–30585 (2014). 28. P. J. Winzer, “High-spectral-efficiency optical modulation formats,” J. Lightw. Technol. 30(24), 3824–3835 (2012).
Introduction
In modern optical fiber links the nonlinear transmission effects are one of the major factors limiting system performance. As opposed to linear channels, where performance degradation due to noise can be mitigated by a signal power increase [1], in optical fiber communications the increased signal power leads to new (nonlinear) sources of distortions and loss of information (see e.g. [2][3][4][5][6][7] and numerous references therein). Exploitation of modern communication systems using a large number of WDM-channels (or super-channels) assumes an increase of the total signal power in the fiber leading to a growing impact of nonlinear transmission effects. The operation of optical communication systems in such nonlinear regimes is rather different from the conventional lower power mode. This calls for the development of new approaches and techniques to better understand the peculiarities of high signal power transmission regimes and the root cause of errors.
From the viewpoint of classical information theory, the important challenge is to evaluate the Shannon capacity of optical communication channels and to find the capacity-achieving input signal distribution [1,[7][8][9]. Most optical systems currently in operation use the uniform input signal distribution with relatively simple alphabets. The performance of such systems can be improved by using more advanced input signal distributions, though this might require additional complexity of the transmitters and receivers. For instance, improvement can be achieved by modifying the shape of the transmitted signal by changing the constellations or by using a non-uniform distribution for the occurrence probability of the symbols in a pre-selected constellation. These two kinds of signal shaping are often distinguished as geometric and probabilistic shaping [10][11][12][13][14][15][16]. Geometric shaping corresponds to non-uniformly distributed constellation points with equiprobable symbols, while probabilistic shaping corresponds to standard uniform constellations with varying probabilities of constellation points. In optical communication, probabilistic shaping can be used to reduce the frequency of occurrence of high-power symbols in order to suppress nonlinear transmission effects. Probabilistic shaping can be performed using modified forward-error correction (FEC) codes [12][13][14]. Forward error correction and modulation methods can be jointly employed to improve practically achievable rates.
Powerful FEC techniques are critically important in modern optical communication systems for meeting ultra-low bit-error rate (BER) requirements. FEC allows optical engineers to operate systems at a relatively high BER that after FEC decoding is improved to BER = 10^-12 or even BER = 10^-15. Most of the FEC techniques are developed for the memoryless and linear additive white Gaussian noise channel. A nonlinear fiber channel is not Gaussian and it has memory that results in inter-symbol interference and patterning effects. At low BER levels the FEC methods can cope with errors due to both the channel noise and patterning effects. However, as the inter-symbol interference (and corresponding patterning effects) grows stronger, at some point (the BER threshold) the FEC scheme starts to deteriorate fast, and it is at that vital point that error prevention becomes an essential issue in maintaining the low-BER operational regimes. Operation close to the BER threshold is not desirable and any additional margins on top of the FEC are vitally important. The application of weakly-constrained codes (skewed coding) [18,19] was proposed to provide extra margin in addition to FEC. In this approach a weakly constrained code is employed to decrease the frequency of occurrence of undesirable patterns in order to reduce the part of the BER that occurs due to inter-symbol interference and, consequently, bring the combined BER back under the FEC break-down threshold. Since the FEC threshold is typically very sharply defined (see an important discussion concerning the BER threshold in [20]), the most economical pre-encoding scheme has to be tuneable: any extra redundancy below the threshold is more effective when it is utilized by the FEC itself. Weakly constrained codes decrease the frequency of occurrence of patterns of certain types. The amount of reduction is defined by a trade-off with the code redundancy and is controllable by a parameter that can be varied almost continuously. This makes this technique [18,19] ideally suited for the control of patterning-effect reduction for the purposes of FEC operation close to the BER threshold.
In this work we examine the power-affected error statistics for both the single-carrier 16-QAM and the multiple-carrier OFDM 16-QAM signal and explore the possibilities to mitigate the nonlinear effects at high signal powers through specific modulation and coding approaches. We propose and apply here the adaptive modulation technique that aims to reduce the impact of the nonlinear transmission effects at high signal powers, when the statistics of errors are affected by nonlinear interactions. Our approach has some similarities with the use of weakly-constrained stream and block codes with tunable pattern-dependent statistics [18,19,[21][22][23][24], rate-adaptive coded modulation [10,11], the hybrid QAM [25,26], and probabilistic signal shaping using low-density parity-check (LDPC) codes [12][13][14]. Our focus here is on the relatively high-power signal regimes that substantially affect error statistics. We demonstrate the feasibility of the technique and quantify the improvement for the OFDM 16-QAM modulation format. We also analyze the trade-off between the reduction of symbol error rates and the redundancy due to adaptive signal modulation.
Simulation Set-up and System Parameters
To examine the impact of nonlinear effects on error statistics we consider two types of systems: the single-carrier 16-QAM and the OFDM 16-QAM with a varying number of subcarriers. We study the transmission link shown in Fig. 1. Each span of the transmission system includes the standard single-mode fiber (SMF) of 100 km and the Erbium-doped fiber amplifier (EDFA) that exactly compensates the signal power attenuation in the fiber preceding the amplifier. The transmitter generates the 16-QAM signal with the baud rate Rs; the number of samples per one OFDM symbol is 16 (i.e., the sampling rate is 16Rs). For pulse shaping the raised-cosine filter is employed. The noise loading is performed after each amplifier such that it corresponds to the amplified spontaneous emission added by the EDFA. The chromatic dispersion of the link is compensated at the receiver before signal processing is used to recover the signal phase. Fiber links consisting of 10, 15, and 20 spans are examined. The propagation of a signal along the fiber span is modelled by the standard nonlinear Schrödinger equation (NLSE), where A(z, t) is the slowly-varying envelope of the signal. The NLSE is solved numerically using the well-known split-step Fourier method (SSFM). The following parameters are used in the numerical simulations: the over-sampling factor q = 16, the number of symbols NS = 2^18, fiber losses α = 0.2 dB/km, the fiber nonlinearity coefficient γ = 1.4 W^-1 km^-1, the chromatic dispersion β2 = −25 ps^2/km, the signal wavelength λ = 1.55 μm, and the amplifier noise figure NF = 4.5.
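To make the propagation model concrete, the sketch below shows a minimal split-step Fourier propagation of the envelope A(z, t) over one span, with loss, second-order dispersion and the Kerr nonlinearity. It assumes the conventional NLSE form ∂A/∂z = -(α/2)A - i(β2/2)∂²A/∂t² + iγ|A|²A, a simple (non-symmetrised) stepping scheme, and units of ps, km and W; the sign of the dispersion operator depends on the Fourier-transform convention, and EDFA gain and ASE loading are omitted, so this is only an illustration, not the simulation code used in the paper.

import numpy as np

def ssfm_span(A, dt_ps, length_km, dz_km, alpha_db_km=0.2, beta2_ps2_km=-25.0, gamma_w_km=1.4):
    """Propagate the complex envelope A(t) over one fibre span with a simple split-step scheme."""
    alpha = alpha_db_km * np.log(10) / 10.0                     # dB/km -> 1/km (power attenuation)
    omega = 2.0 * np.pi * np.fft.fftfreq(A.size, d=dt_ps)       # angular frequency grid, rad/ps
    lin = np.exp((1j * beta2_ps2_km / 2.0 * omega**2 - alpha / 2.0) * dz_km)
    for _ in range(int(round(length_km / dz_km))):
        A = np.fft.ifft(np.fft.fft(A) * lin)                    # dispersion + loss (frequency domain)
        A = A * np.exp(1j * gamma_w_km * np.abs(A)**2 * dz_km)  # Kerr nonlinearity (time domain)
    return A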
Using numerical modelling, the dependence of BER on input signal power (without any coding) has been computed for a varying number of spans, as depicted in Fig. 2. It can be seen that the optimal power is about 3 dBm.
Nonlinear Distortion of the 16-QAM Signal
Next we numerically examine the error statistics in highly nonlinear regimes. To yield statistically significant results, we have performed 100 runs (with different noise realisations) with 2^18 16-QAM OFDM symbols in each run. Thus, the total number of transmitted 16-QAM symbols is 100 · 2^18 · K = 2.62 · 10^7 · K, where K is the number of modulated subcarriers. The propagation distance was 1000 km (i.e. a 10-span link). The symbol time interval varied with the number of subcarriers following the relation Ts = K/BW with BW = 100 GHz, i.e. Ts = 1 ns for K = 100. We would like to stress that here we did not use the standard signal optimization over power to choose the best operational point with the lowest bit-error rate, as shown in Fig. 2. Instead, the initial power of the signal is chosen from Fig. 2 to give a bit-error rate level of BER = 10^-2 at high powers. This makes it possible to study error statistics in a somewhat artificially created "nonlinear" regime, where the signal power is higher than the optimal one of around 3 dBm. Also, the threshold of BER = 10^-2 is chosen as the level from which we aim, using the proposed approach, to reduce the BER down to the hard FEC threshold [20], where errors can be corrected using forward-error correction methods. Figure 3 illustrates the dependence between the symbol-error rate (SER) and the baud rate for the nonlinear regime in the case of a single-carrier (K = 1) 16-QAM transmission. All the errors are divided into three categories by the powers of constellation points and the corresponding three power rings. These rings are shown as dashed circles in Fig. 4. In these simulations all the constellation points have equal probabilities, which corresponds to a uniform input signal distribution. As can be seen in Fig. 3, as expected, the outer ring is the most error-prone, followed by the middle ring. This can be qualitatively understood through the observation that in the outer ring the high-power symbols are more affected by the nonlinear effects. When the baud rate grows, the difference in errors between rings becomes smaller due to pulse spreading and an effective averaging over the data stream.
Next, we simulate the propagation of an OFDM 16-QAM signal in the same fiber link depicted in Fig. 1. Again, here all the constellation points have equal probabilities. The only difference is that the OFDM modulator and demodulator are used as the transmitter and the receiver, respectively. For the OFDM modulator we use the following parameters: the maximum number of subcarriers is 1024, and the bandwidth is BW = 100 GHz. The actual number of modulated subcarriers K is varied in this case. Figure 5 shows how the SER for different rings depends on the number of modulated OFDM subcarriers. It looks similar to Fig. 3: for a relatively small number of subcarriers the distinction between the different rings is significant. When the number of subcarriers increases, the difference disappears.
We would like to stress again that the results of the extensive numerical modelling presented in Figs. 3 and 5 should not be understood as optimization modelling. At all points presented in these figures the power is chosen to satisfy the target condition BER = 0.01 in the nonlinear regime. This makes it possible to analyze the error statistics in such a highly nonlinear transmission regime.
One can see that for a single carrier the main difference between the error probabilities of the rings is observed at lower baud rates. In the case of the OFDM 16-QAM signal, the greater variations of the error statistics occur for a lower number of subcarriers. The observed asymmetry in the error probabilities of different rings calls for the application of constrained coding to improve the performance of systems operating near the FEC BER threshold.
Theoretical Analysis
The 16-QAM constellation points can be divided into three sets with different powers, i.e. three power rings. For the sake of clarity, we enumerate the constellation sets in the ascending order of amplitude of the points they consist of (i.e. the first set consists of the points that belong to the "inner" ring on the constellation diagram, the second set contains the points from the "middle" ring, and finally the third set includes the points from the "outer" ring). This can easily be seen in Fig. 4: s1 = 4, s2 = 8, s3 = 4, where si is the number of constellation points in the i-th set.
It should be noted that the constellation points of a 16-QAM modulation format can be placed on the phase plane in different ways (see e.g. discussions in [27,28]). For instance, the constellation points of 16-QAM formats that are widely employed in practice form either a square or a circle. Below we consider only the "square" 16-QAM modulation format. However, the theoretical approach proposed here can be applied to any modulation format irrespective of the way the constellation points are arranged on the phase plane.
To estimate the impact of the nonlinear effects on a QAM-modulated optical signal we assume that the error rate of a symbol depends only on its power. Below, the error rate of a symbol from the i-th set is denoted by qi, and the probability that a symbol from the i-th set appears in a data stream is denoted by Pi.
The symbol error rate (SER) in a data stream can be found as follows:
SER = P1 q1 + P2 q2 + P3 q3. (1)
Since P3 = 1 − P1 − P2, the SER value effectively depends only on two unknown probabilities. In particular, if q1 = q2 = q3 = q, formula (1) becomes trivial and SER = q. This is the case when the error rate does not depend on the signal power, as is observed, for example, in linear or effectively linear channels. As we have shown above, in a nonlinear optical communication channel the error probabilities for the various symbol sets differ from each other as a result of the impact of the nonlinear Kerr effect.
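Equation (1) is straightforward to evaluate numerically; the short Python sketch below assumes the reconstructed form SER = P1 q1 + P2 q2 + P3 q3 and uses illustrative per-ring error rates.

def ser(P, q):
    """Symbol-error rate for ring probabilities P = (P1, P2, P3) and per-ring error rates q = (q1, q2, q3)."""
    return sum(Pi * qi for Pi, qi in zip(P, q))

# Uniform 16-QAM: the rings carry 4, 8 and 4 points, so P = (1/4, 1/2, 1/4)
print(ser((0.25, 0.5, 0.25), (0.030, 0.037, 0.035)))  # ~0.0348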
Our goal is to reduce the number of errors by varying the probabilities with which the symbols from the i-th set appear in a data stream. This process can generally be referred to as adaptive modulation. It is also sometimes called the hybrid QAM modulation format [25,26]. In this work we prefer the term "adaptive modulation", because the proposed approach can be used with any modulation format, not only with QAM. Our approach, changing the input signal distribution, is a version of the probabilistic shaping technique (see the recent publication [17] and references therein). The advantage of the proposed adaptive modulation is the use of the error statistics to modify, in a flexible and adaptive way, the probabilities of occurrence of symbols. The use of detailed error statistics makes it possible to take into account subtle differences in the transmission of various constellation symbols, which, in turn, allows the channel impairments to be mitigated with relatively small redundancy.
The idea of adaptive modulation is that the symbols at various positions in a data stream are modulated by different "virtual" modulation rules derived from the original modulation format simply by excluding the constellation points that are more prone to nonlinearity-induced errors. Here the symbol position means its position in time (in the data stream). For example, odd symbols can be modulated using only the four symbols from the "inner" 16-QAM ring, and the even symbols can be modulated using the 16-QAM format itself, without any restriction. Of course, this example is mentioned for illustrative purposes only, and the adaptive modulation scheme that enables the system performance to be improved is in general more complex.
It should be noted that when we vary the probabilities with which the symbols from the i-th class appear in a data stream, we in general reduce the information entropy of the data stream and, in turn, increase the redundancy of the transmitted message. This results in a reduction of the actual channel rate. The entropy of a data stream per one symbol can be found as follows:
H = − Σj pj log pj, (2)
where log x = log16 x, and pj is the probability of the 16-QAM symbol j (j = 1, 2, ..., 16). Since it is assumed in our consideration that the error rate of a 16-QAM symbol depends only on its power, pj = pk if symbols j and k belong to the same set i; thus Pi = si pj. Consequently, for the 16-QAM format the information entropy H(P1, P2) can be expressed using the following equation:
H(P1, P2) = − Σi Pi log (Pi/si), with P3 = 1 − P1 − P2. (3)
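A small numerical check of the entropy expression (with the logarithm taken to base 16, so that the uniform 16-QAM distribution gives H = 1) can be written as follows; the ring sizes s = (4, 8, 4) follow the text, while the function itself is only a sketch of the reconstructed Eq. (3).

import numpy as np

def entropy_per_symbol(P, s=(4.0, 8.0, 4.0)):
    """Entropy per symbol (base-16 logarithm) of a stream whose rings occur with probabilities P_i."""
    P = np.asarray(P, dtype=float)
    s = np.asarray(s, dtype=float)
    p = P / s                                    # probability of one individual constellation point
    return float(-np.sum(P * np.log(p) / np.log(16.0)))

print(entropy_per_symbol((0.25, 0.5, 0.25)))     # 1.0 for the uniform 16-QAM distribution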
Reduction of SER through adaptive modulation of the 16-QAM channel
As was explained above, the nonlinear effects might result in a dependence of the symbol error rate on the symbol power. This makes it possible to reduce the symbol error rate by means of constrained encoding that reduces the number of error-prone symbols. Encoding is understood here in a broad sense, as a method to process and alter the data to be transmitted, regardless of the implementation details.
From Eq. (1) it can be derived that the initial symbol error rate (i.e. the symbol error rate before any encoding is applied, when all 16 constellation points are equiprobable) is
SER0 = (s1 q1 + s2 q2 + s3 q3)/16 = (q1 + 2 q2 + q3)/4. (4)
In general, the encoded signal has a different symbol error rate SERC. Our goal is to find the input signal distribution probability vector P = (P1, P2, P3) that minimizes the symbol-error rate for a given set q = (q1, q2, q3) (i.e. for a given error distribution across the 16-QAM constellation rings) and for a given entropy 0 ≤ H0 ≤ 1, i.e. for a given code rate C0 = 1 − H0. This makes it possible to evaluate the trade-off between system performance improvement and data redundancy. The problem can be solved using the Lagrange multipliers method. Let us consider the Lagrange function
L(P1, P2, λ) = SER(P1, P2) + λ · (H(P1, P2) − H0). (5)
We assume that the qi in the function SER(P1, P2) are not equal to each other. This assumption does not imply a loss of generality, but it makes the analysis easier. The stationary points of function (5) can be found by solving the system of equations
∂L/∂P1 = 0, ∂L/∂P2 = 0, H(P1, P2) = H0. (6)
The partial derivatives ∂H(P1, P2)/∂P1 and ∂H(P1, P2)/∂P2 can be computed explicitly from Eq. (3). From the first two equations of system (6) one can find the relationship (7) between P1 and P2, in which α = (q3 − q1)/(q3 − q2). Since q1 ≠ q2 ≠ q3 (as was assumed above), α does not diverge. If α ≤ 0, it can be shown that equation (7) has a single root P2(P1) for any fixed P1; on the other hand, if α > 0, there exists only one root P1(P2) for any fixed P2. Given this, the dependence between P1 and P2 can be quickly estimated numerically without the need for an exhaustive search.
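Instead of following the analytical Lagrange-multiplier route, the constrained minimisation can also be checked numerically; the sketch below uses SciPy's SLSQP solver to minimise the SER of Eq. (1) under the entropy constraint H(P1, P2) = H0, with illustrative error rates and an arbitrarily chosen target entropy. It is a generic numerical alternative, not the derivation used in the paper.

import numpy as np
from scipy.optimize import minimize

q = np.array([0.030, 0.037, 0.035])            # illustrative per-ring error rates
s = np.array([4.0, 8.0, 4.0])
H0 = 0.95                                      # target entropy per symbol (base-16 log)

def entropy(P):
    P = np.clip(P, 1e-12, None)
    return float(-np.sum(P * np.log(P / s) / np.log(16.0)))

def objective(x):                              # x = (P1, P2); P3 = 1 - P1 - P2
    P = np.array([x[0], x[1], 1.0 - x[0] - x[1]])
    return float(np.dot(P, q))

constraints = (
    {"type": "eq", "fun": lambda x: entropy(np.array([x[0], x[1], 1.0 - x[0] - x[1]])) - H0},
    {"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]},
)
result = minimize(objective, x0=[0.25, 0.5], bounds=[(1e-6, 1.0), (1e-6, 1.0)],
                  constraints=constraints, method="SLSQP")
print(result.x, objective(result.x))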
Any solution of equation (7) yields the stationary point of function (5), if H (P 1 , P 2 ) = H 0 is met. Thus, the SER minimum value can be found by substituting the stationary points into equation (1). However, it can be derived that the stationary points of function (5) are always of the same type, i.e. they are all either the minimum or the maximum points. This is because the sign of a second differential strongly depends on the sign of λ that can generally take both the negative and positive values depending on the values q 1 , q 2 , and q 3 .
It is noteworthy that the same approach can be used for other modulation formats with a constellation diagram consisting of many distinct power levels. In such a case, the Lagrange function would take the form L(P1, P2, ..., PN, λ) = SER(P1, P2, ..., PN) + λ · (H(P1, P2, ..., PN) − H0), with the entropy H(P1, P2, ..., PN) = − Σi Pi log (ci · Pi); i.e. in the mathematical sense these formulae are almost identical to equations (5) and (3), respectively.
Adaptive Modulation Scheme
To illustrate how the theoretical optimization can be practically utilised, we use the simple adaptive modulation scheme in which different time slots are modulated using different modulation formats that include both the 16-QAM format itself, and the "restricted" modulation formats that are obtained from the 16-QAM format as shown in Fig. 6. In this scheme, the number of symbols that use a specific modulation pattern may vary according to the desired distribution of the 16-QAM symbols in a resulting data stream.
Unlike the probabilistic shaping method [11,12], the proposed technique improves the data transmission by varying not only the average power of a signal, but also the average distance between different constellation points. The latter also affects the symbol error rate, because a smaller average distance between adjacent points on a constellation diagram (i.e. a smaller average power of the signal) results in an increased number of errors due to the "linear" noise. On the contrary, a large distance between constellation points means that the main cause of errors is the prevalence of nonlinear effects. The adjustable SER reduction can be achieved by using a block-based approach to produce adaptively modulated data. To accomplish this, the output data stream is divided into separate data blocks of length N symbols, where the i-th symbol in a data block, i = 1, ..., N, is modulated by a deliberately selected modulation pattern with number mi. These patterns are selected out of the patterns shown in Fig. 6. Obviously, mi ∈ {1, 2, 3, 4}.
Let us denote by C the desirable code rate of our adaptive modulation scheme. It can be selected in such a way as to obtain the desired SER; that is, the code rate is treated as one of the input parameters of the adaptive modulation scheme [10,11]. Another input parameter is the optimal probability vector P. Denote by ni the number of symbols in a data block that use the i-th modulation pattern from Fig. 6. Obviously, n1 + n2 + n3 + n4 = N and 0 ≤ ni ≤ N. The code rate of a block is determined by the counts ni and by ci, the number of bits that the i-th modulation pattern conveys (Eq. (8)); since the number of constellation points used in one data block is proportional to N, the capacity of such a scheme is likewise determined by the block composition. The values ni for a given P can be obtained by solving the linear system (9). The system of equations (9) can be solved if the right-hand sides of its equations are positive. However, for a particular set of probabilities P it cannot be expected that this requirement is met for any 0 ≤ C ≤ 1. In fact, this means that there are probability vectors P for which it is impossible to build a code of the desirable code rate. In this case it is necessary either to obtain values ni that give a probability distribution close to P, or to vary the code rate in order to make system (9) consistent.
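The mapping from block composition to code rate and ring statistics can be illustrated with a small sketch. The pattern definitions below are hypothetical (they do not reproduce the patterns of Fig. 6 or equations (8)-(9)); they only show how the counts ni translate into an average rate and into the ring probabilities P.

import numpy as np

# Hypothetical "virtual" formats: bits per symbol (c_i) and the ring usage of each pattern
patterns = {
    1: {"bits": 4, "ring_probs": np.array([0.25, 0.50, 0.25])},   # full 16-QAM
    2: {"bits": 3, "ring_probs": np.array([0.50, 0.50, 0.00])},   # inner + middle rings only
    3: {"bits": 2, "ring_probs": np.array([1.00, 0.00, 0.00])},   # inner ring only
    4: {"bits": 3, "ring_probs": np.array([0.25, 0.75, 0.00])},   # another restricted pattern
}

def block_stats(counts):
    """counts[i] = n_i, the number of symbols in a block modulated with pattern i."""
    N = sum(counts.values())
    bits = sum(n * patterns[i]["bits"] for i, n in counts.items())
    rings = sum(n * patterns[i]["ring_probs"] for i, n in counts.items()) / N
    return bits / (4.0 * N), rings             # rate relative to plain 16-QAM, average ring probabilities

rate, P = block_stats({1: 60, 2: 20, 3: 10, 4: 10})
print(rate, P)                                 # 0.875 and [0.375, 0.475, 0.15] for this composition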
Note that although the presented theoretical results are rigorous and give an exact trade-off between redundancy and performance improvement in systems with symbol-dependent errors, this simple theory cannot be applied directly to optical fiber systems, because the error statistics are themselves affected by the change in the probabilities of the different input signal powers (rings). This can be taken into account, but consideration of this effect is beyond the scope of the current paper.
Here, instead, we use analytical results only as a qualitative guidance in the direct numerical optimization of system performance.
Numerical Modeling Results and Discussion
In this section we apply the adaptive modulation in order to reduce the nonlinear transmission impairments in the OFDM system employing the 16-QAM modulation format. Denote by κ (0 ≤ κ ≤ 1) the target SER reduction rate achieved by the encoding. This coefficient can be treated as a measure of the encoding performance. Evidently, there is a trade-off between the reduction in SER and the redundancy in the data stream required to implement such coding. The performance of nonlinear transmission systems is defined by the interplay between the effects of noise and nonlinearity on the signal, and the optimal signal power corresponds to the minimal BER.
To estimate the efficiency of the adaptive modulation in a practical implementation, we consider the signal propagation after 2000 km. We have selected the transmission distance in such a way as to reach a bit-error rate close to the forward-error correction limit. Currently this value lies in the range between 5·10^-3 and 10^-2 [20], depending on the error correction code. From Fig. 2 we have found the transmission distance where the minimum BER is about 10^-2. The signal power is set to the optimal one, i.e. Pin = 3 dBm. After transmitting the signal, we have obtained q1 = 0.030, q2 = 0.037, q3 = 0.035. At first glance, there is no significant difference between the error rates of the various QAM modulation "rings". However, when applying the adaptive modulation, it turns out that even a small skew in the error rates yields a significant symbol-error rate improvement. Figure 7(a) shows that the number of errors can be reduced by half at the cost of 12% redundancy. These results are averaged over 100 numerical runs with different noise realisations. In Fig. 7(b) the dependence of the mutual information on the adaptive modulation redundancy is shown. As expected, the mutual information gradually decreases as the redundancy grows. However, for small values of redundancy (less than 5%) the mutual information falls slowly compared to the reduction of the actual code rate. Consequently, low-redundancy adaptive modulation can be the optimal choice for systems where the bit-error rate of the QAM-modulated signal is near the FEC code limit. Figure 8 shows the possible BER improvement as a function of the signal power. It can be seen that even if the redundancy is relatively small, the bit-error rate can be reduced significantly. It should also be noted that the optimal power gradually increases as the redundancy grows. Thus, the adaptive modulation makes it possible to effectively use large signal powers. As can be seen from Fig. 8 (red curve), a BER improvement from BER = 10^-2 to BER = 10^-2.5 can be achieved using the 12%-redundant adaptive modulation. This makes it possible to apply FEC encoding with an overhead from 5 to 12% to the adaptively modulated data. The main difference between BER = 10^-2 and BER = 10^-2.5 ≈ 3·10^-3 is that for small BERs (below 5·10^-3), any modern FEC code is able to reduce the BER to 10^-9 and less. For larger BERs (especially for BER > 10^-2), the error-correction capability falls drastically, and more sophisticated coding should be applied. Adaptive modulation allows the use of more practical and well-established codes.
The system improvement is also shown in Fig. 9 as a Q-factor improvement. The Q-factor is calculated from the BER using the standard formula: Q = 20 · log₁₀(√2 · erfcinv(2 · BER)).
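As a brief illustration of this conversion (a sketch, not code from the paper; it only assumes scipy's erfcinv for the inverse complementary error function):

```python
import numpy as np
from scipy.special import erfcinv

def q_factor_db(ber: float) -> float:
    """Q-factor in dB for a given bit-error rate, Q = 20*log10(sqrt(2)*erfcinv(2*BER))."""
    return 20.0 * np.log10(np.sqrt(2.0) * erfcinv(2.0 * ber))

# Example: the BER improvement discussed above, from 1e-2 to 10**-2.5
for ber in (1e-2, 10**-2.5):
    print(f"BER = {ber:.2e} -> Q = {q_factor_db(ber):.2f} dB")
```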
From Fig. 9 we see that a Q-factor improvement of 1 dB can be achieved for any transmission distance between 1000 and 2000 km. It should also be noted that the use of the adaptive modulator allows the propagation distance to be increased by up to 500 km, compared to the signal without coding, for a Q-factor close to the forward-error correction limit.
Fig. 9. The Q-factor for various transmission distances.
Conclusion
Rate-adaptive coded modulation [10,11], probabilistic signal shaping [10-14] and skewed signal coding [18,19] are techniques used to mitigate nonlinear effects and improve system performance, either by modifying the size of the alphabet (and the probabilities) of the transmitted constellation points or by applying a non-uniform distribution for the occurrence probability of the symbols of a given constellation. In optical communication these approaches are used to remove the most error-prone patterns or symbols, which typically occur due to the power dependence of the error probabilities. Probabilistic shaping of the input signal can be implemented using modified FEC codes [12-14] or by reshaping the constellations [11]. We first examined here the impact of the fiber Kerr effect on the error statistics in a highly nonlinear transmission of the OFDM 16-QAM signal over a 1000 km EDFA-based link. Based on these observations, we presented a theoretical framework for the probabilistic coding of three constellation power rings to minimize the symbol-error rate of such a signal. We proposed an adaptive modulation technique to produce an OFDM 16-QAM signal that is more tolerant to nonlinear impairments than the initial signal. We demonstrated that a significant performance improvement can be achieved for a large number of OFDM subcarriers (more than 100) using the proposed adaptive modulation scheme. Similar techniques can be applied to single-carrier transmission and various modulation formats. The proposed theoretical optimization approach can be applied to polarization-multiplexed data formats and various correlated data streams.
Funding
The work was supported by the EPSRC project UNLOC, the work of A.S and M.P. | 2018-04-03T02:18:52.966Z | 2016-12-26T00:00:00.000 | {
"year": 2016,
"sha1": "a63fb6d1a452747cf0183d788793d2ee99380933",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.24.030296",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a63fb6d1a452747cf0183d788793d2ee99380933",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
269272730 | pes2o/s2orc | v3-fos-license | A CT texture-based nomogram for predicting futile reperfusion in patients with intraparenchymal hyperdensity after endovascular thrombectomy for acute anterior circulation large vessel occlusion
Background Post-thrombectomy intraparenchymal hyperdensity (PTIH) in patients with acute anterior circulation large vessel occlusion is a common CT sign associated with a higher incidence of futile reperfusion (FR). We aimed to develop a nomogram to predict FR specifically in patients with PTIH. Methods We retrospectively collected information on patients with acute ischemic stroke who underwent endovascular thrombectomy (EVT) at two stroke centers. A total of 398 patients with PTIH were included to develop and validate the nomogram, including 214 patients in the development cohort, 92 patients in the internal validation cohort and 92 patients in the external validation cohort. The nomogram was developed according to the independent predictors obtained from multivariate logistic regression analysis, including clinical factors and CT texture features extracted from hyperdense areas on CT images within half an hour after EVT. The performance of the nomogram was evaluated with integrated discrimination improvement (IDI), category-free net reclassification improvement (NRI), the area under the receiver operating characteristic curve (AUC-ROC), calibration plots, and decision curve analyses for discrimination, calibration ability, and clinical net benefits, respectively. Results Our nomogram was constructed based on three clinical factors (age, NIHSS score and ASPECT score) and two CT texture features (entropy and kurtosis), with AUC-ROC of 0.900, 0.897, and 0.870 in the development, internal validation, and external validation cohorts, respectively. NRI and IDI further validated the superior predictive ability of the nomogram compared to the clinical model. The calibration plot revealed good consistency between the predicted and the actual outcome. The decision curve indicated good positive net benefit and clinical validity of the nomogram. Conclusion The nomogram enables clinicians to accurately predict FR specifically in patients with PTIH within half an hour after EVT and helps to formulate more appropriate treatment plans in the early post-EVT period.
Introduction
Acute ischemic stroke (AIS) is the second leading cause of death and disability worldwide (1). Endovascular thrombectomy (EVT), which has been shown to be the primary therapy for AIS due to large vessel occlusion, significantly improves functional outcome and reduces mortality (2). Nonetheless, a considerable number of patients (41-55%) treated with EVT failed to achieve a favorable outcome at 3 months, despite successful reperfusion, which is known as futile reperfusion (FR) (3).
Post-thrombectomy intraparenchymal hyperdensity (PTIH) is a common CT sign, with an incidence of 31-84% in previous studies (4), and is associated with increased mortality and worse clinical outcome (4,5). Therefore, we should pay more attention to patients with PTIH after EVT. The presence of PTIH indicates successful reperfusion but is inevitably accompanied by reperfusion injury. Notably, the efficacy of reperfusion after EVT in patients with PTIH varies widely with different degrees of reperfusion injury, making it difficult for clinicians to accurately predict the efficacy of reperfusion after EVT in these patients at an early stage. Therefore, there is an urgent need to accurately stratify the risk of FR in patients with PTIH and to offer further intervention after EVT to high-risk patients to optimize their prognosis. However, prognostic models specific to patients with PTIH are lacking.
Radiomics is the quantitative analysis of medical images that can be used to accurately and non-invasively diagnose disease and predict prognosis (6). CT texture analysis (CTTA), a part of radiomics, is based on texture features of CT images and extracts the wealth of information hidden inside CT images (7). There is emerging evidence that radiomics may be useful in predicting functional outcome at 3 months in patients with AIS. Radiomic features extracted from infarct lesions on diffusion-weighted imaging (DWI) showed good performance in predicting functional outcome in patients with AIS (8). CTTA with early ischemic CT signs as the region of interest (ROI) was recently shown to be effective in predicting functional outcome in AIS patients undergoing EVT (9). The relationship between CTTA based on hyperdense areas on post-thrombectomy NCCT images and FR after EVT remains unknown. Therefore, the aim of this study was to determine the role of CTTA in predicting FR after EVT in patients with PTIH and to construct and validate a model to predict FR specifically in patients with PTIH.
Subjects
The retrospective study was approved by the Ethics Committee of Huaian NO.1 People's Hospital, and the requirement for patient informed consent was waived (approval number KY-2023-046-01). A total of 651 patients with AIS underwent EVT at Huaian NO.1 People's Hospital from October 2017 to February 2023. Exclusion criteria included, among others: (5) patients whose initial CT images were obtained more than 0.5 h after EVT; (6) patients undergoing surgery after EVT before identification; (7) patients with severe CT artifacts; (8) patients with mTICI < 2b; (9) patients lacking an mRS score at 3 months after EVT. Patients with PTIH were finally randomized in a 7:3 ratio to the development and internal validation cohorts. 92 patients with PTIH from another stroke center (Xuzhou Medical University Affiliated Hospital of Huai'an) were assigned to the external validation cohort according to the same inclusion and exclusion criteria. The distribution diagram of the enrolled subjects in the training cohort, internal validation cohort and external validation cohort is shown in Figure 1.
Endovascular thrombectomy
All enrolled patients were selected according to stroke guidelines and underwent EVT under local anesthesia, performed by two neurointerventionists with 10 years of experience. EVT was performed using stent retrievers [Solitaire AB (Covidien/ev3, Irvine, United States) and Solitaire FR (Covidien/ev3, Irvine, United States)] or React suction devices (Covidien/ev3, Irvine, United States). The physician performing the neurointervention regularly reported the number of stent retriever passes. If targeted arterial recanalisation failed, rescue therapies such as stent implantation, balloon angioplasty, intracatheter tirofiban administration or intra-arterial thrombolysis were used.
Definition of PTIH and futile reperfusion
PTIH was defined as a new hyperdense area, compared to the surrounding brain tissue, on NCCT images within 0.5 h after EVT. Good reperfusion was defined as patients who achieved successful reperfusion after EVT (mTICI grade 2b-3) with a favorable functional outcome at 3 months (mRS score ≤ 2). Futile reperfusion was defined as patients who achieved successful reperfusion after EVT but had an unfavorable outcome (mRS score ≥ 3).
Clinical data collection
The following baseline clinical data was collected, including demographic data (age and gender), vascular risk factors (smoking, drinking, atrial fibrillation, diabetes mellitus, hypertension, hyperlipidemia, coronary artery disease and previous stroke), systolic and diastolic blood pressure (SBP and DBP), baseline National Institutes of Health Stroke Scale (NIHSS) score, Alberta Stroke Program Early Computed Tomography (ASPECT) score, etiology of stroke (large-artery atherosclerosis, cardioembolism, stroke of other determined etiology and stroke of undetermined etiology), occlusion site (internal carotid artery and middle cerebral artery), endovascular therapy information (onset to door time, door to reperfusion time, onset to reperfusion time, cases treated with tirofiban, cases of thrombolysis, number of thrombectomy).
CT image acquisition and texture analysis
All CT images were acquired on the UCT 710 scanner. The scanning parameters were as follows: slice thickness, 5.0 mm; matrix size, 512 × 512; field of view, 21.1 × 21.1 cm; tube current, 199 mA; tube voltage, 120 kV. Finally, the scanned CT images were exported in Digital Imaging and Communications in Medicine (DICOM) format.
The ROI was manually segmented along the contour of the hyperdense area on the NCCT image by two experienced radiologists using 3D-Slicer software (version 5.0.3). A total of 18 first-order histogram features were extracted from the ROI. Radiologists with 7 and 12 years of experience in diagnostic neuroradiology randomly selected 30 patients' CT images and segmented them. The data were segmented twice, with an interval of 1 week between segmentations, by the radiologist with 7 years of experience, and were then segmented again by the radiologist with 12 years of experience. The intraobserver and interobserver consistency of the ROI segmentation was tested using the intraclass correlation coefficient (ICC) (ICC > 0.8 indicates good agreement).
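As a hedged illustration of the kind of first-order histogram features used here (the actual extraction was performed with 3D Slicer; the bin count and the use of scipy's excess kurtosis are assumptions of this sketch, not the study's exact definitions):

```python
import numpy as np
from scipy.stats import kurtosis

def first_order_features(roi_values: np.ndarray, n_bins: int = 64) -> dict:
    """Compute a few first-order features from the voxel values inside an ROI."""
    hist, _ = np.histogram(roi_values, bins=n_bins)
    p = hist / hist.sum()                 # probability of each gray-level bin
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))     # Shannon entropy of the histogram
    return {
        "mean": float(np.mean(roi_values)),
        "entropy": float(entropy),
        "kurtosis": float(kurtosis(roi_values)),   # excess kurtosis by default
    }
```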
Statistical analysis
SPSS 24.0 software was used for statistical analysis. Continuous variables with a normal distribution were expressed as mean ± S.E.M., and the two-tailed t-test was used to compare the groups. Variables with a skewed distribution were described by the median (interquartile range), and the Mann-Whitney U-test was used to compare differences between groups. For categorical variables, presented as frequency and percentage, the chi-square test or Fisher's exact test was used to analyze differences between groups. Univariate and multivariate logistic regression analyses were used to investigate the independent predictors of FR. No collinearity was assumed between variables included in the multivariate logistic regression if the tolerance (range 0-1) was greater than 0.1 and the variance inflation factor was less than 10. Multivariate logistic regression analysis estimated the odds ratio (OR) and 95% confidence interval (CI) for each variable. A p value < 0.05 was considered statistically significant.
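As a hedged, non-authoritative sketch (the analyses above were run in SPSS, not Python), the same group comparisons can be expressed with scipy; the arrays and counts below are illustrative placeholders, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
futile = rng.normal(75, 10, size=60)    # illustrative continuous variable (e.g. age)
good = rng.normal(68, 10, size=40)

p_ttest = stats.ttest_ind(futile, good).pvalue    # normally distributed variables
p_mwu = stats.mannwhitneyu(futile, good).pvalue   # skewed variables

# Categorical variable: chi-square test on a 2x2 contingency table of counts
table = np.array([[30, 12],
                  [25, 40]])                       # illustrative counts
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
```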
The nomogram was developed and validated using R software (version 4.2.3). Category-free net reclassification improvement (NRI) and integrated discrimination improvement (IDI) were used to compare the predictive power of the new and old models. The area under the receiver operating characteristic curve (AUC-ROC) was calculated to evaluate the discriminative ability of the nomogram. We calculated the positive predictive value (PPV), negative predictive value (NPV), and accuracy (ACC) of the model according to standard definitions. Calibration of the nomogram was assessed using the Hosmer-Lemeshow test and a calibration plot with 1,000 bootstrap resamples. Decision curve analysis (DCA) was used to evaluate the clinical net benefits of the nomogram.
Baseline characteristics
A total of 651 patients with AIS underwent EVT from August 2017 to January 2023, and 181 patients were excluded according to the exclusion criteria (Figure 1). We analyzed the incidence of FR in 470 patients, including 306 patients with PTIH and 164 patients without PTIH. The results showed that the incidence of FR was significantly higher (p < 0.001) in patients with PTIH (68.3%) than in patients without PTIH (39.3%). In addition, mortality was significantly higher (p < 0.001) in patients with PTIH (23.9%) than in patients without PTIH (7.9%). Of these deaths, 93% were stroke-related. The distribution of the 3-month mRS scores for patients with and without PTIH is shown in Figure 2.
Subsequently, 306 patients with PTIH were randomized in a 7:3 ratio to the development and internal validation cohorts, with 214 and 92 patients, respectively. A further 92 patients with PTIH from another stroke center were assigned to the external validation cohort. The baseline characteristics of the development and internal validation cohorts are shown in Table 1. The mean age of all patients was 69.26 ± 11.15 years and 164 (53.6%) patients were male. The incidence of FR was 68.3% (209 of 306) in all enrolled patients, 68.7% (147 of 214) in the development cohort, and 67.4% (62 of 92) in the internal validation cohort.
Predictors for FR in patients with PTIH
We first screened for clinical predictors of FR in patients with PTIH. There were significant differences between the two groups in age, NIHSS score, ASPECT score, door to reperfusion time (DRT) and onset to reperfusion time (ORT), as shown in Table 2. These five variables were further analyzed with univariate analysis, and the results revealed that age, NIHSS score, ASPECT score and ORT were associated with FR (Table 3). Similarly, we then analyzed the CT texture features extracted from hyperdense areas on NCCT images within half an hour after EVT. A total of 14 of the 18 first-order histogram features were significantly different between the two groups (Table 4), and these were further analyzed using univariate analysis; 12 features were found to be associated with FR (Table 3). Finally, age (OR: 1.045, 95% CI: 1.006-1.086), NIHSS score (OR: 1.194, 95% CI: 1.103-1.292), ASPECT score (OR: 0.459, 95% CI: 0.322-0.654), entropy (OR: 2.321, 95% CI: 1.104-4.879) and kurtosis (OR: 1.492, 95% CI: 1.100-2.024) were independently related to FR in the multivariate logistic regression analysis, with ASPECT score being a protective factor, whereas NIHSS score, entropy and kurtosis were risk factors (Table 3).
Development of the nomogram to predict FR in patients with PTIH
The five independent predictors of FR identified by the multivariate logistic regression (age, NIHSS score, ASPECT score, entropy and kurtosis) were used to construct the nomogram for predicting FR in patients with PTIH (Figure 3). The nomogram is a graphical statistical tool that estimates the probability of a clinical outcome for an individual using a continuous score. It consists of a series of scoring line segments representing the contribution of each predictor to FR, with each predictor being assigned a specific value based on the scale on its line segment. The total score is obtained by summing the scores of each predictor. Finally, a vertical line is drawn from the total score line to the risk line; the risk corresponding to the total score represents the estimated probability of FR. For example, consider a patient with PTIH aged 83 with a baseline NIHSS score of 22, an ASPECT score of 10, entropy of 1.90 and kurtosis of 2.41. The points corresponding vertically to each variable are 30 for age, 50 for NIHSS score, 0 for ASPECT score, 20 for entropy and 8 for kurtosis. The total for this patient is 108 points, corresponding to 0.52 on the risk line, indicating a 52% probability of FR for this patient.
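To make the scoring concrete, here is a hedged sketch of the logistic model that underlies such a nomogram. The coefficients are the natural logs of the odds ratios reported in Table 3, while the intercept is a hypothetical placeholder (the true intercept is not reported in the text), chosen only so that the worked example above lands near the quoted 52%.

```python
import numpy as np

COEF = {                          # ln(OR) per one-unit increase (from Table 3)
    "age":      np.log(1.045),
    "nihss":    np.log(1.194),
    "aspect":   np.log(0.459),
    "entropy":  np.log(2.321),
    "kurtosis": np.log(1.492),
}
INTERCEPT = -2.3                  # hypothetical placeholder, not from the paper

def predicted_fr_probability(age, nihss, aspect, entropy, kurtosis):
    z = (INTERCEPT + COEF["age"] * age + COEF["nihss"] * nihss +
         COEF["aspect"] * aspect + COEF["entropy"] * entropy +
         COEF["kurtosis"] * kurtosis)
    return 1.0 / (1.0 + np.exp(-z))       # logistic link

# Worked example from the text: age 83, NIHSS 22, ASPECT 10, entropy 1.90, kurtosis 2.41
p = predicted_fr_probability(83, 22, 10, 1.90, 2.41)   # roughly 0.5 with this intercept
```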
Validation of the nomogram
To compare the predictive power of the new nomogram (combining CT texture features and clinical factors) and the old clinical model (including clinical factors only), we analyzed the category-free NRI and the IDI. The category-free NRI was 0.204 (95% CI: 0.073-0.334, p = 0.002) in the development cohort, 0.316 (95% CI: 0.090-0.542, p = 0.006) in the internal validation cohort and 0.458 (95% CI: 0.247-0.670, p < 0.001) in the external validation cohort. The IDI was 0.063 (p < 0.001) in the development cohort, 0.105 (p = 0.003) in the internal validation cohort and 0.170 (p < 0.001) in the external validation cohort. These results demonstrated the superior predictive power of the new nomogram combining CT texture features and clinical factors over the old clinical model.
The discriminating performance of the nomogram was assessed by the area under the receiver operating characteristic curve (AUC-ROC) in the development and validation cohorts. The AUC-ROC was 0.900 (95% CI: 0.853-0.946) in the development cohort, 0.897 (95% CI: 0.835-0.959) in the internal validation cohort and 0.870 (95% CI: 0.793-0.946) in the external validation cohort, indicating good discriminative ability (Figures 4A-C). In addition, for the training, internal validation, and external validation cohorts, at the best threshold, the PPV was 91.8, 87.9, and 88.9%, respectively; the NPV was 70.0, 76.4, and 73.7%, respectively; and the ACC was 83.7, 83.5, and 82.7%, respectively. The Hosmer-Lemeshow test indicated high goodness of fit between the predicted and observed probabilities for the development cohort (χ² = 12.121, df = 8, p = 0.146). The decision curve analysis indicated good positive net benefit and clinical validity of the nomogram (Figures 6A-C), as the range of threshold probabilities was wide and practical.
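As a hedged sketch of how these discrimination metrics are computed (the original analysis was done in R; the labels, probabilities and the 0.5 cut-off below are illustrative, not the study's data or its "best threshold"):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # 1 = futile reperfusion (illustrative)
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.8, 0.3, 0.1, 0.55, 0.85])

auc = roc_auc_score(y_true, y_prob)                  # discrimination (AUC-ROC)
y_pred = (y_prob >= 0.5).astype(int)                 # illustrative threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
ppv = tp / (tp + fp)                                 # positive predictive value
npv = tn / (tn + fn)                                 # negative predictive value
acc = (tp + tn) / (tp + tn + fp + fn)                # accuracy
```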
Discussion
In this study, we constructed and validated a nomogram combining clinical factors and CT texture features extracted from hyperdense areas on CT images within half an hour after EVT to predict FR in patients with PTIH. Notably, the nomogram, based on five predictors (age, NIHSS score, ASPECT score, entropy and kurtosis), allows clinicians to stratify the risk of FR in patients with PTIH. Currently, FR is becoming a major challenge in the treatment of patients with AIS by mechanical thrombectomy (3). However, the pathophysiology of FR remains unclear. The underlying mechanisms of FR include the "no-reflow" phenomenon, initial tissue damage, reperfusion injury, cerebral edema, poor collateral flow and inflammation (3,10). Notably, reperfusion injury is an important mechanism of FR, and many studies have identified it as an independent risk factor for poor prognosis 3 months after EVT (11,12). Previous studies have shown that the occurrence of PTIH is strongly associated with reperfusion injury after EVT (12,13), which may be one of the reasons for the higher rate of FR in patients with PTIH (68.3% in our study) than in those without PTIH (39.3% in our study). Therefore, it is of great clinical value to stratify the risk of FR specifically in patients with PTIH by developing predictive nomogram models.
PTIH, representing intracerebral hemorrhage or contrast extravasation, is an important radiographic feature of cerebral reperfusion injury after EVT (12,13). Although intracerebral hemorrhage is a known and important risk factor for FR (3,10), determining whether the hyperdense area on CT images is hemorrhage or contrast leakage relies on follow-up CT 24 h after EVT, which may delay treatment adjustment. It has been reported that quantitative analysis of hematoma texture features on cranial CT may contribute to improved prediction of clinical outcome in symptomatic intracerebral hemorrhage (14). In our study, we demonstrated that CT texture features extracted from hyperdense areas on NCCT images in patients with PTIH were independent predictors of FR. CT texture features can be divided into first-order and second-order features. First-order features are based on histogram analysis and represent the distribution of pixel values per gray level (15). Second-order features describe the spatial arrangement of voxels by calculating statistical relationships between neighboring voxels (16). First-order histogram features are more accessible than second-order features and are just as informative. In our study we extracted a total of 18 first-order features from hyperdense areas on NCCT images. Finally, entropy and kurtosis were identified as independent predictors of FR in patients with PTIH.
To our knowledge, our study is the first to construct a nomogram based on CTTA to predict FR in patients with PTIH. Our results showed that higher entropy and kurtosis were associated with a higher risk of FR. Sarioglu et al. (9) found that CT texture features were effective in predicting unfavorable functional outcome in AIS patients undergoing EVT. In contrast to our study, they extracted CT texture features not from hyperdense areas on NCCT images in patients with PTIH, but from early ischemic CT signs. Kanazawa et al. (17) reported that the mean CT value of clots in the subarachnoid space could predict clinical outcome in patients with subarachnoid hemorrhage, whereas we did not find a predictive role for the mean CT value.
There are several limitations in the present study. First, it was a single-center retrospective study that may involve some selection bias, and further multicenter prospective studies are needed to reduce bias. Second, ROIs were drawn by hand and relied on the experience of the observers; we attempted to address this issue by assessing the repeatability between two independent observers. Third, we did not extract second-order features, which are informative but relatively difficult to extract and analyze; more comprehensive CTTA studies are needed to further analyze the association between CTTA and FR in patients with PTIH. Finally, the sample size of the study was small, and large-sample studies are needed to confirm our findings.
In conclusion, we developed and validated a nomogram based on clinical factors and CT texture features extracted from hyperdense areas on CT images within half an hour after EVT to predict FR in patients with PTIH. The proposed nomogram was able to accurately stratify the risk of FR specifically in patients with PTIH. It may help clinicians to formulate more appropriate treatment plans for high-risk patients in the early post-EVT period and has clinical promotional value. In addition to predicting FR, CT texture analysis is valuable in other aspects of stroke, including the diagnosis of stroke lesions (18-20) and cerebral hemorrhage (21,22). In the future, we should intensify our research on CT texture analysis to fully unveil its value and expand its application.
FIGURE 2
FIGURE 2Distribution of the 3-month mRS scores for patients with and without PTIH.
FIGURE 5
FIGURE 5Calibration plot for predicting futile reperfusion in patients with PTIH in the development (A), internal validation (B) and external validation (C) cohorts.
FIGURE 4 The
FIGURE 4The ROC curve of the nomogram for predicting futile reperfusion in patients with PTIH in the development (A), internal validation (B) and external validation (C) cohorts.
TABLE 1
Comparison of baseline characteristics between the development cohort and the internal validation cohort. Values are listed as frequency (percentage), mean ± SD or median (interquartile range). The p-value represents the comparison between the development cohort and the validation cohort, with "a" representing the Mann-Whitney U-test, "b" the chi-square test, "c" Student's t-test, and "d" Fisher's exact test. SBP, systolic blood pressure; DBP, diastolic blood pressure; NIHSS, National Institutes of Health Stroke Scale; ASPECT, Alberta Stroke Program Early Computed Tomography Score; TOAST, Trial of Org 10172 in Acute Stroke Treatment; LAA, large-artery atherosclerosis; CE, cardioembolism; SOE, stroke of other determined etiology; SUE, stroke of undetermined etiology; ICA, internal carotid artery; MCA, middle cerebral artery; ODT, onset to door time; DRT, door to reperfusion time; ORT, onset to reperfusion time.
TABLE 2
Comparison of baseline characteristics of patients with futile and good reperfusion in the development cohort.
TABLE 4
Comparison of CT texture features in patients with futile and good reperfusion in the development cohort.
Values are listed as mean ± SD or median (interquartile range). P-values marked "a" were obtained with the Mann-Whitney U-test and those marked "b" with Student's t-test.
TABLE 3
Logistic regression analysis for predictors of futile reperfusion in patients with PTIH in the development cohort. NIHSS, National Institutes of Health Stroke Scale; ASPECT, Alberta Stroke Program Early Computed Tomography Score; DRT, door to reperfusion time; ORT, onset to reperfusion time. | 2024-04-21T15:17:15.755Z | 2024-04-19T00:00:00.000 | {
"year": 2024,
"sha1": "dc7c2b5e55849068c07ff287fb9c49b52c085cc1",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/journals/neurology/articles/10.3389/fneur.2024.1327585/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b8acb109a53656ef109191b5baf59b280bbf6ee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245337130 | pes2o/s2orc | v3-fos-license | A Supramolecular Nanoparticle of Pemetrexed Improves the Anti-Tumor Effect by Inhibiting Mitochondrial Energy Metabolism
In recent years, supramolecular nanoparticles consisting of peptides and drugs have been regarded as useful drug delivery systems for tumor therapy. Pemetrexed (PEM) is a multitarget drug that is effective for many cancers, such as non-small cell lung cancer. Here, RGD-conjugated molecular nanoparticles mainly composed of an anticancer drug of PEM (PEM-FFRGD) were prepared to deliver PEM to tumors. The peptide could self-assemble into a nanoparticle structure with diameter of about 20 nm. Moreover, the nanoparticle showed favorable solubility and biocompatibility compared with those of PEM, and the MTT test on A549 and LLC cells showed that the PEM-FFRGD nanoparticles had stronger cytotoxic activity than PEM alone. Most importantly, the nanoparticle could promote tumor apoptosis and decrease mitochondrial energy metabolism in tumors. In vivo studies indicated that PEM-FFRGD nanoparticles had enhanced antitumor efficacy in LLC tumor-bearing mice compared to that of PEM. Our observations suggested that PEM-FFRGD nanoparticles have great practical potential for application in lung cancer therapy.
INTRODUCTION
Lung cancer is the leading cause of cancer-related death worldwide, with approximately 1 million cases being recorded each year (Ferlay et al., 2019). Pemetrexed (PEM) is a multitarget antifolate drug used for cancer treatment (Adjei, 2003), and it was approved by the FDA in 2004 to treat non-small cell lung cancer (NSCLC) as well as malignant pleural mesothelioma (Manegold et al., 2009). Additionally, it was reported that PEM had a therapeutic effect on colorectal tumors and lung cancer (Hanauske et al., 2001; Schaer et al., 2019). It was reported that PEM could hinder translation inhibition under low glucose in NSCLC, and PEM combined with 2-deoxy-glucose showed enhanced efficacy in decreasing cell proliferation of malignant pleural mesothelioma (Piecyk et al., 2021). These findings suggest that PEM may be related to glucose metabolism. However, there are few reports studying the role of PEM in glucose metabolism. What is more, there are some limits to the clinical use of PEM, such as fast clearance, low water solubility, low bioavailability, poor targeting selectivity and penetration toward the tumor, and potential spleen and kidney toxicity (Li et al., 2007; Glezerman et al., 2011). Hence, it is necessary to further investigate and improve the antitumor efficiency of PEM.
In recent years, nano-delivery systems, such as poly(lactic-co-glycolic acid)/polylactic acid nanoparticles, gold/silica nanomaterials, polymeric nanoparticles, and hyaluronan, have proven to be promising for drug delivery and for improving the therapeutic effect (Liang et al., 2009; Liang et al., 2014; Cai et al., 2017; Amano et al., 2019; Sun et al., 2019; Naskar et al., 2021). Moreover, self-assembling peptide nanoparticles display huge prospects for the delivery of antitumor drugs due to their high loading capacity, good biocompatibility, inherent degradability, controllable drug release and easy preparation (Cheetham et al., 2013; Basu et al., 2016; Zhang et al., 2016). They can be used as carriers both for physical encapsulation of drugs (Wu et al., 2015; Sato et al., 2018) and for chemical conjugation between drug molecules and self-assembling peptides (Ren et al., 2014; Yang et al., 2019). The synthesized peptide-drug compounds can effectively improve the solubility and stability of hydrophobic drugs under physiological conditions and enhance the accumulation and retention of drugs in tumor tissues (Gao et al., 2018; Qi et al., 2018; Yang et al., 2019). Currently, a variety of antitumor peptide-drug conjugates that form injectable nanoparticles have been constructed and have shown intensive antitumor efficacy in vitro and in vivo (Murphy et al., 2008; Gao et al., 2009; Song et al., 2013). For example, PEM-FE conjugates have been reported to have better cytotoxic efficacy than free PEM (Lock et al., 2017). However, the therapeutic evaluation and mechanism were not studied further, and it has not been shown whether PEM-FE can specifically deliver PEM to tumors. Therefore, further optimization and investigation of peptide-based nanoparticles to deliver PEM is promising.
Many surface receptors that are overexpressed on cancer cells have been used for targeted delivery by nanoparticles. αvβ3 integrin is one receptor that is absent in normal tissues but highly expressed in many tumors, making it a suitable target for selective delivery (Chen and Chen, 2011). αvβ3 integrins can interact with the Arg-Gly-Asp (RGD) motif present in a majority of extracellular matrix proteins (Xiong et al., 2002; Graf et al., 2012). Several studies have suggested that nanoparticle (NP) systems modified with RGD can effectively target tumors (Wang et al., 2014; Zhao et al., 2014; Hou et al., 2016; Yadav et al., 2020). However, only a few studies have reported the targeted delivery of PEM (Lock et al., 2017). Therefore, RGD-conjugated nanoparticle delivery systems show great promise in cancer therapeutics.
In addition, the aromatic motif phenylalanine-phenylalanine (FF) is also often used for constructing supramolecular nanomaterials to improve self-assembly capability and regularize molecular arrangement (Ryan et al., 2015; Diaferia et al., 2019; Gallo et al., 2021; Li et al., 2021). In this study, we fabricated an RGD peptide-conjugated, self-assembling, FF-based supramolecular nanoparticle mainly formed by PEM to concurrently enhance active targeting and anticancer efficiency toward tumors. Self-assembling peptides are promising organic materials that are widely applied in tissue engineering, drug delivery systems and biomaterials because of their improved mechanical properties, stability, hydrophilicity and good biocompatibility (Ryan et al., 2015; Diaferia et al., 2019; Gallo et al., 2021; Li et al., 2021). PEM-FFRGD was synthesized by standard solid-phase peptide synthesis, and the PEM-FFRGD monomers can readily assemble into a particle-like nanostructure via pH adjustment and ultrasonication. Moreover, the prepared nanoparticles not only prominently enhance the solubility of PEM but also promote tumor apoptosis and decrease energy metabolism. As a result, this PEM-FFRGD nanoparticle boosted antitumor activity in vitro and in vivo compared with that of PEM alone. This supramolecular nanoparticle may provide a unique PEM delivery system for targeted therapy of lung cancer.
Animals
C57BL/6 mice (6-8 weeks) were supplied by Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China). All mice were maintained under specific pathogen-free conditions in the animal facility of Xinxiang Medical University. The Xinxiang Medical University Experimental Animal Ethics Committee approved all animal experiments.
Synthesis and Characterization of PEM-FFRGD Conjugate
The designed peptide PEM-FFRGD was synthesized by standard solid-phase peptide synthesis (SPPS) using 2-chlorotrityl chloride resin and N-Fmoc-protected amino acids. The crude PEM-FFRGD conjugates were purified through reversed-phase high-performance liquid chromatography (HPLC). LC-MS (Shimadzu, 2020) was utilized to characterize the PEM-FFRGD conjugates.
Preparation and Characterization of PEM-FFRGD Nanoparticle
The peptide (0.5 wt%) was dispersed in phosphate buffer solution (PBS), and sodium carbonate was added to adjust the pH of the solution to 7.4, followed by ultrasonication for self-assembly. A laser was used to identify aggregation of PEM-FFRGD. The nanostructures of the self-assembled PEM-FFRGD were viewed using transmission electron microscopy (TEM, HITACHI HT7700 Exalens) following a negative staining technique.
Cytotoxicity Analysis
Lewis lung cancer (LLC) and A549 cells (100 µl) were plated at a concentration of 5 × 10³ cells/well in a flat-bottomed 96-well plate on Day 0. On Day 1, cells were exposed to different concentrations of PEM and PEM-FFRGD for 72 h. Then, the culture medium was removed, and MTT (0.5 mg/ml) was added to each well and incubated for 4 h. The supernatant was discarded, and dimethyl sulfoxide (100 µl) was used to completely dissolve the crystals. Absorbance at 490 nm was detected using a microplate reader (EXL-800; Bio-Tek, Winooski, VT, United States).
Live/Dead Assay
LLC cells and A549 cells (5,000 cells/well) were plated in 96-well plates and cultured for 24 h. Then, the cells were cocultured with 6 and 12 µM PEM and PEM-FFRGD for 72 h, respectively. The cells were washed with PBS and stained with calcein-AM and ethidium homodimer-1 (EthD-1) for 15 min. Then, the cells were viewed under a microscope (TI-S, Nikon, United States).
Flow Cytometry Analysis of Apoptosis
LLC cells and A549 cells (1.0 × 10⁵ cells/well) were cultured in 12-well plates for 24 h, and then LLC cells and A549 cells were treated with 6 and 12 µM PEM and PEM-FFRGD for another 72 h, respectively. The cells were then collected, stained with annexin V and PI for 20 min at room temperature, and analyzed using flow cytometry.
Detection of ROS Levels
To detect reactive oxygen species (ROS), 2 × 10⁵ LLC and A549 cells were plated in 24-well plates and cultured for 24 h. Then, the cells were treated with 6 and 12 µM PEM and PEM-FFRGD for 48 h, respectively, following which the cells were stained with DCFH-DA (10 µM) for 20 min, with CCCP treatment (10 μM) as a positive control. After staining, the cells were washed, resuspended in PBS, and then analyzed using flow cytometry.
Measurement of Mitochondrial Membrane Potential
The mitochondrial membrane potential (ΔΨm) was measured using a mitochondrial detection kit (Beyotime, Shanghai, China). LLC and A549 cells were cultured on coverslips in 24-well plates at 5 × 10⁴ cells/well and incubated for 12 h. Then, PEM and PEM-FFRGD (6 µM) were added and incubated with the LLC cells, while the A549 cells were incubated with 12 µM PEM-FFRGD and PEM. After 48 h, the cells were stained with JC-1 and the nuclei with DAPI, followed by visualization using confocal laser scanning microscopy (CLSM).
Metabolic Assessments
ECAR detection was performed using an XF24e Extracellular Flux Analyzer (Agilent). Cells with different treatments were seeded on an XF24 microplate at 100,000 cells/well 1 day prior to the assay and cultured in a CO₂ cell incubator for 24 h. Glucose, oligomycin, and 2-DG were loaded into the corresponding injection ports of the sensor cartridges, and the utility plate was placed on the instrument tray for calibration; the cell microplates were placed in a 37 °C incubator without CO₂ for 45-60 min prior to the run, with warmed detection medium at 500 μl/well. The cell culture microplate was then loaded on the instrument following calibration. Wave 2.4 software (Agilent) was used to analyze the data.
In vivo Antitumor Efficacy
C57BL/6J mice were subcutaneously inoculated with Lewis lung tumors and then randomly divided into four groups (5 mice per group) when the tumor sizes were approximately 100 mm³. Mice in Groups 1 and 2 received PBS and FFRGD gels through i.v. injections, while mice in Groups 3 and 4 were administered 20 mg/kg PEM and PEM-FFRGD nanoparticles by i.v. injection. Repeated injections were administered on Days 1 and 5. Tumor size and body weight were monitored every 2 days, and the tumor volume was calculated according to the following formula: tumor volume (V, mm³) = (tumor width)² × (tumor length)/2.
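For clarity, a minimal sketch of this volume formula (the measurement values are illustrative, not data from the study):

```python
# Minimal sketch of the tumor-volume formula above: V = width^2 * length / 2.
def tumor_volume_mm3(width_mm: float, length_mm: float) -> float:
    return (width_mm ** 2) * length_mm / 2.0

# Illustrative example: a tumor 8 mm wide and 12 mm long -> 384 mm^3
print(tumor_volume_mm3(8.0, 12.0))
```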
Statistical Analysis
GraphPad Prism 8.0 was used for statistical analyses. All data are presented as the means ± SD. Statistical analysis was performed using Student's t-test, and a difference between groups of p < 0.05 was considered significant.
Preparation and Characterization of PEM-FFRGD Nanoparticle
We recently developed a self-assembling peptide (PEM-FFRGD), mainly containing PEM, which is capable of forming supramolecular nanoparticles (Figure 1A) and avoiding non-specific interactions with normal tissues. The concept of self-assembling peptide-drug compounds is one in which the drug loading capacity can be precisely controlled at the level of molecular design (Su et al., 2015; Ma et al., 2016). We intended to further investigate its application in the treatment of tumors. In this compound, the PEM drug loading in the PEM-FFRGD conjugate was 39 wt%, calculated as the ratio of the molecular mass of PEM to that of the PEM-FFRGD conjugate (Supplementary Figure S1). Moreover, the conjugate structure was confirmed by LC-MS (Supplementary Figure S2). The m/z value of PEM-FFRGD was found to be 1,050.4, in accordance with the expected exact masses of the compounds. We dispersed the PEM-FFRGD compound in phosphate-buffered saline (PBS; pH 7.4) at a concentration of 0.5 wt%, and a transparent solution was formed after adjusting the pH to 7.4 and applying ultrasonication (Figure 1B). The solution exhibited an obvious Tyndall effect compared with the solution of PEM-FFRGD monomer, indicative of the formation of nanoaggregates (Figure 1B). Then, the microstructure of the self-assembled PEM-FFRGD was examined using TEM. The results showed that the PEM-FFRGD compound formed nanoparticles with diameters of approximately 20 nm (Figure 1C).
PEM-FFRGD Enhanced the Cytotoxicity Activity
To investigate the cytotoxic activity of the PEM-FFRGD nanoparticle in vitro, standard MTT viability assays were performed on A549 and LLC cells, with PEM and FFRGD as controls. As shown in Figure 2A, PEM-FFRGD showed stronger cytotoxic activity toward A549 and LLC cells than PEM alone. The viability of LLC cells was 50% after treatment with 0.2 µM PEM-FFRGD, and the viability decreased to 26% following treatment with 6 µM PEM-FFRGD. Similar results were obtained for A549 cells exposed to PEM-FFRGD (Figure 2B). Neither PEM-FFRGD nor PEM showed cytotoxicity when LLC and A549 cells were incubated with different concentrations for 24 h. When the incubation time was extended to 48 h, PEM-FFRGD showed cytotoxic activity toward LLC and A549 cells, and the viability of cells treated with PEM-FFRGD was lower than that of cells treated with PEM (Supplementary Figure S3). Similar results were obtained in live/dead assays (Supplementary Figure S4). These results may be attributed to the RGD motif on PEM-FFRGD resulting in greater accumulation in tumor cells. Our results also suggested that 100 μM FFRGD did not display any distinct cell toxicity, indicating that the PEM-FFRGD nanoparticle has good biocompatibility.
We further examined the cytotoxic activity of PEM and PEM-FFRGD by live/dead staining. A549 and LLC cells were plated into 96-well plates and exposed to 12 and 6 μM PEM-FFRGD, respectively, with PEM used as a control. After coculture for 72 h, the cells were stained with calcein-AM (green) and EthD-1 (red) and visualized using a microscope. As displayed in Figures 2C,D, PEM-FFRGD killed most A549 and LLC cells, with only a few green signals observed. Additionally, PEM killed most cells but exhibited more green fluorescence, specific to living cells, than PEM-FFRGD. These results indicate that PEM-FFRGD has stronger cytotoxicity against A549 and LLC cells, which may be due to the sustained release of PEM from the PEM-FFRGD gel.
Apoptotic Mechanism of PEM-FFRGD
Annexin V/PI staining was used to investigate the influences of PEM-FFRGD on the apoptosis of LLC and A549 cells. As shown in Figure 3, both PEM-FFRGD nanoparticles and PEM induced significant apoptosis signals in LLC and A549 cells when the cells were incubated with these drugs. In addition, PEM-FFRGD induced stronger apoptosis in either LLC or A549 cells than PEM, with killing rates of 73 and 62%, respectively.
PEM-FFRGD Suppresses Energy Metabolism
Tumor cells can reprogram their metabolism to meet their bioenergetic and biosynthetic needs, and increased glycolysis is a main biochemical feature of tumors (Ma et al., 2021). Tumor glycolysis plays a vital role in rapid tumor growth and cancer metastasis because it provides energy (Elia et al., 2018). Therefore, we examined the effect of PEM-FFRGD on glycolytic metabolism in LLC and A549 cells using an XF24e extracellular flux analyzer. We found that the extracellular acidification rate and glycolysis level of PEM-FFRGD-treated LLC cells were remarkably lower than those of the PEM group and the control group (Figures 4A,B). Moreover, the glycolytic capacity and glycolytic reserve were also lower than those of the control group and the PEM group. Similar results were found for A549 cells exposed to PEM-FFRGD (Figures 4C,D). These data suggest that PEM-FFRGD nanoparticles are more favorable for regulating the metabolic demands of tumor cells and thus suppressing tumor cell growth and proliferation.
PEM-FFRGD Increases Intracellular ROS Production and Decreases the Mitochondrial Membrane Potential
The preceding results suggested that PEM-FFRGD regulated the energy metabolism of LLC and A549 cells, and the main site of energy metabolism is the mitochondria. Damage to mitochondrial function or the electron transport chain will directly lead to an increase in cellular ROS levels. To further explore the mechanism of the effect of PEM-FFRGD on energy metabolism, we measured intracellular ROS levels using a ROS assay kit. As shown in Figures 5A,B, after LLC and A549 cells were cultured with PEM-FFRGD or PEM for 48 h, the amount of ROS produced in PEM-FFRGD-treated cells was distinctly higher than that in PEM-treated cells, indicating that PEM-FFRGD was more effective in mediating ROS production. These results are strongly consistent with the cytotoxicity data acquired from the MTT test and the live/dead assay.
In addition, a specific fluorescent probe was used to detect the mitochondrial membrane potential to verify the influence of PEM-FFRGD on mitochondrial respiration. The results showed that PEM-FFRGD decreased the mitochondrial membrane potential in both LLC and A549 cells significantly more than the PEM group and the control group (Figures 6A,B). In summary, PEM-FFRGD induced ROS accumulation and decreased the mitochondrial membrane potential in A549 and LLC cells.
In vivo Efficacy of the PEM-FFRGD in an LLC Xenograft Model
The efficacy of the fabricated PEM-FFRGD was further investigated as a therapy for LLC tumors in C57BL/6J mice in vivo. LLC tumor cells were injected under the skin of the mice, and then PEM-FFRGD and PEM were administered intravenously on Day 1 and Day 7, when the tumor volume reached approximately 100 mm³. The effect of the different drugs on tumor growth was assessed by measuring tumor volumes over 2 weeks. The tumors in the control group and the gel-alone group grew very rapidly, while tumor growth in the PEM-FFRGD and PEM groups was significantly inhibited. Moreover, compared with the PEM group, the tumor volume in the PEM-FFRGD-treated group was smaller (Figures 7A,B). No distinct differences in the weight of the mice were observed among the groups (Figure 7C), suggesting that neither the gel nor PEM had an influence on mouse weight. In addition, we examined the serum levels of TNFα, IL-6, ALT and AST in the mice. The results suggested that the levels of TNFα, IL-6, ALT and AST in mice treated with PEM-FFRGD did not differ from those in mice treated with PBS or PEM alone (Figures 6G, 7D). These results implied that PEM-FFRGD has no systemic toxicity in mice.
CONCLUSION
In summary, we developed a PEM-based theranostic filament nanoparticle system (PEM-FFRGD) that can target the tumor site through RGD motifs. Our results showed that the nanoparticle not only significantly inhibited LLC and A549 growth in vitro but also suppressed LLC growth in vivo, accompanied by favorable biocompatibility, leading to synergistic suppression of cell proliferation, promotion of tumor apoptosis and a decrease in the mitochondrial energy metabolism of tumors by enhancing intracellular ROS production and abating the mitochondrial membrane potential. The conjugate PEM-FFRGD could sustainably release PEM and displayed great advantages and powerful antitumor efficacy in targeting LLC xenograft tumors, on the basis of the inhibition of energy metabolism and tumor-targeted administration, in vitro and in vivo. The improvement in antitumor activity is mainly attributed to the RGD and FF peptides, which release PEM slowly and prevent nonspecific uptake of PEM by normal tissue. These data indicate that the PEM-FFRGD conjugate offers a strategy to effectively target the presentation of PEM and RGD peptides to LLC cells and restrain tumor growth in a synergistic manner.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The animal study was reviewed and approved by Xinxiang Medical University Experimental Animal Ethics Committee.
AUTHOR CONTRIBUTIONS
HL and CG conceived and designed the project. ZW supervised this project. YS and LZ conducted the experiments and analyzed experimental data. LZ and HJ performed in vivo mice test. YS performed the statistical analysis. HL wrote the manuscript, and all authors discussed the results and proofread this paper. | 2021-12-21T14:09:40.441Z | 2021-12-21T00:00:00.000 | {
"year": 2021,
"sha1": "647f4c10415c918f1d5c75df66f2ef5d225b9b1c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "647f4c10415c918f1d5c75df66f2ef5d225b9b1c",
"s2fieldsofstudy": [
"Medicine",
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218954006 | pes2o/s2orc | v3-fos-license | Heuristic Analysis for In-Plane Non-Contact Calibration of Rulers Using Mask R-CNN
: Determining an object measurement is a challenging task without having a well-defined reference. When a ruler is placed in the same plane of an object being measured it can serve as metric reference, thus a measurement system can be defined and calibrated to correlate actual dimensions with pixels contained in an image. This paper describes a system for non-contact object measurement by sensing and assessing the distinct spatial frequency of the graduations on a ruler. The approach presented leverages Deep Learning methods, specifically Mask Region proposal based Convolutional Neural Networks (R-CNN), for rulers’ recognition and segmentation, as well as several other computer vision (CV) methods such as adaptive thresholding and template matching. We developed a heuristic analytical method for calibrating an image by applying several filters to extract the spatial frequencies corresponding to the ticks on a given ruler. We propose an automated in-plane optical scaling calibration system for non-contact measurement.
Introduction
Non-contact object measurement has a long history of applications in several fields of industry and study. A few applications include measuring industrial fractures/cracks [1], measuring plant/leaf size [2], wound morphology [3], forensics [4], and archaeology [5]. Non-contact object measurement refers to the measuring of objects via a method or device which does not interrupt or come in contact with the object of focus. This can either be done with a device in real time, or via software, after an image of the object is captured. To measure captured images a reference or proxy marker is often used to spatially calibrate the resolution of the image [6,7]. In this sense, a graduated device or ruler placed in close proximity can also be used to better spatially comprehend the contents of an image. Therefore a reference marker, specifically a ruler, can be used to spatially register the size of the contents in an image. For these images, the digital measurement in pixels (px) and the spatial reference marker, such as a ruler, would need to be captured in the same plane and then measured. A common metric for the combination of these two measurements is dots per inch (DPI) which is the number of pixels per inch.
Often, measuring an image in pixels involves manual work somewhere in the pipeline, and in order to maintain a confident level of accuracy and consistency the task becomes time consuming, setting aside the capabilities of semantic segmentation [8,9] for the moment. Dynamically or automatically achieving this level of image calibration would overcome two limitations. The first is that manually measuring each image in a database can take many human hours. The second is that manual measurement introduces subjectivity into a task that is objective in nature, meaning different measurements will be obtained by different people for the same image. This second hurdle can be evaluated by considering the aforementioned applications' individual needs for consistently accurate measurements.
In this work, we chose to implement non-contact object measurement for images containing rulers. This is primarily because rulers are well standardized and readily available in common practice. This system can be used either to automatically convert pixel measurements for an image with an object and a ruler or to manually measure elements with the generated graduation-to-pixel ratio. If measurements are provided, the system is able to take the object's morphological data, measured in pixels, and convert it to whichever graduation system was provided.
This system is capable of calculating the graduation-to-pixel ratio, in DPI or DPM (dots per millimeter), of an image provided it contains a ruler. To do this, we created a heuristic technique for approximating the distances between the graduations on a ruler. In this work, the resulting measurement in pixels from one graduation to another translates to the mode of the spatial frequencies along a region of a ruler, i.e., the element that occurs most often in the set of spatial frequencies.
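As a hedged sketch of this core idea (the tick positions and the 1 mm graduation pitch below are illustrative assumptions, not values from the paper), the mode of the pixel spacings between detected graduation marks can be converted into dots per millimeter and dots per inch:

```python
from collections import Counter
import numpy as np

tick_positions_px = np.array([12, 31, 50, 69, 88, 107, 126])   # detected tick centers (illustrative)
spacings = np.diff(tick_positions_px)                           # pixel gaps between ticks
px_per_tick = Counter(spacings.tolist()).most_common(1)[0][0]   # mode of the spacings

graduation_mm = 1.0                  # assumed pitch of the ruler graduations
dpm = px_per_tick / graduation_mm    # dots (pixels) per millimeter
dpi = dpm * 25.4                     # dots per inch
print(f"{dpm:.1f} px/mm  ({dpi:.0f} DPI)")
```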
Hough transforms have been cited as perhaps the most popular technique for measuring rulers in images for determining scale and measuring objects. Hough lines are extracted following a methodology similar to edge detection and often require some initial edge metric to retrieve a result. Although this is a very straightforward solution to the problem, it typically requires specific parameters for different rulers and images. One system was developed which uses the Hough transform to detect measurement tools such as rulers and concentric circles of fixed radii [10]. This system works by gathering several regions with vertical lines with respect to a large horizontal line. The filtering step of the method examines the sample mean, variance and standard deviation to evaluate the weighted input information for graduations on a ruler. The specified inputs to the system are the type of ruler, the graduation distances, and the minimum number of elements to be considered [10]. For the two images listed under the results section of their paper, given for manual vs. semi-automatic measurement, the measurements varied from 2.3 mm to 0.027 mm.
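For reference, a hedged OpenCV sketch of the Hough-transform approach used in the cited works (not the heuristic proposed in this paper): detect short, near-vertical line segments that correspond to ruler graduations. The file name and all thresholds are assumptions and typically need per-image tuning, which is one reason we chose not to rely on Hough transforms.

```python
import cv2
import numpy as np

img = cv2.imread("ruler.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                        minLineLength=10, maxLineGap=2)
# Each entry of `lines` is [[x1, y1, x2, y2]]; near-vertical short segments are
# candidate graduation marks whose x-coordinates give the spacing between ticks.
```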
Another method, which uses the Hough transform in forensics [11], extracts the spatial frequencies of a ruler first by using a two-dimensional Discrete Fourier Transform (2-D DFT) modeled after a known ruler. Both this methodology and that of Calatron et al. [10] are largely similar in how they extract the graduation distances, although here the evaluation is done using a block-based DFT. This method is capable of sampling at a sub-pixel level, which is a necessity when assessing some forensic samples such as fingerprints [11]. Additionally, this method appears to work only for forensic rulers, which have very unique features. Our system was not targeted at one particular reference ruler; therefore we chose to forgo Hough transforms, as they would only narrow the solution to a specific category or ruler type.
Object detection and recognition have been major topics of computer vision, with a variety of solutions addressing several problems such as character recognition, semantic segmentation, self-driving cars, and visual search engines [12-16]. In brief, object detection is a way to identify whether or not instances of an object are contained in an image [17], and object recognition is a way to classify an object given a label that characterizes that object [18]. State-of-the-art solutions for object detection and recognition today rely on one or a few combinations of deep neural network techniques. In particular, Region proposal based Convolutional Neural Networks (R-CNN) [19] are among the most proficient. Others, such as You Only Look Once (YOLO) [20], Fast YOLO [21], Single Shot Detector (SSD) [22], Neural Architecture Search Network (NASNet) [23], and Region-based Fully Convolutional Network (R-FCN) [24], apply different approaches. These discoveries and optimizations, along with advances in R-CNNs, all occurred in a relatively short period of just a few years. Enhancements to the standard R-CNN architecture, such as Fast R-CNN [25], Faster R-CNN [26], and Mask R-CNN (MRCNN) [27], have led to improved speed, accuracy, and other benefits in the realm of object detection and recognition [28,29].
Year over year, smartphone devices and cameras are used for an increasing number of smart technologies and applications. Our heuristic system uses a combination of methods as a novel solution for detecting and measuring rulers to scale objects in the same plane. The capacity of our system is made possible by taking advantage of the high-resolution images that have become the norm over the past few years. Our proposed method can be split into two distinct parts in order to achieve heuristic analysis for in-plane non-contact calibration of rulers. First, we perform semantic segmentation by adapting the Mask R-CNN architecture, which extracts the area containing a ruler. In Figure 1 this area is represented as the "Segmented Result", which is where the proposed heuristic method begins to assess the ruler. We then perform the in-plane non-contact measurement of the segmented ruler, returning a calibrated measurement; the architecture in this figure represents the proposed method end to end. In Sections 3 and 4, we will expand on the outcome of the region for "Pixel Conversion", which is an integer given as pixels per unit of measurement, e.g., pixels per millimeter.
Materials and Methods
In this section we describe our database, which was used for both training and validating the Mask R-CNN model. The same dataset was used in the development and testing of the calibration method. Then, we briefly provide an overview of our adaptation of Mask R-CNN for instance segmentation and object detection, and how we interpret its outputs for the calibration method. Finally, we describe in detail the procedure for our proposed heuristic calibration method of a ruler.
Resources-Database
To create the database, we segmented and labeled images containing rulers from a proprietary dataset. To perform the segmentation, a semi-automated segmentation tool was created for generating binary masks and their corresponding JavaScript Object Notation (JSON) mapping. The semi-automated tool allows the user to plot points using a mouse; in real time, the software draws a line connecting consecutive points. The points from the user and the lines created bound the segmented object and correspond to regions. These regions are converted into a JSON object and used as the ground truth for MRCNN. In total, 430 images were segmented and labeled; 322 images were used for training and the remaining 108 were used to validate the model. Some images in our database contain artifacts such as hands or clothes that interfere with or cover parts of the rulers. We include these images for a more realistic outcome from the training. Each ruler was marked with the class "Ruler" to denote the segmented object or region contained within that image. Images in our database had resolutions ranging from 800 × 525 pixels to 5184 × 3456 pixels. Across the entire dataset, 61.2% of the images were larger than 1920 × 1080 pixels and 4.66% of the images were 800 × 525 pixels. These images were taken with a variety of devices, including smartphones. We did not procure this database ourselves; it was provided by a third-party source.
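As a rough illustration of how such polygon annotations can be turned into training data, the sketch below converts user-clicked vertices into a binary mask and a JSON record. The field names ("filename", "regions", "all_points_x", and so on) are illustrative assumptions; the exact schema produced by the authors' tool is not described in the paper.

```python
# Sketch: convert user-clicked polygon points into a binary mask and a JSON
# ground-truth record. The JSON field names are illustrative assumptions.
import json
import numpy as np
import cv2

def polygon_to_ground_truth(image_path, points, image_shape, class_name="Ruler"):
    """points: list of (x, y) vertices clicked by the user, in order."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    polygon = np.array(points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [polygon], 255)          # binary mask of the ruler region

    record = {
        "filename": image_path,
        "regions": [{
            "class": class_name,
            "all_points_x": [int(x) for x, _ in points],
            "all_points_y": [int(y) for _, y in points],
        }],
    }
    return mask, record

# Example usage: one annotated image written to a JSON annotation file.
mask, record = polygon_to_ground_truth(
    "ruler_001.jpg",
    [(120, 40), (900, 60), (890, 180), (110, 160)],
    image_shape=(1080, 1920, 3))
with open("annotations.json", "w") as f:
    json.dump([record], f, indent=2)
```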
Mask R-CNN Adaptation for Ruler Segmentation
Within the field of computer vision, semantic segmentation provides a way of separating or extracting objects within an image by classifying several small groups of pixels that correlate to a larger single group or class [9,30]. Mask R-CNN, a popular extension of Faster R-CNN (FRCNN) and an architecture for object instance segmentation, has the capacity to perform semantic segmentation, object localization or classification, and masking [27]. This architecture won the COCO dataset challenge in 2017 and has been adapted to many different datasets with top-of-the-line results [28,29,[31][32][33][34]. It makes several improvements to FRCNN. One of these is how regions of interest (RoI) are used within the model: the instance segmentation of FRCNN is improved by the application of what the authors call RoIAlign, a feature-map RoI identifier that computes the value of each sampling point using bilinear interpolation from the adjacent grid points [27]. A second improvement comes from the addition of a fully convolutional network (FCN) to perform the masking, which occurs in parallel with the classification and bounding box extraction. The FCN [9] makes use of a Feature Pyramid Network (FPN) [35], which performs the feature extraction for the classification. This is then used to propose the bounding box around the object of interest, which is essential to our method because it gives us, with high confidence, the boxed area containing a ruler. The 'Mask' in Mask R-CNN is the last improvement to the FRCNN model we will discuss, and it is the reason we chose to use MRCNN in our system. As an output, the model produces a binary mask containing the area of the object of focus, in this paper the area of the ruler, based on the prediction for the class and the RoIs. This follows from making several region proposals of different sizes around each instance of an object. The mask output consists of m × m masks generated from the FCN and is structured by RoIAlign [27].
The binary mask is our preferred target output of Mask R-CNN, as it provides the least amount of noise with the maximum amount of information needed for our system to calibrate an image. With the mask we can extract only the ruler from the original image and then immediately perform our calibration method. However, this procedure is not perfect, so we additionally take advantage of the bounding box, which will contain enough of a ruler for the calibration method to execute. The calibration method we propose is also capable of handling outputs of this type. In Figure 1 a bounding box result is used to show the full scope of the architecture. In Figure 2, masks are used to show the capability of Mask R-CNN and why it is used in this paper for extracting a ruler from an image. We applied some constraints to the bounding box output and discuss the drawbacks in Section 2.3, but we still consider it an acceptable intermediate outcome of our system.
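The paper does not state which Mask R-CNN implementation was adapted. As a hedged illustration only, the sketch below shows how an off-the-shelf Mask R-CNN (torchvision's maskrcnn_resnet50_fpn, assumed here) can be reconfigured for a single "Ruler" class so that it returns the masks, bounding boxes, and confidence scores used downstream.

```python
# Sketch of adapting an off-the-shelf Mask R-CNN to a single "Ruler" class.
# torchvision's model is assumed purely for illustration; it is not the
# implementation used in the paper.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_ruler_maskrcnn(num_classes=2):  # background + "Ruler"
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

    # Replace the box classification head for our two classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask prediction head as well.
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model

# Inference: masks, boxes, and confidence scores for one image tensor.
model = build_ruler_maskrcnn()
model.eval()
with torch.no_grad():
    prediction = model([torch.rand(3, 1080, 1920)])[0]
# prediction["masks"], prediction["boxes"], prediction["scores"]
```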
Heuristic Scale Calibration
To generate the calibrated output, our method follows three steps: Preparation, Deep Searching, and Calibration/Conversion. Each of these steps, and its contribution to the system, can be seen in Figure 2. In this figure, in the preparation step, the output of Mask R-CNN can be seen as either a segmented ruler or a bounding box. The preparation step includes cropping, down-sampling, removal of noise, and testing the image for spatial frequencies before the deep search. In the deep search, the method searches the ruler by looking for spatial frequencies corresponding to the graduations or ticks on a ruler.
In the validation, the last part of Figure 2, the target measurement system is identified and the calibration, or conversion from pixels, is performed. This is a mapping of the number of ticks in a region to the number of pixels expected for the equivalent space. We assume the calibrated output will be a measurement in the metric or imperial system, because we expect a visible and referenceable ruler contained in the image.
Preparation
Preparing the Mask R-CNN result for the deep search starts with a simple test to determine whether the image has any spatial frequencies that can be searched. This test is a lite version of the Deep Search but gives an adequate indication of whether the output is searchable. The decision of this step is based on the detection score achieved using MRCNN. In short, if the test fails we know that there were very few spatial frequencies in the Mask R-CNN result; one example of this is a picture of a plain surface. In the event that there is no mask, the system will default to searching the bounding box from here on. A search on the bounding box is slower, as the search window will include excess background area. The amount of background acceptable for searching with our method should be less than roughly two times the area of the ruler. Decreased performance is expected when the background is large, as the search would take too long to process. We will refer to the resulting image from MRCNN as the RulerImage. The RulerImage is then converted to grayscale and template matching is performed. Template matching in this process is used to reduce the search area to a small window of the ruler containing one dense region of graduations, which we will call the TemplateWindow. The TemplateWindow can be seen in Figure 2 and is the only region of the image that is searched in the Deep Search. For template matching, we use the Template Matching Correlation Coefficient (TM_CCOEFF), which slides through an image and seeks the best sum of pixel intensities closest to the template following 2D convolution [36].
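A minimal sketch of this TemplateWindow extraction is given below, using OpenCV's matchTemplate with the TM_CCOEFF score. The 150-500 px clamping follows the bounds described in the next paragraph, while the way the synthetic template is resized is an assumption.

```python
# Sketch of the TemplateWindow extraction with OpenCV's TM_CCOEFF matching.
import cv2
import numpy as np

def extract_template_window(ruler_image_gray, template_gray):
    # Clamp the window size to the 150-500 px bounds used by the system.
    side = min(ruler_image_gray.shape[:2])
    side = int(np.clip(side, 150, 500))
    template = cv2.resize(template_gray, (side, side))

    # Slide the template over the image and keep the best-scoring location.
    result = cv2.matchTemplate(ruler_image_gray, template, cv2.TM_CCOEFF)
    _, _, _, max_loc = cv2.minMaxLoc(result)   # global maximum of R(x, y)
    x, y = max_loc
    return ruler_image_gray[y:y + side, x:x + side]
```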
A patch or window of size w × h from the image, denoted I, is compared against a template, denoted T, creating an array of results that represent the matches [36]. The best match is selected with TM_CCOEFF, which chooses from the result matrix, denoted R(x, y), the position having the global maximum. The template ruler was manually created and designed with the dimensions of a metric ruler; however, our system can identify an imperial ruler with the same template. The size of the TemplateWindow is bounded between 150 × 150 and 500 × 500 px and its value is calculated based on the size of the RulerImage; this is to reduce computing time. The size of the TemplateWindow is initially calculated as the length of the shortest dimension of the segmented result. It is then incrementally reduced to fit within the aforementioned bounds. The size reflects the pixel density of the image, where 500 px corresponds to images larger than 2560 × 1440 px. The lower bound is the minimum resolution the system can search with and is used for images as small as 800 × 525 px. To finish preparations for the deep search, the image is converted to black and white using adaptive thresholding. We chose this method over others, such as the Otsu method (https://www.mdpi.com/search?q=otsu), because it does not look at the global values of the image. Adaptive thresholding converts a grayscale image to black and white by splitting the image into windows and taking the local minimum/maximum of each window. The method for adaptive thresholding in OpenCV (Open Computer Vision) [36] takes a thresholding method as a parameter, such as Otsu, but for our system binary thresholding was used. We used ADAPTIVE_THRESH_GAUSSIAN_C as the adaptive method because it takes the weighted sum of the neighborhood instead of the mean [36]; the mean value produces results closer to the Otsu method, which considers only the mean value of the image. For the purpose of removing localized noise, we used the Gaussian_C method to ensure better results. The window size must also be specified by a kernel, similar to convolution. Adaptive thresholding has other uses, such as edge/object detection [29] and feature extraction/segmentation [37][38][39].
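The binarisation step can be sketched as follows with OpenCV's adaptiveThreshold; the block size (the kernel window) and the constant C are not reported in the text, so the values used here are illustrative assumptions.

```python
# Sketch of the binarisation step with Gaussian-weighted adaptive thresholding.
# block_size and C are not reported in the paper; the values are assumptions.
import cv2

def binarise_window(template_window_gray, block_size=31, C=5):
    return cv2.adaptiveThreshold(
        template_window_gray,
        255,                              # value assigned to "white" pixels
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,   # weighted sum of the neighbourhood
        cv2.THRESH_BINARY,                # binary thresholding, as in the text
        block_size,                       # local window size, must be odd
        C,                                # constant subtracted from the weighted mean
    )
```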
Deep Searching
In our system, we define the graduations on a ruler, in simple computer vision terms, as the region in any RulerImage or TemplateWindow with the highest uniform spatial frequency pattern. The spatial frequency f (Equation (1)) is defined as the number of graduations or ticks under a line, given the width of black pixels w_i within a threshold t.
To search for this region, we start by creating an array or line with N points, where 150 <= N <= 500; the exact value is the width of the TemplateWindow, found in Section 2.3.1. We chose this range based on the variable image sizes in our database, and therefore each TemplateWindow is between 150 × 150 and 500 × 500 pixels. We selected the lower bound of 150 to maintain a level of fidelity in the image, which should be considered when initially capturing the image. The upper bound of 500 was selected because it is big enough to perform a thorough search on large images and small enough to still execute quickly. The search is repeated for each of the corners of the TemplateWindow, such that each corner searches in a 90 degree arc about the image. This is demonstrated in Figure 2 as the InitialSearch. Each of the search parameters is based on the size of the image, where the number of searches depends on X, the width of the image, N, the image width of the target image size, R, the quotient of N over 500, and i, the search iterator. The value i is used to incrementally reduce the search space, because we expect to improve iteratively. Every search has a window or width that depends on m, a constant based on the target image size. We selected the constant m based on the target RulerImage size such that the number of searches and the search windows would be proportional to the size of the RulerImage. These constants ensure that even if a TemplateWindow is very small, i.e., 150 × 150 pixels, the image will still be searched. After each iteration of the search, the window size and the number of searches per window decrease. Figure 3a presents the combined rendering of the Deep Search method. In Figure 3b each of the regions has been rendered separately, noting the empty or white areas where the search stopped or skipped searching altogether. In Figure 3c the rendering shows two zoomed arbitrary regions of Figure 4a where the ticks on the ruler are located, noting the increased number of searches in this small region. After each search the list of results is filtered to determine whether the average number of graduations or spatial frequencies has improved. The first filter removes results that were less than the average uniform spatial frequency, from Equation (1), in the most recent search. The second filter removes results with a large slope with respect to both the vertical and horizontal axes. The two filters drop, on average, half of the search results. The deep search looks for an improved spatial frequency over each iteration of results. The search concludes when either of the following is true: (1) the search has gone on for too long, or (2) the average uniform spatial frequency has not changed. The limited number of searches is meant to provide an extensive yet deterministic result, as there could be marginal improvement over many searches. From experimentation, the first condition typically means there was no good result to pick, i.e., the image was not very clear. An unchanged average uniform spatial frequency, or the same number of measured graduations, is the desired outcome of the deep search, where there will either be only one spatial frequency or a set of spatial frequencies equivalent to the average. In the worst case, the average spatial frequency could decrease after a search. In this scenario, we resort to memoization (https://en.wikipedia.org/wiki/Memoization) and expect the best result to exist in the previous search.
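Because the exact expressions for the number of searches and the window width are not reproduced above, the following is only a heavily simplified sketch of the coarse-to-fine search loop. It keeps the spirit of the first filter and the two stopping conditions; the halving schedule and the omission of the slope filter are assumptions.

```python
# Heavily simplified sketch of the coarse-to-fine "deep search". The schedule
# below is an assumption that only mirrors the described behaviour: iterative
# shrinking, dropping below-average candidates, stopping when the average
# stops improving (memoization keeps the previous result in that case).
def deep_search(count_ticks, candidate_lines, max_iterations=10):
    """count_ticks(line) -> number of uniform graduations found under a line."""
    best_average, best_lines = 0.0, list(candidate_lines)
    for _ in range(max_iterations):
        scores = [(line, count_ticks(line)) for line in best_lines]
        average = sum(s for _, s in scores) / max(len(scores), 1)

        # Filter 1: drop candidates below the running average spatial frequency.
        survivors = [line for line, s in scores if s >= average]

        if average <= best_average:        # no improvement: keep previous result
            break
        best_average, best_lines = average, survivors

        # Shrink the search: keep only the upper half of the surviving candidates.
        survivors.sort(key=count_ticks, reverse=True)
        best_lines = survivors[:max(len(survivors) // 2, 1)]
    return best_lines, best_average
```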
When the search exhausts all the available options, we retrieve a single spatial frequency. The red line in Figure 4 is the final resulting line, and its corresponding spatial frequency is given below. In the final calculation we consider both the black and the white widths in the length of a single graduation. We measure the value for a single graduation as M_f, the sum of the modes of the black spatial frequency M_b and the white spatial frequency M_w:

M_f = M_b + M_w,

where we expect M_f to correspond to one millimeter in the metric system or 1/16 of an inch in the imperial system.
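A small sketch of this measurement is shown below: the black-run and white-run widths along a sampled line of the binarised TemplateWindow are collected, and M_f is taken as the sum of their modes. The line is assumed to be available as a 1-D array of 0/255 values.

```python
# Sketch of measuring one graduation as M_f = M_b + M_w: the modes of the
# black-run and white-run widths (in pixels) along a sampled line.
from collections import Counter
import numpy as np

def run_lengths(line, value):
    runs, width = [], 0
    for pixel in line:
        if pixel == value:
            width += 1
        elif width:
            runs.append(width)
            width = 0
    if width:
        runs.append(width)
    return runs

def pixels_per_graduation(line):
    m_b = Counter(run_lengths(line, 0)).most_common(1)[0][0]     # mode of black widths
    m_w = Counter(run_lengths(line, 255)).most_common(1)[0][0]   # mode of white widths
    return m_b + m_w                                             # M_f: ~1 mm (or 1/16 in)

# Example: alternating 2-px ticks and 7-px gaps -> M_f of 9 pixels per graduation.
line = np.tile([0, 0] + [255] * 7, 20)
print(pixels_per_graduation(line))   # 9
```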
Calibration
The final calibration of the image is performed immediately after the deep search. To do this, the resulting line of the search slides vertically or horizontally through the TemplateWindow to search for spatial frequencies with the same period. The results in Figure 5 are calibrated based on the system of measure, where graduations are grouped in units of 5 or 10 mm in the metric system and 1/2 or 1 inch in the imperial system. This calibration can be thought of as moving/pivoting the line gradually until it is perpendicular to the graduations, to find a spatial frequency correlated with M_f. Although the term perpendicular is used to help explain the procedure, no trigonometric function is involved. The line we are using to evaluate the system of measure is the result of a heuristic search that pivots by two pixels per iteration to identify the line with the least error in the frequency distribution. This leads to a line that approximates a 90° angle between the markers and the evaluating line. In Figure 5 the line is almost perpendicular and was the best found by the heuristic system in this case. The spatial frequencies shown in Figure 5 are evaluated as 5M_f on the left and 10M_f on the right. These two definitions are redundant, to ensure confidence in the calibrated results when possible. The smaller of the two additional spatial frequencies, 5 mm or 1/2 in, should be measured an equal number of times in the calibration step to also retain confidence in a single measure. This number is also calculated based on M_f, as the number of spatial frequencies in the deep search result over the length of 5 mm or 1/2 inch.
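Once M_f has been confirmed by the coarser 5 mm or half-inch grouping, the conversion from pixels to physical units is a simple ratio. The short sketch below illustrates it; the function names are ours, and the 9 px/mm value in the example anticipates the figure reported in Section 3.

```python
# Sketch of the final pixel-to-unit conversion given M_f (pixels per smallest
# graduation). Function names and the example value are illustrative.
def pixels_per_millimetre(m_f, system="metric"):
    if system == "metric":
        return float(m_f)            # one graduation = 1 mm
    return float(m_f) * 16.0 / 25.4  # one graduation = 1/16 in = 25.4/16 mm

def measure_mm(pixel_distance, m_f, system="metric"):
    return pixel_distance / pixels_per_millimetre(m_f, system)

# Example with the ~9 px/mm found for the test images in Section 3:
print(round(measure_mm(262, 9), 1))   # ~29.1 mm for a 262-px object edge
```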
Results
In the current system, the results of our segmentation of the ruler and the results of the heuristic method are reported separately, as depicted in Figure 6. Since MRCNN is well cited and popular for its results, we include the results of our adaptation of the model and technique for images containing rulers, but only for detection.
Segmentation Results
The result of MRCNN, specifically the instance detection, is given in terms of the object detection confidence. In our experiment we took 30 images, each containing either a white, green, or brown graduated ruler in the shot. The rulers used varied in shape and size: some had only metric graduations, some only imperial graduations, and some both metric and imperial graduations on the same ruler. In metric terms, the rulers had lengths of 5, 15, and 30 cm. The backgrounds of the images were largely unique to each image. With Mask R-CNN we were able to achieve an average confidence score of 99.97%. In terms of our system, this means we are almost certainly guaranteed a bounding box with which to start calibrating the image. This is because a bounding box is only generated by MRCNN when there is a detection score; however, if the detection score is too low, the background area will be an issue for our system, as mentioned in Section 2.1.
The heuristic method was tested on a printed USAF-1951 target, shown in Figure 7, where three reference objects of different sizes, surrounded by red boxes, were selected. The objects considered were measured from the first black stripe to the third black stripe for area, perimeter, diagonal, and length/width. Each photograph of the objects was taken with a ruler in the shot. Several shots were taken at different heights, while the rest were taken at roughly the same height. This was done to test the robustness of the search in terms of variance in the measure. We found that a picture taken very close to the reference objects induced more error. This is because we do not measure at the sub-pixel level in our method, and the system is rounding off error to the nearest pixel. The metric measurements for each object are presented in Table 1, noting that the measurements for Length/Width are the average of two measured sides. Area was calculated as the square of the average Length/Width. Ten pictures at 4032 × 3024 pixel resolution were taken of the target using an iPhone X's 12 MP f/1.8 4 mm camera. Each object was measured in pixels after the photo was taken using the "Measure" tool in ImageJ2 [40]. The measurements for each of the objects in each of the ten pictures are shown in Figure 8. The absolute error of these measurements is presented in Figure 9.
Table 1. The manual measurements of the printed target using a metric ruler. Object 1 is the largest object from Figure 7, Object 2 is the second largest object, and Object 3 is the smallest object.
Heuristic Search Calibration Results
We performed real measurements for the Area, L/W, Diagonal, and Perimeter, both in metric units and in pixels. We expected the error to be squared in the area measurements and have included it in our results for completeness. We found that, on average, the calibration measurement for the test image was 9 pixels/millimeter. Our calibrated results, including the images taken at different heights, are shown in Figures 8 and 9. For 24 of the 30 measurements for length and width, our proposed calibration method measured within 1 mm of error, with the remaining 6 having less than 2 mm of error. The averages of the 10 measurements obtained for objects 1, 2, and 3 are 2.92 cm, 2.08 cm, and 0.82 cm, respectively. Figure 9 shows that the relative error for the L/W, Diagonal, and Perimeter measurements was less than 5%, with the relative error for Area less than 6%. We found that the system performed slightly better on Objects 1 and 2 than on Object 3. As Object 3 is the smallest, with a length of 8 mm, this can be attributed to our measurements being done at the pixel level.
We believe these results are meaningful, as the general error in any systematic measuring device is plus or minus the minimum unit of measure; in our tests this was plus or minus 1 mm. In our tests, 87% of the time, results were well within this margin of error, and we believe the measurements with 2 mm of error could partially be attributed to our own error in measurement using the ImageJ2 "Measure" tool. In addition, as we have mentioned several times before, the system only samples the image at a pixel level, which could be the cause of the additional millimeter of error.
Conclusions
In this paper we proposed a new system for the automated calibration of images when a ruler is present in the scene. This is done by extracting the ruler and measuring its graduations as a spatial frequency. The system produces accurate and reproducible results, removing the tedious manual labor needed to perform metrological tasks. The robust and extensible OpenCV packages allowed us to formulate and execute the image transforms necessary to move through the pipeline of data extraction. Aside from OpenCV, we created the initial/deep search from scratch instead of applying more pre-built methods. This allowed us to fine-tune the search and consider broader scenarios. We used Mask R-CNN to produce satisfactory ruler masks; with this technique we were able to remove the majority of the uncertainty when handling rulers with widely varying backgrounds. Our system samples the image data at the pixel level, mainly to broaden the range of inputs the system can work on, namely different resolutions. When sampling at a sub-pixel level, assumptions about the recording device's parameters need to be made and manually input to the system. In order to overcome this difficulty, we trained Mask R-CNN to perform semantic segmentation on rulers, normalizing the segmented masks.
Instead of using Hough line transforms or Fourier transforms to extract the graduations on a ruler, which limits the ability to perform sub-pixel measurements, we iteratively reduce the search space of the graduations and perform a search for spatial frequencies corresponding to the ruler's graduations. This provides reproducible results on a wide variety of images. From end to end, the extracted pixel-to-metric ratio, or DPI/DPM, can be found in 5.6 s on average.
In the future we look forward to optimizing the system, further reducing the search time and space in the deep search stage. We are currently working on implementing several methods to decrease the number of search candidates that are unlikely to be in the region of the ruler's graduations. The system today stores many search results in case the heuristic model points backwards; this ultimately leads to many empty or wasted searches. Additionally, we look forward to finding a solution for halting the system early, potentially prior to performing either the initial or the deep search, skipping the deep search when possible. This would lead to faster results from our system.
"year": 2020,
"sha1": "28f3763c97fcc66c7873476dd2ebad86f9000fd2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2078-2489/11/5/259/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4485d0e8b62fb61b9aa448b24ea149e4cf880c8d",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
A Role of Eye Vergence in Covert Attention
Covert spatial attention produces biases in perceptual and neural responses in the absence of overt orienting movements. The neural mechanism that gives rise to these effects is poorly understood. Here we report the relation between fixational eye movements, namely eye vergence, and covert attention. Visual stimuli modulate the angle of eye vergence as a function of their ability to capture attention. This illustrates the relation between eye vergence and bottom-up attention. In visual and auditory cue/no-cue paradigms, the angle of vergence is greater in the cue condition than in the no-cue condition. This shows a top-down attention component. In conclusion, observations reveal a close link between covert attention and modulation in eye vergence during eye fixation. Our study suggests a basis for the use of eye vergence as a tool for measuring attention and may provide new insights into attention and perceptual disorders.
Introduction
Humans, like several other animals, have their two eyes positioned on the front of the head, which provides a single visual field. The eyes receive slightly different projections of the image because of their different positions on the head. Therefore, when looking at an object, the eyes must rotate around a vertical axis so that the projection of the image falls on the center of the retina in both eyes. Vergence refers to the simultaneous movement of both eyes in opposite directions to obtain single binocular vision. The eyes rotate towards each other (convergence) when looking at a nearby object, while for an object farther away they rotate away from each other (divergence). Vergence is therefore an important cue in depth perception. The angle of vergence (AoEV) corresponds to the angle generated when both eyes focus on one point in space (Fig. 1).
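For a fixation point straight ahead of the observer, this angle follows directly from the interpupillary distance and the viewing distance; the short sketch below illustrates the relation. The 6.3 cm interpupillary distance is an assumed typical value, and 47 cm is the screen distance used later in the Methods.

```python
# Small sketch of the vergence-angle geometry for a fixation point straight
# ahead of the observer: AoEV ~= 2*arctan(IPD / (2*d)). Values are illustrative.
import math

def vergence_angle_deg(interpupillary_distance_cm, viewing_distance_cm):
    return math.degrees(2.0 * math.atan(interpupillary_distance_cm /
                                        (2.0 * viewing_distance_cm)))

print(round(vergence_angle_deg(6.3, 47.0), 2))   # ~7.67 deg at a 47 cm screen distance
```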
Humans receive a surplus of sensory information. To cope with this, spatial attention is shifted to select relevant information at the expense of the rest. Usually, visuospatial attention moves about the environment in tandem with the eyes (overt attention). However, in the absence of overt orienting movements, attention also produces biases in perceptual and neural responses (covert attention; [1][2][3]). During eye fixation, small fixational eye movements (microsaccades) relate to covert attention [4,5], but see [6]. These findings corroborate the close connection of the oculomotor system with visual attention.
Here we report another type of fixational eye movement, namely eye vergence, that relates to covert attention. We show that during gaze fixation visual stimuli modulate the AoEV as a function of their ability to capture attention. The vergence angle increases after visual stimulation, and this enhancement correlates with bottom-up and top-down induced shifts in visuospatial attention. The start of the modulation in eye vergence is locked to the onset of the stimulus, while the size of the angle of eye vergence depends on the attentional load that the stimulus receives or attracts.
We argue that our observations have implications for theories of attention [7][8][9][10][11][12][13][14], and support a relationship between bottom-up and top-down attention, which are associated with segregated neuronal circuits [15,16]. Finally, our study shows that there is a basis for using eye vergence as a tool for measuring attention, and may provide new insights into attention and perceptual disorders.
Results
We tested subjects in a visual cue/no-cue paradigm (Experiment 1) and measured the angle of eye vergence (AoEV). Once subjects had fixated on a central cross for 300 ms, 8 vertical bars (= possible targets) appeared around it (Fig. 2a). Subjects were given a valid cue (a small central line pointing to the target's position) in 50% of the trials, to inform them about the target location. In the other half of the trials, a no-cue stimulus (a central cross) was presented. Then one (= target) of the 8 vertical bars was tilted (20°) for 100 ms and subjects had to identify the direction of the tilt. Faster reaction times (RT) were found in the cue condition than in the no-cue condition (mean ± sem RT: 587 ± 8.2 ms vs. 688.8 ± 9.3 ms, t-test, p < 0.01, df = 611). Detection performance was also slightly better when the target was cued (92.4% vs. 84.6%).
The positions of both eyes were simultaneously monitored during the task to compute the AoEV. Surprisingly, the size of the AoEV was not constant, but was affected by visual stimulation. Once a visual stimulus had been presented (i.e. the presentation of the central fixation spot, the array of vertical bars and the cue/no-cue stimulus, see Fig. 2a), the AoEV transiently increased with a mean velocity of 0.47 ± 0.06°/s (Fig. 2b). To find out whether the fluctuations in the AoEV relate to shifts of attention, we compared for all time samples the average strength of the modulation in AoEV for the cue and no-cue conditions. We found that the modulation of the AoEV was significantly (t-test, p < 0.01, df = 611) greater after cue onset (Fig. 2b, blue points). This difference became significant at 290 ms. The maximum of the average (non-normalized) AoEV was 350% higher in the cue condition than in the no-cue condition (mean ± sem: maximum of 0.12° ± 0.0068 at 612 ms versus 0.035° ± 0.0066 at 574 ms). The greater increase in the size of the AoEV in the cue condition was found for all target positions (Fig. 2d), although it was slightly higher for the horizontally located targets. As the increase in the AoEV occurred for all target locations, the observed effects do not reflect the nature of eye vergence (i.e. horizontal eye movements).
To support the idea that the greater increase in the AoEV after cue onset represents a cognitive mechanism, we analyzed the AoEV in subjects who viewed the same visual stimulation sequence as in Experiment 1, but without any instructed task (Experiment 2). The results show similar modulated responses in the AoEV over time, although the baseline was lower (Fig. 2c, upper panel). However, the modulation of the AoEV was as strong (t-test, for all time samples p > 0.28, df = 160) after the cue stimulus as after the no-cue stimulus (Fig. 2c, lower panel). This demonstrates that the cue-induced change in the AoEV depends on the subject's engagement in the task.
Pupil size, which relates to attention [17][18][19], might influence the measurement of the AoEV. We therefore analyzed pupil size. The results show that the size of the pupil increased during the trial more after cue onset than after no-cue onset (Fig. 2e). The difference in pupil size between the cue and no-cue conditions occurred at 572 ms (p < 0.01, t-test, df = 611, blue dots in Fig. 2e), which is about 200 ms later than the cue vs no-cue difference in AoEV (vertical dotted line in Fig. 2e). This supports the proposed relation of pupil size with shifts in visuospatial attention. We also tested for microsaccades, which relate to attentional shifts [4,5], but see [6]. Our results show that the differential modulation of the AoEV in the cue and no-cue conditions is independent of the occurrence of microsaccades (Fig. 2f). Furthermore, we tested the effects of target eccentricity on the modulation of vergence. Targets were located at 3.5°, 7°, and 14° from the fixation cross. As before, the vergence angle increased after cue onset. The increase appeared to be weaker for targets at 3.5° than for more peripheral targets. However, no significant differences in the modulation strength between the conditions were observed (t-test for all time samples; 3.5° vs 7°, df = 172; 3.5° vs 14°, df = 187; 7° vs 14°, df = 181; for all p > 0.05; Fig. 3).
A possible confounding factor is that the cue and no-cue stimuli are slightly different, which could produce a differential effect on fixational eye movements, especially as the cue/no-cue stimuli were presented at the fovea. To exclude this effect of foveal stimulation, and to provide further evidence for the role of eye vergence in shifts of visuospatial attention, we conducted a visual discrimination experiment (Experiment 3) with two consecutive auditory cues. Auditory cues (pronounced in the subjects' native tongue) were numbers from 0 (no-cue) to 8 (the numbers 1-8 indicated one of the eight possible peripheral positions; Fig. 4a). Participants could practice until they correctly associated the numbers with the stimulus positions. In 89% of the trials, the first cue was given and was either valid (80%) or invalid (20%). In the other trials (11%), a no-cue was presented. The second stimulus was a valid cue in 89% and a no-cue in 11% of the trials. Therefore, there were five conditions: 1) Cue→CueSame, 2) Cue→CueDiff, 3) Cue→NoCue, 4) NoCue→NoCue, 5) NoCue→Cue. Overall correct performance was 91%. Reaction times were (mean ± sem): Cue→CueSame, 487 ± 5.14 ms; Cue→CueDiff, 520 ± 13.73 ms; Cue→NoCue, 606 ± 32.55 ms; NoCue→NoCue, 579 ± 19.25 ms; NoCue→Cue, 479 ± 27.23 ms. In the Cue→CueDiff (i.e. the first cue is an invalid cue) and NoCue→Cue conditions, we expect a shift of visuospatial attention after the second cue. In the Cue→CueSame and NoCue→NoCue conditions, attention is not shifted after the second cue. In the Cue→NoCue condition, no shift of attention to a particular local region is required, as in the no-cue condition in Experiment 1. This interpretation is depicted in the lower panel of Figure 4c.
The eye recording results show that initially after cue presentation the AoEV decreases. This is because the AoEV had previously increased with the presentation of the 8 possible targets, and at the time of the cue stimulus eye vergence was still returning towards baseline levels. However, compared to the decrease in the no-cue condition, the decrease in the cue condition is much smaller (black traces in Fig. 4b; compare the first three black bars to the last two in Fig. 4c). The difference in AoEV between the cue and no-cue conditions was less pronounced, probably because of the different task and cue types (auditory versus visual) that were used. After the presentation of the second auditory cue, when attention is expected to shift in the Cue→CueDiff and NoCue→Cue conditions, we observed a clear increase in the size of the AoEV (Fig. 4b,c; Table 1). In contrast, no increase in the size of the AoEV was noticed in the Cue→CueSame and Cue→NoCue conditions. In the NoCue→NoCue condition, a difference in AoEV was observed because of the lack of modulation of the AoEV after the first no-cue stimulus (the AoEV continued to decrease as a consequence of the increase in the AoEV that resulted from the previous presentation of the array of vertical bars; see also Fig. 2b) and the second no-cue stimulus. These findings demonstrate that when an auditory cue shifts visuospatial attention to a new target, the AoEV increases.
To better understand the relation between the AoEV and visuospatial shifts of attention, we tested subjects in a task (Experiment 4) that was identical to the first visual task (Experiment 1), except for the fact that the time (stimulus onset asynchrony, SOA) between cue removal and target onset varied (Fig. 5a,b). This is in agreement with the onset found in Experiment 1. The increase in the AoEV is thus coupled to the onset of the cue and not to the time of the presentation of the target. We then compared the strength of the modulation of the AoEV in the cue and no-cue conditions (Fig. 5c). The comparison shows that the mean difference in the AoEV between the cue and no-cue conditions occurs for SOAs of 150 ms and longer, but not for shorter SOAs (Fig. 5d, Table 2; t-test, df = 3413, p < 0.01 for SOAs of 150, 200 and 300 ms). This result is mimicked by the behavioral RT. The RT were similar for cue and no-cue trials with short SOAs (50 and 100 ms), but for longer SOAs (150, 200 and 300 ms) the RT was faster (t-test, df = 3413, p < 0.05 in the SOA 10 ms condition; p < 0.01 in the SOA 150, 200 and 300 ms conditions; Fig. 5d). Hence, we consider that attention shifts around 250 ms (i.e. 100 ms target duration plus 150 ms SOA) after target onset and is accompanied by an increase in the AoEV. The SOA 10 ms condition shows some peculiarities. Here the modulation in eye vergence is present but the overall level is higher than in the other SOA conditions (Fig. 5b). Also, the behavioral reaction times for SOA 10 ms differ between the cue and no-cue conditions (Fig. 5d).
To probe for a relation between the modulation of AoEV and bottom-up induced shifts in spatial attention, we tested subjects in a detection task (Experiment 5). In this task (Fig. 6a), one of the vertical bars (= target) was briefly tilted at various angles (0° [no change], 5°, 15°, 30°, 60° and 90°), thereby modulating the saliency of the target as evidenced by performance (Fig. 6b). In the 0° case, attention will not be shifted to a particular place as there is no target. Detection performance and reaction times (mean ± sem) were: 5° = 5.1%, 353 ± 42.5 ms; 15° = 81.3%, 330 ± 10.7 ms; 30° = 100%, 283 ± 6.8 ms; 60° = 99.1%, 274 ± 7.4 ms; and 90° = 99.6%, 273 ± 6.7 ms. We then analyzed the strength of the modulation of AoEV after the stimulus onset. Again, the AoEV decreased because of the previous stimulation (see Fig. 2b), but around 300 ms after target onset, we observed an increase in the size of the AoEV.
This increase in the AoEV was a function of stimulus orientation (Fig. 6c-e; Table 3). It was most pronounced when the stimulus contrast was strongest and gradually decreased for lower contrast levels. The strength of the AoEV was positively correlated (R² = 0.89) with stimulus contrast (Fig. 6e). In comparison to the 0° condition, we observed a significant difference in the size of AoEV (t-test, p < 0.05 for the conditions 15°, df = 419, and 30°, df = 452; p < 0.01 for the conditions 60°, df = 455, and 90°, df = 457). The difference in AoEV in the 15° condition was, however, only significant for detected targets and not for undetected targets (Fig. 6d). For the lowest stimulus contrast (5°), which was below the detection threshold (see Fig. 6b), the size of the AoEV did not significantly differ from the 0° (i.e. no change) condition (Fig. 6d). In conclusion, these observations show that the increase in the size of the AoEV correlates with visual detection and bottom-up induced visual attention.
Discussion
In this study, we report the relation between the angle of eye vergence (AoEV) and covert attention. We observed that the AoEV increases after the presentation of a cue or a peripheral stimulus while fixation is maintained at the center point. At first sight, a logical explanation for our results is to consider the distance of the peripheral target location. After the presentation of the cue, subjects focus on the peripheral target (while maintaining fixation at the central point), which is slightly further away from the eyes than the central fixation point. The eyes then diverge to a more distant plane. In this situation the AoEV should decrease after cue presentation. However, opposite to such an expected reduction in AoEV, we found an increase in vergence angle. In addition, the peak modulation in AoEV occurred well before target onset, and at target onset, when one would expect the subject to focus on the target, the AoEV decreased towards the initial values. Also, the AoEV was modulated after the onset of the fixation point (see Fig. 2b), when no peripheral targets had yet been presented. Furthermore, in the Cue→CueDiff condition of the third experiment, attention shifted to different target locations. We observed that in this condition the AoEV changed despite the identical distances of the targets to the fixation point. Therefore we believe that the distance to the target cannot explain the modulation in vergence. This conclusion is supported by the finding that target eccentricity did not correlate with vergence modulation. Also, the results of the last experiment, where vergence modulation correlates with stimulus contrast and perception, indicate that distance is not the main explanatory factor. Moreover, the observed size of the vergence modulation is about one factor lower than the vergence changes expected from depth. Finally, our findings indicate that the changes in pupil size do not cause the observed changes in AoEV, as the temporal modulation in pupil size did not correspond to the temporal modulation in AoEV. For instance, pupil size showed a steady increase while the modulation in vergence angle fluctuated over time (see Fig. 2e).
Instead, we speculate that vergence modulation reflects a shift in visual attention. As the change in the AoEV during fixation represents actual eye movements, our results have consequences for the dissociation between eye movements and covert attention [10,14,20] and for theories of attention [7][8][9][11][12][13] in general. Based on the results from the first and last experiments, we also propose that eye vergence links bottom-up and top-down attentional mechanisms, which are believed to be associated with segregated neuronal circuits [15]. In line with our suggestion is recent evidence showing that the frontal cortex, where attention originates [10,[21][22][23], controls eye vergence [24]. In addition, a recent study demonstrates that the frontal cortex is involved both in top-down attention and in bottom-up attention [25]. However, vergence modulation is observed for all target locations and is, unlike covert attention, not spatially specific. This means that the possible effect of vergence on sensory processing has to become selective for one single target location (otherwise, in the no-cue condition, vergence modulation should be strong as well). So, further studies are needed to elucidate the possible role of eye vergence modulation in attention.
Disparity neurons in the primary visual cortex can detect the existence of disparity in their input from the eyes. These neurons are believed to provide depth information about the visual scene. Our current findings imply that retinal disparity changes during shifts of attention, and accordingly so does the activity of disparity neurons. We therefore speculate that disparity neurons have, besides a role in depth perception, also a role in attention, and form part of the attention system in the brain. This suggestion is in accordance with findings of a disconnection of vergence movements and depth perception [26,27] and with an unexpected specialization for horizontal disparity in the primate primary visual cortex [28]. Moreover, recent evidence shows that ocular dominance maps may serve as a scaffold for the formation of disparity maps [29]. Ocular dominance columns in the visual cortex are formed during early ontogenetic stages and can be modified after birth during the sensitive period. In our study we provide indirect evidence for a connection between shifts of attention and disparity neurons. If true, then our findings suggest that precise cortical developmental organization, i.e. correct axonal termination patterns and neuronal positioning into ocular columns, is beneficial for subsequent attentional processing of incoming sensory information [30].
Participants
The study was approved by the Ethics committee of the Faculty of Psychology of the University of Barcelona in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki. We tested subjects in several visual detection tasks (see Methods).
Twelve participants took part in Experiment 1 and Experiment 2 (1 man and 11 women, age 22.9 ± 1), of whom 4 participated in the experiment with different target eccentricities. Six participants took part in Experiment 3 (1 man and 5 women, age 25.3 ± 1.6). Six participants performed Experiment 4 (1 man and 5 women, age 25.3 ± 1.6) and 4 participants (all women, age 23.5 ± 2.4) took part in Experiment 5. All participants had normal or corrected-to-normal vision. Participants received course credits or money for taking part in the experiment. We obtained written informed consent from all participants involved in our study.
Apparatus
We used in-house C++ software and EventIDE (Okazolab Ltd, London, UK) for presenting the stimuli. The display resolution was 1024 × 768 pixels. The participants' position of gaze was monitored using a binocular EyeLink II eye-tracking system at 500 Hz (SR Research System, Ontario, Canada). To compensate for any head movements, we used individually molded bite bars (UHCOTECH Head Spot, University of Houston, Texas, USA).
Figure 4. Visual search task combined with auditory cues (Experiment 3) and modulation in eye vergence. A. Illustration of the auditory task. Symbols denote the cue. Two consecutive cues are given to the subjects. B. Average size of AoEV after the onset of the 1st (black traces) and 2nd (colored traces) auditory cue for individual subjects. C. Comparison between the slopes of the modulation (taken from the windows in B) of AoEV after the 1st and 2nd auditory cue. Grey panels below illustrate the shift in visuospatial attention (red circles) for each condition. Small circles indicate focused attention to a single target, while a large circle indicates global or more spread attention to all possible target locations. Numbers indicate the size and position of the attention window after the 1st (1) and 2nd (2).
Procedure
Participants sat in a dimly lit (9 cd/m2) room, in front of the PC monitor at a distance of 47 cm. The eye tracking equipment was calibrated for each participant at the beginning of each set (standard 9 point calibration). Before starting the task, participants could practice with some training trials.
Experiment 1. Visual cue/no-cue experiment
The experiment consisted of 4 sets with 32 trials each (128 trials in total). After eye calibration, observers were required to fixate a central cross (5 × 5 pixels). After 300 ms, 8 peripheral bars (3 × 11 pixels, eccentricity of 7.5°) appeared. In a separate control experiment of 2 sets we used eccentricities of 3.5°, 7.0° and 14°. After 1000 ms, a cue (a red line pointing to one of the peripheral positions, 3 × 13 pixels) or a no-cue (a red cross, 13 × 13 pixels) stimulus appeared for 100 ms in the central position. After an additional period of 1000 ms, one of the peripheral bars briefly (100 ms) changed its orientation (a tilt of 20° to the left or right). Participants had to respond by pressing a button as fast and accurately as possible to indicate whether the bar tilted to the left or to the right. Feedback was not given to the observers.
Experiment 2. Visual experiment without task
In this experiment, the same subjects viewed the same visual stimulus sequence as in Experiment 1. However, the subjects were instructed to fixate the central cross without performing any task (1 set of 32 trials).
Experiment 3. Auditory cue/no-cue experiment
The auditory experiment consisted of 360 trials. After eye calibration, observers were required to fixate a central cross (5 × 5 pixels). After 300 ms, 8 peripheral bars (eccentricity of 7.5°) appeared and 100 ms later, participants listened to an auditory stimulus (a number from 0 to 8 in Catalan). Each number (cue) indicated a peripheral bar position, except for number 0 (no-cue). The cues and no-cue were presented for 500 ms. After 800 ms, a second auditory cue was played, which was always valid. As before, it could be a number from 0 to 8. In 80% of cases, the first and the second auditory stimuli were the same. The percentage of trials for each condition was: 1) Cue→CueSame (71.1%), 2) Cue→CueDiff (15.6%), 3) Cue→NoCue (2.2%), 4) NoCue→NoCue (8.9%), 5) NoCue→Cue (2.2%). After 500 ms, one of the peripheral bars briefly changed its orientation (±20° for 50 ms). Participants had to indicate as fast and accurately as possible whether it tilted to the left or to the right. Feedback was not given to the observers.
Experiment 4. Visual experiment with different delays (SOA)
This experiment was the same as Experiment 1 except that the time between cue onset and target onset randomly varied. The stimulus onset asynchronies (SOA) used were 10, 50, 100, 150, 200 and 300 ms. Subjects performed 384 trials (64 trials for each condition).
Experiment 5. Visual contrast experiment
This experiment consisted of 384 trials (64 per condition). After eye calibration, observers were required to fixate a central cross (5 × 5 pixels). After 300 ms, 8 peripheral bars (eccentricity of 7.5°) appeared and 500 ms later, one of the peripheral bars briefly changed its orientation. The tilt could be to the left or to the right. We used 0° (no tilt), 5°, 15°, 30°, 60° and 90°. Participants had to respond by pressing a button as fast and accurately as possible if they detected the tilt.
Data analysis
We calculated the angle of eye vergence by transforming the HRef recordings (X and Y coordinates of both eyes), provided by the EyeLink II software, into angular units through algorithms designed to calculate the 3-D components of both eye gaze vectors. The transformation was performed taking into account the real distance of the screen to the observer and the actual inter-pupil distance. The AoEV is computed at the point at which the intersection of both eye gaze vectors produces the least error. For each subject, the eye vergence data were normalized by dividing the raw data by the maximum value of the recorded samples from fixation onset to target onset. Only correct trials were analyzed, except in Experiment 5. Detection of microsaccades was done as described in [31].
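A simplified sketch of this computation is given below: the vergence angle is taken as the angle between the two eyes' 3-D gaze direction vectors, and traces are normalized per subject as described. The full method additionally finds the least-error intersection point of the two gaze vectors, which is omitted here; coordinates and units in the example are illustrative.

```python
# Simplified sketch of computing the vergence angle from binocular gaze data.
import numpy as np

def vergence_angle_deg(left_eye_pos, left_gaze_point, right_eye_pos, right_gaze_point):
    v_left = np.asarray(left_gaze_point, float) - np.asarray(left_eye_pos, float)
    v_right = np.asarray(right_gaze_point, float) - np.asarray(right_eye_pos, float)
    v_left /= np.linalg.norm(v_left)
    v_right /= np.linalg.norm(v_right)
    cosine = np.clip(np.dot(v_left, v_right), -1.0, 1.0)
    return np.degrees(np.arccos(cosine))     # angle between the two gaze vectors

def normalise_trial(vergence_samples):
    # Per-subject normalisation: divide by the maximum value recorded
    # between fixation onset and target onset.
    samples = np.asarray(vergence_samples, float)
    return samples / samples.max()

# Example: eyes 6.3 cm apart, both fixating a point 47 cm straight ahead.
print(round(vergence_angle_deg([-3.15, 0, 0], [0, 0, 47], [3.15, 0, 0], [0, 0, 47]), 2))
```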
For the calculation of the mean AoEV in Experiments 1 (including the control experiments) and 2, we selected a window of 100 ms (1850 ms - 1950 ms). This window was chosen because for all subjects it was centered on the maximum peak of AoEV after cue/no-cue onset. For the other experiments, we selected a time window per trial and fitted a linear regression line by the least squares method through the sampled data points within the window. The windows were 500 ms/100 ms starting after audio offset/target onset in Experiments 3/4. In Experiment 5, a 100 ms window was taken 300 ms from target onset. These windows were chosen because for all subjects they coincided with the start of the cue- or target-induced modulation in eye vergence, when visually inspected.
Figure 5. Visual search task (Experiment 4) with different SOAs and modulation in eye vergence. A. Illustration of the task. B. Average modulation across all subjects in AoEV separately for the different conditions (SOA). C. Average modulation in AoEV across all conditions. Colored vertical bars indicate the window of target presentation. The blue shaded area denotes a significant (p < 0.01) difference between the cue and no-cue conditions. D. Slopes of the modulation of AoEV and mean reaction times for the cue and no-cue conditions of the different SOAs. Bars represent the mean slopes, calculated for each condition (windows of 100 ms after target onset). Asterisks denote significant (* = p < 0.05, ** = p < 0.01, t-test) differences. Error bars are SEM. doi:10.1371/journal.pone.0052955.g005
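The per-trial slope estimate described above can be sketched as a least-squares line fit inside the analysis window; the window placement follows the text, while the variable names and the toy trace in the example are illustrative.

```python
# Sketch of the per-trial slope estimate: a least-squares line fitted to the
# vergence samples inside a fixed analysis window.
import numpy as np

def window_slope(time_ms, vergence_deg, start_ms, length_ms):
    time_ms = np.asarray(time_ms, float)
    vergence_deg = np.asarray(vergence_deg, float)
    inside = (time_ms >= start_ms) & (time_ms < start_ms + length_ms)
    slope, _intercept = np.polyfit(time_ms[inside], vergence_deg[inside], deg=1)
    return slope    # degrees of vergence per millisecond

# Example: 100 ms window starting 300 ms after target onset (as in Experiment 5).
t = np.arange(0, 1000, 2.0)                   # 500 Hz samples
trace = 0.0001 * np.clip(t - 300, 0, None)    # toy ramp beginning at 300 ms
print(window_slope(t, trace, start_ms=300, length_ms=100))   # ~0.0001 deg/ms
```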
"year": 2013,
"sha1": "0a1edcf2ce63642d5db68ec905501c4363f98e32",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0052955&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a1edcf2ce63642d5db68ec905501c4363f98e32",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Stacking Sequence Dependence of Graphene Layers on SiC(000-1) - Experimental and Theoretical Investigation
Different stacking sequences of graphene are investigated using a combination of experimental and theoretical methods. High-resolution transmission electron microscopy (HRTEM) of several layers of graphene formed on the C-terminated 4H-SiC(0001) surface was used to determine the stacking sequence and the interlayer distances. These data prove that three metastable configurations exist: ABAB, AAAA, and ABCA. In accordance with these findings, those three cases were considered theoretically, using Density Functional Theory calculations, comparing graphene sheets that are freestanding and positioned on the SiC(0001) substrate. The total energies were calculated, the most stable structure was identified, and the electronic band structure was obtained. The band structure of the four graphene layers depends crucially on the stacking: for the ABAB and ABCA stackings, the bands close to the K point are characterized by a hyperbolic dispersion relation, while for the AA stacking the dispersion in this region is linear, similar to that of a single graphene layer. It was also shown that the linear dispersion relation is preserved in the presence of the SiC substrate, and also for different distances between adjacent carbon layers.
Graphene is one of the most extensively investigated prospective materials at the moment. Its unique electronic properties, promising very important applications in electronics, have attracted the interest of many scientific groups. The massless fermion dispersion, observed in the vicinity of the K point in the energy range close to the Fermi surface, and the related, experimentally determined √B dependence of the Landau levels, were reported from investigations of a single, freestanding graphene layer [1][2][3]. In order to implement graphene in the electronics industry, a method for the synthesis of reproducible, good quality graphene layers, stabilized on a solid substrate, has yet to be developed. The most natural one is the deposition of epitaxial graphene on the SiC substrate, either on the Si- or C-terminated principal faces of the SiC crystal, which is currently considered one of the most promising methods to attain this goal [4][5][6][7][8]. A recent work [9] shows that the method is potentially able to furnish good quality devices.
Graphene multilayer structures have been studied both experimentally and theoretically. From these studies, it is recognized that the electronic properties of multilayer graphene are strongly dependent on the stacking sequence. Both the Bernal (AB) and rhombohedral (ABC) stacking sequences have been described theoretically [10,11] and experimentally [12,13]. It was also shown that for rhombohedral (ABC) stacking the band gap can be induced by an external electric field [14]. The description of the AA structure was often overlooked because it is energetically unfavourable [15]. However, very recently, Norimatsu and Kusunoki [13] have observed the existence of the AA stacking sequence around the step on the Si-face. We have undertaken systematic investigations of four-layer graphene synthesized on the C-terminated 4H-SiC(0001) surface, such as the one which has been investigated recently [16]. By optical inspection, three different regions were identified, which were then subjected to HRTEM scrutiny. The results are presented in Fig. 1. Three different stacking sequences are identified, showing the existence of the AA stacking sequence as well as the Bernal (AB) and rhombohedral (ABC) sequences, independent of the presence of steps. Thus these three structures could be synthesized on the C-terminated 4H-SiC(0001) surface, proving that these structures are metastable and could be obtained by the appropriate selection of the thermodynamic parameters of SiC annealing [6]. These structures could persist for macroscopically long times.
The HRTEM investigations were also used to determine the distances between the graphene sheets. The example of such an investigation, shown in Fig. 1, presents the determination of the carbon interlayer distances. The high precision measurements prove that the bottommost carbon layer is separated by a relatively large distance of 3.0 Å from the SiC surface, which is close to the interlayer spacing in graphite, a situation which is standard for graphene on the C-terminated face [16]. Therefore these graphene layers are not strongly bound to the underlying SiC surface, in contrast to the layers grown on the opposite, Si-terminated surface [6]. The following carbon sheet is separated by a distance of 3.7 Å, while the remaining one by 3.4 Å, i.e. essentially equal to the carbon sheet separation in graphite. These experimentally determined graphene interlayer distances were used in the DFT ab initio calculations described below. In the present work, we employ ab initio density functional theory, as implemented in the VASP code [17][18][19][20], to investigate the graphene-SiC interface. We have used the projector augmented wave (PAW) approach [21] in its variant available in the VASP package [20]. The local spin density approximation (LSDA) was applied for the exchange-correlation functional. The plane wave cutoff energy was set to 500 eV. The Monkhorst-Pack k-point mesh was set to 7 × 7 × 1. The 4H-SiC(0001) supercell was constructed using 8 bilayers of Si-C. Four graphene layers were located at the top of the SiC(0001) surface, at the separations determined experimentally (see Fig. 1). Since the graphene interlayer distance results from the van der Waals interaction, which, as a rule, cannot be obtained properly from DFT calculations, the combined experimental-theoretical approach is the only possible way to obtain the properties of multilayer graphene with high precision. In the present calculations, the slab replicas are separated by a space of about 17 Å. An elastic adjustment was performed at the interface due to the lattice mismatch between SiC and graphite, such that the two top SiC layers and the graphene layer were relaxed in the plane. The conjugate gradient algorithm was used in the relaxation of the atomic positions. The model, a √3 × √3 R30° SiC unit cell with a fitted graphene layer (GL) [22][23][24], was used. The direct DFT calculations indicate that for freestanding graphene layers, the Bernal (ABAB) stacking sequence is the most stable, having a total energy equal to -80.824 eV, compared with the energy of both the rhombohedral (ABCA), -80.836 eV, and the (AAAA), -80.782 eV, stackings, which is consistent with earlier works [14]. The electron energy bands are analyzed using a comparison between the results obtained for four graphene layers, freestanding and deposited on the 4H-SiC(0001) surface, as presented in Fig. 2. The three stacking configurations, AAAA, ABAB and ABCA, exhibit different electronic structures due to their different symmetries. For the case of the AAAA configuration, a Dirac-type dispersion relation is observed. This is in agreement with a simple tight-binding argument presented recently by Gonzalez et al. [25]. As expected, the band structure is different for both the Bernal (ABAB) and rhombohedral (ABCA) structures. In the case of the Bernal (ABAB) structure, in accordance with the earlier results, both the conduction and valence bands are hyperbolic [12].
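The reported settings (PAW potentials, LSDA, 500 eV plane-wave cutoff, 7 × 7 × 1 Monkhorst-Pack mesh) correspond to VASP input files of roughly the following form. The smearing and convergence tags are not given in the text and are assumptions, and the LDA exchange-correlation enters through the choice of LDA PAW POTCAR files.

```python
# Sketch: write INCAR and KPOINTS files matching the reported settings.
# ISMEAR, SIGMA and EDIFF are assumptions; they are not stated in the paper.
incar = """\
SYSTEM = 4 graphene layers on C-terminated 4H-SiC
ENCUT  = 500        ! plane-wave cutoff (eV)
ISMEAR = 0          ! Gaussian smearing (assumed)
SIGMA  = 0.05       ! smearing width in eV (assumed)
EDIFF  = 1E-5       ! electronic convergence (assumed)
IBRION = 2          ! conjugate-gradient ionic relaxation
ISIF   = 2          ! relax ionic positions only
"""

kpoints = """\
Monkhorst-Pack mesh for the (sqrt3 x sqrt3)R30 SiC surface cell
0
Monkhorst-Pack
7 7 1
0 0 0
"""

for name, text in (("INCAR", incar), ("KPOINTS", kpoints)):
    with open(name, "w") as f:
        f.write(text)
```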
The zero-bandgap structure has the Fermi energy located at the common point of the maximum of the valence band and the minimum of the conduction band. A slightly different picture is obtained for the freestanding rhombohedral (ABCA) sequence. The hyperbolic dependence, similar to that observed for the Bernal stacking, is preserved. The two bands constitute a zero-bandgap structure at the K point. Other bands are collapsed at the K point in the so-called "wizard-hat" shape [12]. Due to the lower crystallographic symmetry of the rhombohedral structure, they have the energy minimum shifted away from the K point (see Fig. 2).
The influence of the 4H-SiC substrate is different in these three cases. In the case of the (AAAA) stacking, the presence of the SiC substrate amounts to a mere addition of extra energy bands, with their energies located within the valence band energies of graphene. In the case of Bernal stacking, the influence of the substrate leads to a slight modification of the band structure close to the K point. Thus the epitaxial graphene on the SiC(0001) surface is characterized by an essentially identical band structure for both the (AAAA) and Bernal (ABAB) stackings with respect to the freestanding graphene in these configurations, which indicates a relatively weak coupling of the graphene to the SiC surface, as suggested by the relatively large distance measured by the HRTEM (see Fig. 1). In contrast to that, in the case of rhombohedral stacking, the structure is most affected, leading to the opening of a gap and the creation of a double minimum in the conduction band. The second conduction band is concave and is not at the Fermi level (see Fig. 2). The first valence band, which is curved up at the K point, is associated with the SiC substrate influence.
In order to explain why the symmetry of the systems so strongly affects the electronic structure, we have calculated the electron densities between the first two freestanding graphene layers in the AAAA, ABAB and ABCA stacking sequences (Fig. 2, left). In order to compare the electron density distribution in these three cases, we have used a consistent scale. For the AAAA stacking sequence, the areas of increased electron density (red pattern) are arranged in a honeycomb pattern. This electron distribution pattern is similar to that of an isolated graphene layer. Indeed, such a "highway" for electrons results in linear dispersion bands. The situation is quite different for graphene layers arranged into the ABAB and ABCA sequences. The electron distributions for these two cases are similar and they form isolated islands of increased electron density (red pattern). This different electron configuration gives a hyperbolic dispersion of the HOMO and LUMO bands (see Fig. 2, left).
In order to determine the properties of the graphene AA stacking sequence, bi-, tri- and four-layer freestanding graphene structures were simulated using DFT calculations. The resulting band structure is shown in Fig. 3. From these results, it follows that the band structure is qualitatively similar for various numbers of layers. For all cases, the dispersion relation is linear, with a shift arising from the overlap of the adjacent carbon layers. For a pair of carbon layers, the overlap of p_z orbitals is changed by a change of the interlayer distance (see Fig. 3, top), resulting in a different mutual shift of the two bands. As shown in Fig. 3 (bottom), for an odd number of graphene layers, one of the Dirac cones is exactly at the K point and is crossed by the Fermi level, as in the isolated graphene layer. For an even number of layers, the intersections of the bands are located symmetrically in the vicinity of the K point.
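A simple nearest-neighbour tight-binding picture (an illustration only, not the DFT calculations reported above) shows why the layer count merely shifts the Dirac cones in AA stacking: the interlayer coupling forms an open chain whose eigenvalues 2t⊥cos(mπ/(N+1)) shift otherwise independent monolayer cones. The hopping value used below is an assumed order-of-magnitude number from the literature.

```python
import numpy as np

# Minimal tight-binding sketch (illustrative assumption, not the paper's DFT):
# in AA stacking each layer couples to its neighbours through a vertical hopping
# t_perp, so the N-layer spectrum separates into N monolayer cones shifted by
# the eigenvalues of the interlayer chain.
T_PERP = 0.36  # eV, assumed interlayer hopping (order of magnitude from literature)

def cone_shifts(n_layers, t_perp=T_PERP):
    """Eigenvalues of the N-site open chain, i.e. 2*t_perp*cos(m*pi/(N+1))."""
    chain = np.zeros((n_layers, n_layers))
    for i in range(n_layers - 1):
        chain[i, i + 1] = chain[i + 1, i] = t_perp
    return np.sort(np.linalg.eigvalsh(chain))

for n in (2, 3, 4):
    shifts = cone_shifts(n)
    print(f"{n} layers: cone shifts (eV) = {np.round(shifts, 3)}")
    # An odd layer count always contains a zero shift, i.e. one unshifted
    # Dirac cone crossed by the Fermi level, consistent with the text above.
```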
In this letter, using the HRTEM images, we have demonstrated that epitaxial graphene grown on the SiC substrate can exist in three metastable configurations, i.e. the Bernal (ABAB), the rhombohedral (ABCA) and the AAAA stacking sequences. It was shown that the AAAA stacking, by virtue of its symmetry, has pronouncedly different electronic properties, with a dispersion that is linear in the vicinity of the K point. The other two configurations behave differently, adopting a hyperbolic dispersion relation in the vicinity of the K point, and both are affected by the presence of the SiC substrate. In contrast to that, the AAAA-stacked graphene preserves its linear dispersion relations in the presence of the SiC substrate. It is also demonstrated that a change of the distance between carbon sheets amounts to a mere shift of the band, still preserving its linear character. Thus, the obtained results open the route to mechanically stable, fast electronic devices using the Dirac dispersion relation typical for the AAAA stacking. This profound difference in electronic properties between the AAAA and the other, ABCA and ABAB, stackings was confirmed by direct plots of electron densities. It was observed that the charge pattern for the AAAA stacking is similar to that of an isolated graphene layer. It was also shown that for the ABAB and ABCA structures the charge accumulation regions are arranged into isolated islands. It is expected that the honeycomb pattern is more favourable for electron traffic than the insular one, which confirms the advantage of AAAA graphene as a material for future fast electronic devices. This work has been partially supported by the Polish Ministry of Science and Higher Education project 670/N-ESF-EPI/2010/0 within the EuroGRAPHENE programme of the European Science Foundation. The authors would like to thank the Faculty of Materials Science and Engineering of Warsaw University of Technology for using the JEOL JEM 3010 transmission electron microscope. The calculations reported in this paper were performed using the computing facilities of the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) of the University of Warsaw. The research published in this paper was supported by the Polish Ministry of Science and Higher Education under the grant no. UDA-POIG.01.03.01-14-155/09-00. | 2010-06-28T08:50:17.000Z | 2010-06-05T00:00:00.000 | {
"year": 2010,
"sha1": "6fc96b4eb5d4832aa70d7ca2c380bc8818aa4502",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1006.1040",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6fc96b4eb5d4832aa70d7ca2c380bc8818aa4502",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
218951271 | pes2o/s2orc | v3-fos-license | Spatial Variation of Temperature and Rainfall Trends in Kabul River Basin
Knowledge of temperature and rainfall periodicity is needed for urban and rural land-use and infrastructure planning, and for flood protection. In this study, temperature and rainfall data from the Climatic Research Unit Time Series version 4.03 (CRU TS4.03) were downloaded and inverse distance weighted (IDW) interpolations were made to analyze temperature and precipitation variations in the Kabul river basin (KRB). Interpolation maps were produced using ArcMap 10.4.1. The analysis covers the period from 2004 to 2018. Over this 15-year period, the annual mean temperature ranges from a minimum of -1.2 °C in the upper part of the basin to a maximum of 19 °C in the part of the study basin that lies in Pakistan. Annual rainfall varies from 380 mm to 793 mm over the 15 years. Given the increasing trend of both temperature and rainfall in the KRB, suitable methods should be applied for water management and flood risk reduction. Keywords— Temperature, Kabul River Basin, CRU, IDW, ArcMap 10.4.1
I. INTRODUCTION
Temperature and rainfall are key elements of the hydrological cycle and play an important role in water allocation. About one sixth (1/6th) of the world's population depends upon water coming from frozen sources as its water resource [1]. The melting of this water is influenced by temperature variations, thus affecting the dependents. The upper part of the KRB is covered with snow throughout the year. Thus, analysis of temperature and rainfall variation is necessary for the basin for better water distribution [2].
Higher temperature over a region is a feature of global warming. The global average temperature has increased by an average of 0.85 °C during 1800-2012 relative to 1961-1990 [3] and by 0.74 °C [4]. This increasing temperature ultimately raises the flow of water coming from frozen sources and results in changes to the water distribution.
For this study, the climatic component of the water balance approach was acquired from the Climatic Research Unit Time Series version 4.03 (CRU TS4.03), produced by the University of East Anglia, England. It includes various variables such as rainfall, cloud cover, temperature, potential evapotranspiration, vapor pressure, and frost day frequency on a monthly basis from January 1901 to December 2018. Station anomalies are interpolated into 0.5° x 0.5° (latitude-longitude) grid cells covering the entire global land surface [5].
A. Study Area
The study is carried out on the Kabul river basin, which lies between longitudes 67°40′ and 75°42′ east and latitudes 33°33′ and 36°02′ north [6]. The Kabul river, 700 km long, originates from the Sanglakh Range of the Hindu Kush Mountains in Afghanistan, passes through Nowshera, Pakistan, and ends in the Indus river near Attock [7]. The Kabul river basin is in the south-east of Afghanistan and is part of the Indus river catchment. The catchment area of the basin is 68,040 km², of which 78% lies in Afghanistan while 22% is within Pakistan's boundaries, as depicted in Figure 1. The eastern portion of the basin that emerges from Pakistan has higher elevation and is covered by snow for most of the year [8].
B. Temperature and Rainfall Data
Monthly temperature and rainfall data at a spatial resolution of 0.5° were downloaded from CRU TS4.03 (http://www.cru.uea.ac.uk/data) for January 2004 to December 2018. Data were acquired for each grid cell covering the study area and exported to an Excel spreadsheet. The average of all grid cells was then calculated to obtain temperature and rainfall series for the entire basin.
C. Methodology
The entire study basin was divided into grid cells of 0.5° x 0.5° resolution. Data from each grid cell were obtained and the mean value for the basin was then calculated using the weighted mean technique.
Inverse distance weighted maps were made using the GIS software ArcMap 10.4.1. The downloaded climatic data were arranged in Excel spreadsheets and opened in ArcMap. After interpolation of the data, monthly IDW maps of temperature and rainfall from 2004 to 2018 were produced.
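For readers without ArcMap, a minimal sketch of the inverse-distance-weighting idea behind these maps is shown below; the sample coordinates, the power parameter p = 2 and the function name are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

# Minimal inverse distance weighting (IDW) sketch; assumed power p = 2,
# illustrative points only (not the CRU grid used in the paper).
def idw(points, values, target, power=2.0):
    """Interpolate a value at `target` from known (lon, lat) `points`/`values`."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    dist = np.linalg.norm(points - np.asarray(target, dtype=float), axis=1)
    if np.any(dist == 0):                 # target coincides with a known point
        return values[dist == 0][0]
    weights = 1.0 / dist ** power         # nearer points get larger weights
    return float(np.sum(weights * values) / np.sum(weights))

# Example: interpolate temperature at an arbitrary location between grid centres.
grid_centres = [(68.0, 34.0), (69.0, 34.5), (70.0, 35.0)]
temps = [12.3, 8.7, -1.2]
print(idw(grid_centres, temps, target=(68.8, 34.4)))
```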
A. Spatial Analysis of Temperature and Rainfall
Spatial analysis of the temperature and rainfall data downloaded from CRU was done in ArcMap. Monthly IDW maps were made to inspect temperature and rainfall variations in the KRB. Annually, temperature ranges from -1.2 °C to 18.9 °C, as shown in figure 4 (a). The upper part of the basin has negative temperatures, is covered with snow throughout the year, and serves as a source of frozen water, while the lower part that lies within Pakistan's boundary, a plain area, has a maximum temperature of 18.9 °C. The monthly maps shown in Figure 2 depict that in January the temperature of the study basin falls to a minimum of -13 °C, so precipitation in the upper part falls as snow at this minimum temperature. In June, the temperature rises to a maximum of 28 °C, which results in snow melting and causes the stream flow to reach its peak rate. The rainfall map divides the entire basin into two zones, semi-arid and sub-humid. The semi-arid zone receives 300 to 600 mm of annual rainfall and covers 70% of the area, while the other 30% lies in the sub-humid zone with 600 to 1000 mm annual rainfall, as shown in figure 4 (b). The rainfall spatial analysis shows a minimum rainfall of 380 mm and a maximum of 793 mm. In the upper part of the basin, rain falls as snow. March has the maximum rainfall of 118 mm over the basin, while September has the minimum rainfall of 2 mm (figure 3).
B. Trend Analysis of Temperature and Rainfall
Graphs were made to examine the trends of temperature and rainfall in the KRB. Figure 5 gives the trendlines for seasonal as well as annual temperature and rainfall. It can be observed that the summer temperature trendline is increasing. The same trend can be seen for annual temperature over the 15-year period from 2004 to 2018. This increasing temperature is affecting the snow budget, as part of the basin is snow-covered, and this change influences downstream water allocation.
Similarly, for rainfall trend estimation a graph was developed and the trendline was analyzed on a seasonal and annual basis. The winter season shows a declining trendline over the study period, which demands proper management of the water resources used by dependents. Annually, a slight increasing trendline is observed.
CONCLUSIONS
The study concluded that the temperature has an increasing trend in summer, while on an annual basis the trend is also slightly increasing. The mean maximum annual temperature of the basin over the 15 years was 19 °C and the minimum was -1.2 °C. Seasonally, temperature falls to -13 °C in January and rises to 28 °C in June. The annual rainfall range for the basin over the 15-year period is 380 mm to 793 mm. Monthly analyses concluded that March has the maximum rainfall of 118 mm and September the minimum of 2 mm rainfall. | 2020-04-13T18:18:40.384Z | 2020-04-12T00:00:00.000 | {
"year": 2020,
"sha1": "e91a350a77ef40ecfe61c804f09c6b89796fbfde",
"oa_license": "CCBY",
"oa_url": "https://www.ijew.io/pdfdownload/spatial-variation-of-temperature-and-rainfall-trends-in-kabul-river-basin",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e91a350a77ef40ecfe61c804f09c6b89796fbfde",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
4016723 | pes2o/s2orc | v3-fos-license | The Distribution of Henipaviruses in Southeast Asia and Australasia: Is Wallace’s Line a Barrier to Nipah Virus?
Nipah virus (NiV) (Genus Henipavirus) is a recently emerged zoonotic virus that causes severe disease in humans and has been found in bats of the genus Pteropus. Whilst NiV has not been detected in Australia, evidence for NiV-infection has been found in pteropid bats in some of Australia’s closest neighbours. The aim of this study was to determine the occurrence of henipaviruses in fruit bat (Family Pteropodidae) populations to the north of Australia. In particular, we tested the hypothesis that Nipah virus is restricted to west of Wallace’s Line. Fruit bats from Australia, Papua New Guinea, East Timor and Indonesia were tested for the presence of antibodies to Hendra virus (HeV) and Nipah virus, and tested for the presence of HeV, NiV or henipavirus RNA by PCR. Evidence was found for the presence of Nipah virus in both Pteropus vampyrus and Rousettus amplexicaudatus populations from East Timor. Serology and PCR also suggested the presence of a henipavirus that was neither HeV nor NiV in Pteropus alecto and Acerodon celebensis. The results demonstrate the presence of NiV in the fruit bat populations on the eastern side of Wallace’s Line and within 500 km of Australia. They indicate the presence of non-NiV, non-HeV henipaviruses in fruit bat populations of Sulawesi and Sumba and possibly in Papua New Guinea. It appears that NiV is present where P. vampyrus occurs, such as in the fruit bat populations of Timor, but where this bat species is absent other henipaviruses may be present, as on Sulawesi and Sumba. Evidence was obtained for the presence of henipaviruses in the non-Pteropid species R. amplexicaudatus and in A. celebensis. The findings of this work fill some gaps in knowledge of the geographical and species distribution of henipaviruses in Australasia, which will contribute to planning of risk management and surveillance activities.
Introduction
Hendra virus (HeV) and Nipah virus (NiV) are paramyxoviruses of the genus Henipavirus, and bats of the Genus Pteropus (Family Pteropodidae) have been identified as their primary wildlife reservoir [1]. These viruses have repeatedly spilled over from the reservoir hosts to cause disease in domestic animals and humans in Australia, Malaysia, Bangladesh and India [2]. Considerable effort has been expended to determine the distribution of henipaviruses and the bat species that constitute reservoir hosts for HeV and NiV. Serological evidence of infection has been found in 28 species, 12 from the Genus Pteropus (see Table 1). Despite this effort, there are few published accounts of isolation of henipaviruses from wild bats. These include: three isolates of HeV from Pteropus poliocephalus [3]; four isolates of HeV from Pteropus alecto [4]; one isolate of HeV from Pteropus conspicillatus [4]; and single isolates of NiV from Pteropus hypomelanus, Pteropus lylei and Pteropus vampyrus [5][6][7].
Evidence of henipavirus infection has been found across the range of Pteropus bats from eastern Australia, north to Indonesia, Malaysia, Thailand and Cambodia; and west to Bangladesh, India and Madagascar, suggesting that these viruses occur throughout the geographic range of this genus [8]. Henipavirus infection has also been found to be present in Eidolon helvum, a species of fruit bat occurring throughout sub-Saharan Africa [9,10]. Given that over two billion people live in the area where Pteropus and Eidolon bats are present, even sporadic or occasional spillover of virus from bats to humans may result in a significant number of human infections.
Hendra virus has spread from Pteropus bats to horses in Australia on at least 33 separate occasions, always with fatal consequences [4]. Seven humans who have had close contact with infected horses have become infected with HeV, including four fatally [11][12][13]. While the economic and public health consequences of Hendra virus have been limited to date, the effects of Nipah virus have been much more severe. Nipah virus was responsible for an outbreak of disease in pigs and humans in peninsular Malaysia and Singapore in 1998-1999 resulting in the death of over 100 people and the culling of over one million pigs [14]. Since that time there have been at least 10 outbreaks of NiV disease in humans in Bangladesh and India with the resultant death of over 140 people [15]; there has also been clear evidence of human to human transmission of this virus indicating potential for a human epidemic [16].
The apparent distribution of HeV and NiV is currently separated by the biogeographic region known as Wallacea, with the presence of NiV confirmed by viral isolation and/or PCR from P. hypomelanus, P. lylei and P. vampyrus from peninsular Malaysia and Cambodia (Table 1), and apparent on the basis of serological evidence from P. vampyrus on Sumatra, Java and Borneo [17,18]. The presence of Hendra virus has been confirmed by viral isolation from P. alecto, P. poliocephalus and P. conspicillatus from Australia [4,19], and is apparent based on serology in P. scapulatus in Australia [1] and P. hypomelanus, P. neohibernicus, P. capistratus, P. admiralitatum, Dobsonia magna and Dobsonia andersoni from Papua New Guinea [20]. It is not known whether the distributions of HeV and NiV are mutually exclusive or overlap, or indeed if other henipaviruses exist between the locations where HeV and NiV occur. It is possible that HeV and NiV are relatively host species-specific, and that this has resulted in the apparent lack of overlap of the two viruses. Furthermore, it may be that there is some form of competitive exclusion of one virus by the other from each of the two respective regions. It has been known for well over 100 years that a major biogeographic barrier exists between the Australo-Papuan and Wallacean region on the one hand, and southeast Asia on the other, with different groups of both terrestrial vertebrates and invertebrates occurring on either side of this 'line' [21]. It has even been suggested that this boundary has protected Australia from the recent H5N1 avian influenza epidemic [22]. Of the major groups of terrestrial mammals, only rodents and bats extend across this region from southeast Asia into Australia. There are 13 species of Old World fruit bat (Family Pteropodidae) that occur only to the west of Wallace's Line and 67 species that are confined to the east, while 20 species have wide distributions throughout the region and occur on both sides of the line [23].
The aim of this study is to investigate the occurrence of henipaviruses in fruit bat populations in the regions of northeast Australia (Queensland), New Guinea (Papua New Guinea) and Wallacea (Indonesia and East Timor). In particular we tested the hypothesis that Nipah virus is restricted in distribution to west of Wallace's Line.
Fruit bats (Family Pteropodidae) were sampled from northeast Australia, Papua New Guinea (Western Province and Madang Province), East Timor (Cova Lima Province) and Indonesia (Sulawesi and Sumba), and tested for the presence of anti-Hendra virus (HeV) and anti-Nipah virus (NiV) antibodies. PCR tests were also conducted to determine the presence of henipavirus RNA.
Ethics Statement
All animal work followed the guidelines of the American Society of Mammalogists and the National Health and Medical Research Council of Australia [24,25]. The study was approved by the Animal Ethics Committee of the Queensland Department of Primary Industries and Fisheries (Permit number FN 47/2003-1) and the Queensland Parks and Wildlife Service (Permit number WISP03721106).
The virus neutralisation test (VNT) results from 66 of the 109 fruit bats from Papua New Guinea presented here have been previously reported in Breed et al. [26]. The analytical approach presented in this study is novel and different to that reported in the previous study.
Study Sites
Bats were sampled from the following locations: Townsville and Cairns, Queensland Australia; areas around Bensbach and Mabudawan, Western Province, Papua New Guinea; Madang township, Madang Province, Papua New Guinea; areas around Suai, Cova Lima District, East Timor; Waikabubak area, Sumba, Indonesia; areas around Manado, North Sulawesi Province, Indonesia.
Capture and Sampling
Bats were caught in 12 m or 18 m mist nets suspended between two 12 m poles and anaesthetised for collection of samples. In Australia, inhalation anaesthesia, delivering isoflurane (Isoflurane, Laser Animal Health Pty Limited) and oxygen via an anaesthetic machine, was used following the protocol described by Jonsson et al. [27]. In East Timor, Papua New Guinea, and Indonesia, bats were anesthetised using a combination of ketamine (Ketamil, Ilium, Smithfield, Australia) and medetomidine (Domitor, Novartis, Pendle Hill, Australia) injected into the pectoral muscles at similar doses to those used by Middleton et al. [28]. Atipamezole was used to reverse the effects of medetomidine. Blood samples were collected by venepuncture of the propatagial vein and aspiration of blood with a 23 or 25 gauge needle and 1 mL or 3 mL syringe depending on the size of the animal. Blood was allowed to clot in 2 mL tubes for 24 hours before centrifugation and separation of serum and storage at 4 °C until testing. Samples of urine and saliva were collected onto cotton swabs and stored in viral transport media or an RNA stabilisation reagent (RNAlater, Qiagen, Doncaster, Australia) for the detection of viral RNA by RT-PCR. Each individual bat from Australia, Papua New Guinea, East Timor and Sumba had samples of urine, saliva and blood tested by PCR, while the urine samples from bats in Sulawesi were pooled, with samples from five to 10 individuals per pool.
Serological Tests
The currently accepted reference procedure for detection of antibodies to HeV and NiV is the VNT according to Daniels et al. and the OIE [29,30]. This was performed on the serum samples at the Australian Animal Health Laboratory in Geelong, Victoria, Australia, which is the World Organisation for Animal Health (OIE) reference laboratory for HeV and NiV viruses. A serum sample was considered positive if it neutralised HeV or NiV at a dilution of 1:5 or greater in the VNT. According to the OIE Manual of Diagnostic Tests and Vaccines for Terrestrial Animals ''Anti-HeV antiserum neutralises HeV at an approximately fourfold greater dilution than that which neutralises NiV to the same extent. Conversely, anti-NiV antiserum neutralises NiV approximately four times more efficiently than HeV [14,30].'' Hence we categorised sera as reacting to HeV or NiV if a four-fold difference in titre was observed, or equivocal if comparative titres were equal or showed a two-fold difference. Luminex binding and inhibition serological assays were also performed on sera where sufficient sample volumes were available according to Bossart et al. [31].
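As an illustration of the four-fold titre rule described above, a small helper sketch is given below; the function name and the example titres are hypothetical and are not data from this study.

```python
# Illustrative sketch of the four-fold VNT titre rule described above.
# Function name and example reciprocal titres are hypothetical, not study data.
def classify_serum(hev_titre, niv_titre):
    """Return 'HeV', 'NiV' or 'equivocal' based on reciprocal VNT titres."""
    if hev_titre >= 4 * niv_titre:
        return "HeV"
    if niv_titre >= 4 * hev_titre:
        return "NiV"
    return "equivocal"   # equal or only two-fold difference

print(classify_serum(80, 20))   # -> HeV
print(classify_serum(10, 20))   # -> equivocal
```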
Viral RNA Detection (PCR) Tests
Presence of HeV and NiV nucleic acid was tested for by real-time reverse-transcriptase (RT) PCR assay (RT-qPCR) (TaqMan) according to Smith et al. [32] (HeV M gene) and Guillaume et al. [33] (NiV N gene) at the Australian Animal Health Laboratory. Additionally, samples from Indonesia and Papua New Guinea were tested for henipavirus RNA using a consensus RT-qPCR assay for the N gene according to Feldman et al. [34] at Queensland Health Forensic and Scientific Services for Public Health Virology. As these assays were in a developmental stage at the time the work was conducted, strict cycle threshold (CT) cutoff values were not available. However, it was generally considered that samples with CT values <35 were positive, 35-45 were 'suspect' positive, and >45 were negative. Samples returning 'suspect' positive results were not tested further.
Further to this, a subset of samples from East Timor and Papua New Guinea were tested for NiV nucleic acid using a duplex nested conventional RT-PCR for the N gene according to Wacharapluesadee et al. [35]. A sample was considered positive if a band of appropriate size was visualised. All samples producing such bands were sequenced to determine their genetic relationship to known henipavirus isolates.
Virus isolation was attempted from samples collected into viral transport media from which positive PCR results were obtained where qPCR indicated an adequate amount of viral material to be present.
Results
Fruit bats were sampled for the presence of henipaviruses in Australia (Townsville and Cairns in Queensland), Papua New Guinea (Western Province and Madang Province), East Timor (Cova Lima Province) and Indonesia (north Sulawesi and Sumba), all of which are located to the east of Wallace's Line. The following results were obtained from the various geographic locations:
Australia
Sixty-four P. alecto were sampled near Townsville in January 2005 following detection of HeV in a horse in December 2004. HeV RNA was detected in the blood, urine and saliva of one subadult male P. alecto by RT-qPCR at CT values 35.3, 37.6 and 39.9 respectively. This animal tested negative for the presence of HeV antibodies by VNT. Antibodies to HeV were detected in 28 of the 64 (44%, 95% CI 32-56) animals tested. Comparative HeV-NiV titres were not performed on these sera.
One-hundred-and-eighty P. conspicillatus were sampled near Cairns in June and November of 2005 following spillover of HeV to a horse and subsequently a human in October 2004. Neither HeV nor NiV RNA was detected in any of the animals sampled. Antibodies to HeV, NiV or both viruses were detected by VNT in 119 of 180 (66%, 95% CI 59-73) animals sampled. Of the animals testing positive on VNT, based on neutralising antibody titre, 52 (43.7%) indicated exposure to HeV, 8 (6.7%) indicated exposure to NiV, and 59 (49.5%) showed equivocal titres (see Table 2; Figure 1).
Papua New Guinea
Fruit bats were sampled in Western Province (n = 56) and Madang Province (n = 53). None of the animals tested positive on RT-qPCR for HeV or NiV RNA from samples of blood, urine or saliva.
Eight Dobsonia magna were sampled; one showed a positive VNT to NiV (titre 1:10) but was negative for HeV antibodies. Twenty-one Macroglossus minimus and three Pteropus macrotis were sampled and none showed seroreactivity to either HeV or NiV on VNT.
NiV RNA was detected by RT-qPCR in the blood of one P. vampyrus with a CT value of 43 and in the saliva of four R. amplexicaudatus (CT values 38.5, 41, 39 and 39). No HeV RNA was detected in any of the samples tested. NiV RNA was also detected in the urine of one R. amplexicaudatus by nested RT-PCR and a 357 nucleotide fragment sequence was obtained from the Nucleocapsid-gene. This sequence showed 100% homology to Malaysian NiV isolates from a Pteropus hypomelanus (Genbank accession number AF376747), pig (Genbank accession number AJ627196) and human (Genbank accession number NC_002728); 98% homology to a NiV isolated from Pteropus lylei in Cambodia (Genbank accession number AY858110); and 93% homology to a NiV isolate from a human in Bangladesh (Genbank accession number AY988601).
Henipavirus RNA was detected by generic RT-qPCR in eight pooled urine samples from P. alecto and A. celebensis from Sulawesi, with CT values of 31-34. Henipavirus RNA was detected in the urine sample from one P. alecto from Sumba by RT-qPCR with a CT value of 28. These samples, which tested positive for henipavirus RNA by the generic RT-qPCR, all tested negative for both HeV and NiV RNA by the type-specific RT-qPCRs.
Attempts at viral isolation from samples yielding positive PCR results were all unsuccessful.
Discussion
The aim of this study was to determine the occurrence and diversity of henipaviruses in fruit bat populations in the regions of northeast Australia, New Guinea (Papua New Guinea) and Wallacea (Indonesia and East Timor). Nipah virus has had a much greater impact than HeV on human and domestic animal health to date and hence we proposed to determine if NiV occurs east of the Wallace Line. We also aimed to determine if henipaviruses circulated in fruit bat species other than those of the Genus Pteropus in the Australasian region. We used serological and molecular approaches to determine the presence of henipaviruses in fruit bat populations and attempted to identify the species of virus when evidence of henipaviruses was detected.
Australia
At the time of this study was initiated, the detection of henipavirus RNA in wild bats was a rare event, thus the sampling of a population near Townsville just one month after a nearby spillover event [13], was an opportune time to enhance the likelihood of detecting virus in a bat population. Results from the sampling of 64 P. alecto at a colony just 1 km from where a horse had contracted HeV near Townsville one month previously confirmed the presence of HeV in this bat population with the positive detection of HeV RNA in blood, urine and saliva from a single individual. This finding was supported by the detection of antibodies to HeV by VNT at a seroprevalence of 44% although comparative testing for NiV antibodies was not performed.
The sampling of 180 P. conspicillatus near Cairns (where three spillover events of HeV to horses had occurred in the past [4]) failed to yield any PCR-positive samples for henipavirus RNA. Nevertheless, 67% of the bats had antibodies to henipaviruses, with 43.7% of these indicating exposure to HeV, 6.7% indicating exposure to NiV and 49.6% showing equivocal titres (see Figure 1). Given the previous cases of HeV in horses in this area over a period of eight years and the subsequent frequent detection of HeV from fruit bats in this area by Field et al. [36], it appears highly likely that HeV is endemic in this fruit bat population. However, the high proportion with equivocal titres (49.6%) and the small proportion of samples returning comparative titres indicating NiV exposure (6.7%) are perplexing. Possible explanations for the equivocal titres and those that suggest exposure to NiV include: that the immunological response may vary among individual bats such that a fourfold or higher titre to HeV was not present in all bats following exposure to HeV; exposure of the sampled bats to a different henipavirus from that of HeV; coinfection, or subsequent infection, of bats with HeV and another henipavirus had taken place; the HeV strain used in the VNT is antigenically different to the HeV strain that the bats had been exposed to. The recent detection of Cedar virus in Australian bats may support the second and third explanations above [37].
Papua New Guinea (PNG)
Henipavirus RNA was not detected in samples from any of the bats from PNG despite expending considerable effort to maximise the likelihood of detection of viral RNA. This included: the collection of samples into a commercial RNA preservative ("RNAlater", Qiagen, Doncaster, Australia), as well as viral transport medium, to improve preservation of RNA; the use of a dry-shipper to hold samples at −150 °C immediately following collection until processing at the laboratory; and the screening of samples with a generic "henipavirus" RT-qPCR prior to HeV and NiV specific RT-qPCR testing.
The comparative serology of bats from PNG showed a very high proportion (84.8%) with equivocal titres to HeV and NiV on VNT, with similar and small proportions indicating exposure to HeV (8.7%) and NiV (6.5%) (see Figure 1). Further data are required to determine which species of henipavirus occur in these populations, but it is clear that henipaviruses are indeed present, though their threat to animal and public health remains unclear.
None of 21 Macroglossus minimus sampled showed seroreactivity to either HeV or NiV. This may provide 95% statistical confidence that the M. minimus population does not support henipavirus infection, assuming a minimum seroprevalence in the population of 14% if virus were present and representative sampling of the population. This may suggest that henipaviruses do not circulate in this species, or at least not at the seroprevalence usually detected in Pteropus species. Although M. minimus has a very large distribution from Australia and New Guinea, through the Indo-Malayan archipelago to mainland Asia, these results suggest that it is unlikely to act as a reservoir host if virus infects these animals. None of three sampled Pteropus macrotis individuals was seropositive to henipaviruses, but the small sample size limits meaningful interpretation of these findings.
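A quick back-of-the-envelope check of that claim, under the stated assumptions of a 14% minimum seroprevalence and representative random sampling, is sketched below; the function name is illustrative.

```python
import math

# Probability of detecting at least one seropositive animal when sampling n
# animals from a large population with true seroprevalence p (assumes random,
# independent sampling, as stated in the text).
def detection_probability(n, p):
    return 1.0 - (1.0 - p) ** n

print(round(detection_probability(21, 0.14), 3))        # ~0.958, i.e. >95%
print(math.ceil(math.log(0.05) / math.log(1 - 0.14)))   # minimum n for 95%: 20
```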
East Timor
Several detections of NiV were made by PCR in the samples from East Timor. These positive results were obtained both by RT-qPCR and by nested RT-PCR. The detection of NiV RNA by nested PCR from a urine sample from R. amplexicaudatus allowed amplification and sequencing of a 357 base pair RNA fragment and comparison to published NiV nucleotide sequences. The sequence showed 100% homology with the nucleotide sequence of NiV isolates from Malaysia. This finding is thus consistent with the distribution of R. amplexicaudatus and P. vampyrus in both Timor and Malaysia. The detection of NiV RNA by RT-qPCR in saliva samples from three other R. amplexicaudatus, albeit at high CT values (38.5, 41, 39 and 39), adds weight to the contention that NiV was circulating in this population of bats at the time of sampling. The detection of NiV in the blood of one of the P. vampyrus sampled with a CT value of 43 suggests an extremely small amount of viral RNA may have been present and is of dubious significance when considered in isolation. However when the comparative serology results are considered from the same population of animals indicating exposure to NiV in 86.4% (see Table 2 and Figure 1) of animals, the body of evidence supporting the presence of NiV in fruit bats in East Timor is strong.
The presence of NiV in P. vampyrus on Timor is not completely unexpected given that NiV has been isolated from this species of bat on mainland Asia [7], and that this species also occurs on Sumatra, Java and the Lesser Sunda Islands, including Timor [38]. A genetic study of P. vampyrus indicates a high level of gene flow among populations throughout their range [39], and satellite telemetry has shown P. vampyrus to fly between peninsular Malaysia and Sumatra [40], indicating the potential of viral transfer from one population to another of this species. The finding of NiV RNA in R. amplexicaudatus is surprising as henipaviruses have rarely been found in bat genera other than Pteropus, although antibodies to henipaviruses have been found in a related species Rousettus leschenaulti, in China and Vietnam [41,42]. In the study by Li et al. [41], although five of 16 individuals sampled from one location gave a positive response on ELISA and western blotting assays, these sera did not neutralise HeV or NiV on VNT. This is consistent with our findings in R. amplexicaudatus of a lack of neutralising seroreactivity on VNT, but some seroreactivity on Luminex serology (data not shown). This may be due to a different immune response to henipavirus infections in non-Pteropid bats where a low level of neutralising antibodies are produced that are difficult to detect in current assay systems [41].
Indonesia -Sulawesi
Eight pooled urine samples containing urine from both P. alecto and A. celebensis showed positive results on an RT-qPCR with primers designed for a region of the nucleocapsid gene that is conserved among published HeV and NiV sequences. The CT values of the positive samples were all within the range of 31-34 cycles. These samples were then tested with HeV-specific and NiV-specific RT-qPCRs with negative results. This suggests the virus present in these urine samples was a henipavirus other than HeV or NiV. The comparative serology from these animals showed the highest proportion returning equivocal titres (58.8%) and the rest (41.2%) indicating exposure to HeV, with none indicating exposure to NiV. Possible explanations of these findings include infection of the P. alecto and A. celebensis populations of north Sulawesi with a Hendra-like virus (that differs in nucleotide sequence at the primer binding site of the HeV RT-qPCR), or previous exposure to HeV and current infection with a Hendra-like virus.
Indonesia -Sumba
One P. alecto, an adult male, was captured and sampled on Sumba. Henipavirus RNA was detected in its urine using the RT-qPCR with primers designed for a conserved region of the nucleocapsid gene at a CT of 28. Subsequent testing of the urine sample with HeV and NiV specific RT-qPCR was negative for both viruses. This animal showed equivocal titres to HeV and NiV on VNT. These findings are consistent with infection of this animal by a henipavirus that differs in nucleotide sequence from HeV and NiV but is closely related to both viruses in terms of nucleotide sequence and in elicitation of antibodies that neutralise HeV and NiV at similar titres.
Conclusions
This study showed clear evidence for the presence of NiV east of Wallace's Line in East Timor, although it was not detected in individuals sampled from Sulawesi, Sumba or New Guinea (see Figure 1). This extends the range of areas from which NiV has been detected by PCR from peninsular Malaysia by over 2,500 km to the southeast, to the island of Timor. However, the results from Sulawesi and Sumba suggest NiV may not be present throughout the intervening area. Rather, the distribution of NiV may be linked to the presence of specific fruit bat species, particularly P. vampyrus.
We also found clear evidence of the presence of henipaviruses in non-Pteropus species in Australasia: Acerodon celebensis in Sulawesi and Rousettus amplexicaudatus in East Timor. A single seropositive result in Dobsonia magna from Papua New Guinea adds to several other detections of henipavirus antibodies in bats of this genus [20].
A major finding in this study was evidence for non-NiV, non-HeV henipaviruses in the region. We found molecular evidence for such viruses in Sulawesi and Sumba, with samples positive in a generic henipavirus PCR assay but not in NiV or HeV specific assays. In addition, we found serological indication for such viruses in those two locations, and also in Australia, PNG, and to a lesser extent in East Timor, with samples showing equivocal neutralising antibody titres against both NiV and HeV. While HeV and NiV are the only recognised pathogenic henipavirus species, there is accumulating evidence that other henipaviruses exist [37,43].
As with other emerging infectious diseases of wildlife, serological and virological diagnostic capabilities are limited due to incomplete understanding of the diversity and relatedness of these pathogens (e.g. level of cross reactivity). Further studies utilising enhanced genome detection methods in areas where equivocal serological results are obtained are required to elucidate the risk posed by henipaviruses.
This study, in combination with the serological evidence of henipavirus infection in P. vampyrus from Sumatra, Java and Borneo [17,18], has shown that henipaviruses occur in fruit bats widely across the Sunda Shelf, Wallacea and New Guinea. Future work could be fruitfully directed towards further characterisation of the diversity of henipaviruses in Wallacea and New Guinea where novel henipaviruses may occur. The evidence presented here suggests such viruses do exist, though the threat they may pose to human and animal health remains unclear. Also further investigation of the role of non-Pteropus fruit bats in the ecology of henipaviruses is indicated, particularly members of the genera Rousettus and Acerodon. | 2018-04-03T02:00:10.356Z | 2013-04-24T00:00:00.000 | {
"year": 2013,
"sha1": "cb77226eb4532d5a7b41460d0879c3e6c8611c82",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0061316&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cb77226eb4532d5a7b41460d0879c3e6c8611c82",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221641075 | pes2o/s2orc | v3-fos-license | Forecasting the Leading Indicator of a Recession: The 10-Year minus 3-Month Treasury Yield Spread
In this research paper, I have applied various econometric time series models and two machine learning models to forecast daily data on the yield spread. First, I decomposed the yield curve into its principal components and then simulated various paths of the yield spread using the Vasicek model. After constructing univariate ARIMA models and multivariate models such as ARIMAX, VAR, and Long Short-Term Memory (LSTM), I calculated the root mean squared error to measure how far the results deviate from the observed values. Through impulse response functions, I measured the impact of various shocks on the differenced yield spread. The results indicate that the parsimonious univariate ARIMA model outperforms the richly parameterized VAR method, and the complex LSTM with multivariate data performs as well as the simple ARIMA model.
The treasury yield is decomposed as

r = E[π] + E[r] + IRP + RRP + ε

where:
E[π] = expected trajectory of the consumer inflation rate over the treasury security's duration;
E[r] = expected trajectory of the inflation-adjusted real interest rates;
IRP, RRP = the inflation risk premium and the real rate risk premium in the treasury yield; and
ε = the error term by which the actual treasury yield may deviate from the yield implied by the DTSM model.
Together, the sum E[π] + E[r] represents the expected path of the nominal interest rates. Furthermore, they decomposed the yield spread, i.e. the yield curve's slope, into the slopes of its components: risk premium and expectations. Using data from 1985-Q1 to 2018-Q1, they examined the effects of employing these channels in the probit specifications to estimate the probability of a recession and the significant influences of each channel. More pronounced peaks before the inception of each of the three recessions in the sample suggest that their model's forecasts outperform those from the more traditional probit specifications.
Kelley and Benzoni (2018) modified the aforementioned decomposition to incorporate variables on a shorter time horizon. They found that whenever the Fed eases monetary policy, as evidenced by either a lower real interest rate or a reduced expected real interest rate spread, the probability of a recession within a year rises. This contrasts with a diminished slope of risk premia, which is linked with either a lower or a higher probability of recession, contingent on the origin of the decline. More recently, a reduced slope of the inflation risk premium has signaled a greater chance of recession, and vice versa. Therefore, not every decline in the yield spread is a harbinger of recession, and not all steepening is auspicious news either.
Mertens and co-authors constructed probit models to quantify the probability of recession 12 months ahead (at t + 12) based on the term spread at time t, contingent upon the term spread being above or below a particular threshold. The explanatory variables incorporated are the term premium, the natural interest rate, and the ratio of household net worth to income. They argue that quantitative easing had significantly depressed the term premium component of long-term yields (Bonis, Ihrig, and Wei 2017). In that case the yield curve flattens, as long-term yields decline, without raising recessionary risk, signifying that flattening may not always be bothersome. Keeping in mind that the term spread's predictive relationship does not reveal much about the causes of recession, we should note that yield curve inversions could cause recessions, as elevated short-term rates and tightening policies slow the economy. Alternatively, if investors expect a downturn, then the rising demand for safe, long-term Treasury bonds will reduce the long-term yields, inverting the yield curve.
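To make this type of probit specification concrete, a minimal sketch with synthetic data is shown below; the coefficients, the single term-spread regressor and the 12-month recession indicator are illustrative assumptions, not a reproduction of the cited studies.

```python
import numpy as np
import statsmodels.api as sm

# Minimal probit sketch: P(recession in 12 months) as a function of the
# current term spread. Synthetic data for illustration only.
rng = np.random.default_rng(0)
spread = rng.normal(1.5, 1.0, 500)                       # 10Y-3M spread, percent
latent = -0.5 - 1.2 * spread + rng.normal(size=500)      # assumed latent process
recession_12m = (latent > 0).astype(int)                 # 1 if a recession follows

X = sm.add_constant(spread)
probit_fit = sm.Probit(recession_12m, X).fit(disp=False)
print(probit_fit.params)                                 # constant and slope
print(probit_fit.predict([[1.0, -0.5]]))                 # probability when spread = -0.5%
```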
Trubin and Estrella (2006) emphasized that the yield curve is a more forward-looking leading indicator, as the recession signals that it produces are more advanced than those produced by other variables. Moreover, those signals are very sensitive to changes in the financial markets. As treasury securities don't face a major credit risk premium, they are useful in forecasting the chances of a recession. They claim that using treasury yields whose maturities are far apart generates accurate results in forecasting real activity. At the short end of the curve, the three-month Treasury rate, when applied in conjunction with the 10-year Treasury rate, yields robust and precise results. Furthermore, the 10-year minus 2-year Treasury spread graphically inverts earlier and more frequently than the 10-year minus 3-month Treasury spread, which is typically larger, as depicted in figure 2. The evidence connotes that more pronounced inversions are correlated with deeper downturns. For PCA, I aggregated the data on treasury yields of varying maturities, res_t, from 3-month bills to 30-year bonds, from October 10, 1993 to August 21, 2020. Figure 3 displays a three-dimensional plot of the data on the yield curve. Whilst the considerable proportion of variation in the level is visually striking, the variation in the curvature and slope is less conspicuous. Then, I decomposed the yield curve via its principal components. The two features of any dataset are noise and signal, and Principal Component Analysis (PCA) extracts the signal and diminishes the dataset's dimensionality. This is because it finds the fewest number of variables that explain the largest proportion of the data. It achieves this by transforming the data from a covariance or correlation matrix into a subspace (or an eigenspace) with fewer dimensions where all the explanatory variables are orthogonal to each other, thus avoiding multicollinearity and reducing noise.
These vectors or axes of the eigenspace are known as the eigenvectors, and the eigenvalues determine the length of the vectors.
Whilst I calculated five principal components in total (PC1, PC2, ..., PC5) from the data on ten Treasury yields, I extracted the three latent factors that describe the dynamics of the yield curve: level, slope, and curvature. These are measured by the first three principal components, PC 1, PC 2, and PC 3, as depicted in figure 4. The level refers to parallel shifts in the yield curve; the slope captures the changes in the short- and long-term rates, evident in flattening and steepening of the curve; and twists denote the changes in the curvature. After standardizing the data such that res_t ~ N(0, 1), I calculated the covariance matrix and performed the eigendecomposition on the standardized data, which generated eigenvalues λ and eigenvectors v. Whilst eigenvalues are the scalars of the linear transformation, eigenvectors are vectors whose direction remains unchanged even after applying the transformation.
Then, I arranged λ and v in decreasing order of λ, such that the first eigenvalue contributes the maximum variance to the data. Finally, from the eigenvalues, I calculated the proportion of the total variance explained by each PC_i. The second principal component depicts the slope, which in this case is the difference between the 10-year treasury bond and the 3-month treasury bill yields (the 10Y-3M spread), also called the yield spread and denoted yieldsp. Visually, the slope appears nearly identical to PC 2, as shown in figure 7. Furthermore, the high correlation of 0.916 between the 10Y-3M spread (yieldsp) and PC 2 corroborates the evidence that the second principal component denotes the slope.
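A minimal sketch of this eigendecomposition step is shown below; the random placeholder matrix stands in for the standardized panel of ten Treasury yields (res_t), and the variable names are illustrative.

```python
import numpy as np

# Minimal PCA-by-eigendecomposition sketch. `res` is a placeholder for the
# standardized (mean 0, variance 1) panel of ten Treasury yields; random
# numbers are used here purely for illustration.
rng = np.random.default_rng(1)
res = rng.standard_normal((7000, 10))          # ~daily observations x 10 maturities

cov = np.cov(res, rowvar=False)                # 10 x 10 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)         # eigendecomposition
order = np.argsort(eigvals)[::-1]              # sort by decreasing eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()            # variance explained by each PC
scores = res @ eigvecs                         # PC1..PC10 time series (level, slope, ...)
print(np.round(explained[:3], 3))              # share explained by the first three PCs
```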
Vasicek Interest Rate Model
For univariate forecasting, the data comprise yieldsp from January 4, 1982 to August 21, 2020. A stochastic technique for modeling the instantaneous movements in the term structure of forward interest rates is the Vasicek interest rate model. The factors that describe the trajectory of interest rates are time, market risk, and the equilibrium value, wherein the model assumes that the rates revert to the mean of those factors as time passes. The larger the mean reversion, the lower the probability that the interest rates will stay close to their current values; hence the rates will drift rapidly towards their mean values over time. I have constructed a simple stochastic model to simulate the yield spread, which follows the same procedure as forecasting short-term interest rates. This method employs maximum likelihood estimation to derive the parameters of the Vasicek model, which is of the form

dr_t = k(θ − r_t)dt + σ dW_t

where:
k = strength of mean reversion, or the speed at which the yield spread rates revert to the mean θ;
θ = level of mean reversion, or the long-run level of the yield spread;
σ = volatility in the yield spread at time t;
r_t = short rate (yield spread) at time t;
k(θ − r_t) = expected change in the yield spread, or the drift term, also known as the mean reversion in the Vasicek model; and
W = random market risk that follows a Wiener process.
If the long-run mean value is less than the current rate, then the drift adjustment component will be negative.
Consequently, the short-term rate will be in close proximity to the mean-reverting level. If r_t > θ, then the model pulls it down, and if r_t < θ, then it is pushed up. Estimating the expected ex-ante yield spread rates, the Vasicek model calibrates a weighted average between the yield spread currently at time t and the expected long-term value θ. In a nutshell, it forecasts the value of the yield spread at the end of a time period, contingent upon the recent volatility in the market, the long-run average yield spread rate, and the market risk factor. The concept of mean reversion fails in periods of soaring inflation, economic stresses, and during crises. The interest rates (or the prices of securities) quickly incorporate economic news when the mean-reversion parameter k is large. In reverse, the effects will last longer if k is small. To avoid "discretization errors", I have used the closed-form transition

r_{t+Δt} = θ + (r_t − θ) e^{−kΔt} + σ √((1 − e^{−2kΔt}) / (2k)) Z_t, with Z_t ~ N(0, 1),

and employed this stochastic technique to model the yield spread ex-ante. Simulating from r_0 = last observed value, I have assumed that the long-run yield spread θ = 1.75%, which is the mean of the data. Then, I generated a sequence of 10 ex-ante trajectories of the yield spread in each of the graphs below.
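A minimal simulation sketch of that closed-form transition is given below; the values of k and σ are illustrative assumptions, while θ = 1.75 and the 10 paths follow the text above.

```python
import numpy as np

# Minimal Vasicek simulation using the exact (discretization-error-free)
# transition quoted above. k and sigma are assumed illustrative values;
# theta = 1.75 and 10 paths follow the text.
def vasicek_paths(r0, k, theta, sigma, dt=1 / 252, horizon=252, n_paths=10, seed=0):
    rng = np.random.default_rng(seed)
    paths = np.empty((n_paths, horizon + 1))
    paths[:, 0] = r0
    decay = np.exp(-k * dt)
    stdev = sigma * np.sqrt((1 - np.exp(-2 * k * dt)) / (2 * k))
    for t in range(horizon):
        z = rng.standard_normal(n_paths)
        paths[:, t + 1] = theta + (paths[:, t] - theta) * decay + stdev * z
    return paths

paths = vasicek_paths(r0=0.55, k=2.0, theta=1.75, sigma=0.8)
print(paths[:, -1].round(2))    # simulated yield spreads one year ahead
```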
We observe the model's mean-reverting nature by specifying r_0 further away from θ. Over time this pulls the yield spread towards θ, and the magnitude of k controls the speed of the reversion. As k grows, mean reversion quickens. Likewise, a larger σ widens the volatility and the potential distribution of rates. Increasing the speed of the mean reversion k intensifies the pace at which yieldsp converges to its long-run level θ. When σ shoots up by 5 times, volatility rises, increasing the fluctuations in the ex-ante paths of yieldsp and making it harder to converge to the long-run level of the yield spread θ. Thus, knowing the long-run value θ of the short-term yield spread rates and the mean-reversion adjustment rate k enables us to calculate the evolution of the yield spread rates using the Vasicek model. Some of the caveats of the Vasicek model include that, firstly, the equation can only analyze one market risk factor at a time. Secondly, the long-term rates have a relatively larger effect on the short-term rates than the short-term rates themselves. Lastly, the model overstates the long-term volatility and understates the short-term volatility. Next, I have applied classical time series methods to forecasting the yield spread.
Stationarity and ARIMA
Before constructing a model, I checked for stationarity of the yield spread using the Augmented Dickey Fuller Test.
Here, the null hypothesis is that the series has a unit root, i.e. is non-stationary. A low p-value of 0.008 indicates that the data are stationary, resulting in the data having constant variance and covariance. Thus, the variance of yieldsp is not a function of time. The ACF plot in figure 9 shows strong autocorrelation across lags, as the spikes reduce only very gradually. The partial autocorrelation (PACF) measures the correlation between y_t and y_{t−n} after we control for correlations at intermediate lags. So, when we regress y_t against a constant, y_{t−1}, ..., y_{t−n}, the PACF at lag n is the regression coefficient on y_{t−n}.
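A minimal sketch of this stationarity check with statsmodels is shown below; the file name and column name are placeholders for the daily yield spread series described above.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Minimal stationarity check; 'yieldsp.csv' and its column names are placeholders.
yieldsp = pd.read_csv("yieldsp.csv", parse_dates=["date"], index_col="date")["yieldsp"]

adf_stat, p_value, *_ = adfuller(yieldsp.dropna())
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")  # p < 0.05 => stationary

plot_acf(yieldsp.dropna(), lags=40)    # slow decay suggests differencing
plot_pacf(yieldsp.dropna(), lags=40)   # significant early lags suggest AR terms
plt.show()
```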
The PACF shows a significant lag for perhaps 3 days, with significant lags spotty out to perhaps 20 days. Due to the considerable significant spikes shown in the ACF plot, I differenced yieldsp as yieldsp_diff (written as Δyieldsp in equations) and noticed that the differenced series is not as heavily serially correlated as previously depicted in the levelled data. From figure 10, the significant spikes of up to three lags in the ACF and PACF plots of the differenced yield spread suggest that we can try p = 3 and q = 3 in the ARIMA model. First, I have modeled ARIMA on the levelled yieldsp, and then on the differenced series.
Fitting the model yields in the entire dataset yields the following equation with : The residuals from both the levelled and difference ARIMA models are stationary as the joint probability distribution ε t of does not depend on t for any k, and for all . Following the same procedure of ε , ε , .
modeling the difference ARIMA in the train set using a walk-forward validation scheme as done previously (where the model is updated each time it received a new data), the test RMSE from the difference model is ARIM A(3, , ) − 1 3 Figure 13 compares the ex-ante yieldsp from the leveled and the difference ARIMA models, and the ex-post .05615. 0 values of yieldsp for the most recent period from April 4, 2020 to August 21, 2020. Heteroskedastic (GARCH) model to show heteroscedastic variances. In financial economics, a change in variance in one time may change the variance in the same direction in the subsequent time, as observed in the yield spread data. For instance, a drop in yield spread when recession is imminent worries investors about deteriorating economic and financial conditions, leading to further drop in yield spread. This can occur when the yields on long term treasuries plummet, shrinking the gap between short and long term yields, and may cause yieldsp to be negative when short term yields surpass the long term yields, signalling a contractionary economy. If the variance of the yield spread represents the riskiness of the spread, then certain time periods are riskier than others; connoting that magnitude of error in certain times is greater than in other times. Heightened riskiness depicts volatility clustering.
Moreover, as these risky times are not randomly scattered across the daily time series, a degree of autocorrelation is present in the yield spread. This downward trajectory of volatility is evidence of heteroskedasticity, where errors are serially correlated. Thus, the series of errors is conditionally heteroskedastic. Even under heteroskedasticity, the OLS regression coefficients are unbiased, but the confidence intervals and standard errors generated conventionally appear very narrow, giving a misleading sense of precision. Rather than considering this as a problem to be resolved, we use (G)ARCH models that treat heteroskedasticity by modelling the variance. As a result, this not only resolves the defects of OLS, but also forecasts the variance of the residuals. The tricky aspect of conditional heteroskedasticity is that the ACF plots of a very volatile series may misleadingly suggest a stationary discrete noise process, albeit the series actually has a unit root with varying variance.
A process is a generalized autoregressive conditional heteroskedastic model of order p and q, GARCH(p, q), if its conditional variance follows σ_t² = ω + Σ_{i=1..q} α_i ε_{t-i}² + Σ_{j=1..p} β_j σ_{t-j}². Useful for forecasting volatility, (G)ARCH models the change in variance over a time period as a function of the residual errors from a mean process. These models explicitly differentiate between conditional and unconditional variance, and let the former change as a function of historical errors. I specified a lag parameter q to state the number of historical residual errors incorporated in the model. (G)ARCH models are applied to stationary series, devoid of seasonal and trend components, but the series can have non-constant variance. These models project the ex-ante variance for a given time horizon. A (G)ARCH process has a mean of 0 and is an uncorrelated process with heteroskedasticity conditional on the lagged values, but constant unconditional variance. Empirically, after constructing the ARIMA, we can use (G)ARCH to model the expected variance of the residuals, provided that the residuals are not serially correlated and don't have any trend or seasonal patterns.
When fitting an AR(p) to the residuals, figure 14 demonstrates the decay of the p lags displayed in the ACF plots of the residuals and the squared residuals. To apply GARCH we need to ensure that the mean of the residuals is 0; from the descriptive statistics in the table, the mean of the residuals obtained from ARIMA(1, 0, 3) is very minuscule (close to zero). Whilst the residuals from ARIMA(1, 0, 3) appear to be a white noise process, the squared residuals are highly serially correlated. The slow decay of successive lags indicates conditional heteroskedasticity. The Q-Q plot indicates that while the points fall along the line in most parts of the graph, they curve off in the extreme right-hand side, suggesting that the residuals are more extremely valued than otherwise found in normally distributed data. Given that the lags in both the PACF and ACF plots are significant, the model would be better fit with both AR and MA parameters. So, I have fit a GARCH model and checked how the residuals from it behave. From the generalized model we can write the fitted conditional variance equation. The estimates of all parameters except β_3 fall within their respective confidence intervals, and the AIC is -2,466.6. While the residuals (from GARCH) plot (not shown) looks like a realization of a discrete white noise process, indicating a good fit, the squared residuals are still not fully white noise, albeit less serially correlated than the squared residuals from the ARIMA. So, the GARCH has not properly "explained" the serial correlation present in the squared residuals, inhibiting me from predicting in the test set.
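A sketch of fitting a GARCH model to the ARIMA residuals along the lines described above, using the arch package and continuing from the yieldsp series; the mean-model order follows the text, while the GARCH(1, 1) order and rescaling are assumptions.

```python
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

# Fit the mean model and extract its residuals (order taken from the text)
mean_fit = ARIMA(yieldsp, order=(1, 0, 3)).fit()
resid = mean_fit.resid.dropna()

# GARCH(1, 1) on the residuals with a zero mean; rescaling helps the optimizer
garch = arch_model(resid * 100, mean="Zero", vol="GARCH", p=1, q=1)
garch_fit = garch.fit(disp="off")
print(garch_fit.summary())

# Ex-ante variance forecast for the next 5 days
print(garch_fit.forecast(horizon=5).variance.iloc[-1])
```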
Multivariate Forecasting
Next, I have forecasted yieldsp using multivariate daily data from January 1, 1990 to June 1, 2020, obtained from FRED.
The variables incorporated are: 1. Real-time Sahm Rule recession indicator (sahm): It signals the beginning of a recession when the three-month moving average of the U3 unemployment rate increases by at least 0.5 percentage points relative to its lowest level in the previous twelve months. It is based on real-time data, i.e., the values of the current and historical unemployment rate available at a particular month. Rising percentage points of sahm from its average are indicative of an impending contraction, and the number accelerates during a recession.
2. Fitted instantaneous forward rate (forward1yr): the fitted instantaneous forward rate one year hence, generated by Kim and Wright (2005) when they fit a "three-factor arbitrage-free term structure model." It extracts the markets' expectations of the ex-ante paths of variables such as the short-term rates, the term premium in bond yields, etc. A negative correlation with the yield spread shows that it is inversely related with yieldsp.
3. NBER-based recession indicator (rec_ind): A dummy variable equal to 1 in a recessionary period {recession} and 0 otherwise.
4. Term premium (termpr): Like forward1yr, it is fit using the arbitrage-free term-structure model on US Treasury yields. Departing from the expectations hypothesis, the term premium is the difference between the yield and the average expected short rate over the life of the coupon bond. We can attribute the recent declining trend in the term premium to several factors such as a stable below-target inflation rate, more effective and explicit forward guidance, and quantitative easing during the Zero Lower Bound. The absence of inflation and other monetary or fiscal shocks may have diminished the compensation associated with lower risk. A higher term premium due to systemic risks can arise when the yield spread is narrow or on the verge of being inverted.
5. University of Michigan Inflation Expectations (infexp): From the Survey of Consumers, it reflects the latest changes in the prices that consumers expect in the next one year. In theory, inflation expectations rise when investors expect the economy to heat up during easing monetary policy, and vice-versa. This occurs when the short end of the yield curve is at a very low level while the long-term yields may be high, widening the yield spread.
6. CBOE Volatility Index (vix): It conveys investors' expectations of the short-term volatility embedded in the prices of stock index options. Higher volatility usually occurs during unstable financial markets, typically arising during or before a recession when the yield spread significantly narrows and may turn negative.
7. TED spread (ted): It is the difference between the US dollar 3-month LIBOR rate and the 3-month Treasury bill rate.
Before modelling, I applied the K² normality test: transforming the sample skewness and kurtosis derives the test statistic. The null hypothesis is that the data is normally distributed, and the alternate hypothesis is that the data is kurtic and/or skewed (not normally distributed). The small p-value of 0.00 rejects the null hypothesis. Hence, the sample of the yield spread is not normally distributed.
The model diagnostics in figure 16 showcase if any of the OLS assumptions have been violated.
Figure 16: Stationary and serially uncorrelated residuals
Ensuring that the residuals of our model are uncorrelated and normally distributed with zero mean, the model diagnostics imply that the residuals have a Gaussian distribution. The red KDE line closely follows the N(0, 1) line, indicating that the residuals are normally distributed. The standardized residuals appear to be white noise and don't display seasonality, also evident from the correlogram: the time series of the residuals has very low serial correlation with its lagged values. However, the Quantile-Quantile (QQ) plot shows heavy tails: whilst the points lie along the line in the middle, they curve off at the extreme ends. This implies that the residuals have more outliers or extreme values than expected if the residuals were normally distributed. Using the test values of these exogenous variables, I predicted yieldsp for the test set and the confidence interval associated with each forecast.
At each observation, I have generated forecasts using the full history up to that point. The root mean squared error of the forecasts is 0.3458, more than those produced by the univariate models. The monotonically increasing bands around the forecasts in figure 17 indicate that the model's beliefs about the distant future are less precise than those for the near future.
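A sketch of the exogenous-regressor model and forecasting step described above, assuming a SARIMAX-style specification in statsmodels; the ARIMA order, the exogenous column names, and the split point are assumptions.

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

# df is assumed to be a date-indexed DataFrame holding yieldsp plus the exogenous variables
exog_cols = ["sahm", "forward1yr", "rec_ind", "termpr", "infexp", "vix", "ted"]
train_df, test_df = df.iloc[:-100], df.iloc[-100:]

model = SARIMAX(train_df["yieldsp"],
                exog=train_df[exog_cols],
                order=(1, 1, 1))                 # order is an assumption
results = model.fit(disp=False)
results.plot_diagnostics(figsize=(10, 8))        # standardized residuals, KDE, Q-Q, correlogram

# Forecast the held-out period using the test values of the exogenous variables
forecast = results.get_forecast(steps=len(test_df), exog=test_df[exog_cols])
print(forecast.predicted_mean.head())
print(forecast.conf_int().head())
```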
Vector Autoregression (VAR)
Extending the idea of univariate regression to a multivariate time series, the reduced-form vector autoregression of order l, VAR(l), forecasts yieldsp using the l lagged values of itself and the stationary variables. To approximate the process well in the absence of MA terms, we may require a larger AR order. Thus, more parameters will be estimated, increasing the fitted variance. To tame the variance, we can regularize (or apply a shrinkage term to) the VAR model. Firstly, I determined the optimal VAR order from the lag order criterion (AIC). From a pool of models, AIC selects a model such that it generates the smallest one-step-ahead squared forecast error. Setting the range of lags from 1 to 50, figure 18 displays the optimal lag order of 33, and the AIC from the VAR(33) is -4.8102. Graphically, the AIC diminishes as the lag order increases. The forecasts in figure 19 are slightly wavy prior to January 2019, after which they are relatively constant at 0.66, unable to capture the variations in movements from peak to trough, possibly signifying high bias and underfitting the data. This contrasts with the other models that vary in conjunction with the ex-post values over time. A probable reason is that the VAR model is excessively parameterized, as there are 7 variables each with 33 lags plus constants, resulting in the model estimating a total of 238 parameters. A dearth of information to determine the model's coefficients can culminate in diffuse predictive distributions and inaccurate predictions.
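A sketch of the lag-order selection and VAR fit described above, assuming the stationary (differenced where needed) variables are collected in a DataFrame var_df; the column names are assumptions.

```python
from statsmodels.tsa.api import VAR

# var_df: stationary versions of yieldsp and the macro variables (names assumed)
var_model = VAR(var_df.dropna())

# Search lag orders 1..50 and report the information criteria
order_selection = var_model.select_order(maxlags=50)
print(order_selection.summary())

# Fit the order chosen by AIC (33 in the text)
var_results = var_model.fit(33)
print(var_results.aic)

# Out-of-sample forecast starting from the last 33 observations
forecast = var_results.forecast(var_df.values[-33:], steps=30)
```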
Impulse Response Functions (IRF) and Forecast Error Variance Decomposition (FEVD)
I have examined the properties of the VAR by doing structural analyses: impulse responses and forecast error variance decomposition. Individual estimates of coefficients only yield limited information about how a system reacts to a shock, as all the variables in a VAR model are associated with each other. To obtain a broader dynamic picture of the model's behavior, I have constructed graphs of impulse response functions. Impulse responses identify shocks ε_t to the VAR model. In the context of a VAR model, the IRFs trace out the time path of the effects of an exogenous shock to one (or more) of the endogenous variables on some or all of the other variables in the VAR system, given that no future innovations are present in the system. By back substitution, we obtain the standard MA representation of the simultaneous equations. The normalization yields zero covariance between the disturbances of the transformed system. Consequently, we can shock one variable at a time, isolating the effects of the other variables. I have computed the response of Δyieldsp_t to a unit normalized innovation to all the variables in the matrix Y_t. At time horizon h, the impulse response of the variables due to an exogenous shock to Δyieldsp_t is the derivative with respect to the shocks. Using the Cholesky decomposition, we decompose the variance-covariance matrix Σ into a lower triangular matrix P and its transpose P′ such that Σ = PP′, where the diagonal elements of P are positive.
Here, the weights on the lagged shocks in the MA representation are the coefficient matrices of the VAR model. Figure 20 traces the impact of shocks of the macro variables (present in the VAR system) onto the differenced yieldsp. A shock to Δtermpr does not affect Δyieldsp till the second period and raises Δyieldsp to an estimated 0.45 in period four. Thereafter, it gradually slumps into negative territory at about -0.25 before converging to 0 in the tenth period. An impulse response from Δforward1yr has a somewhat opposite effect due to the near reverse in its trajectory over time. As Δyieldsp is stationary, the impulse response of shocks on Δyieldsp decays to 0 by the tenth period, signifying that one-time innovations don't have long-term ramifications on the path of Δyieldsp.
Figure 20: Impulse response functions depicting shocks from the exogenous variables to yieldsp
Another way to characterize the dynamics associated with the VAR is by computing the FEVD from the VAR model.
We can find how much a shock to one variable, such as ted, vix, etc., impacts the variance of the forecast error of a different one, such as Δyieldsp. So, in the short run, i.e. in the same period, Δyieldsp explains 59.78 percent of the variance in the forecast error of Δtermpr; this strong influence on Δtermpr indicates that it is strongly endogenous. ted, vix, Δforward1yr and rec_ind are strongly exogenous as they only weakly influence the prediction of Δtermpr_t. Δtermpr_t itself explains 40.21 percent of the variance in its error. However, in the long run, the influence of Δtermpr_t in explaining the variance in its forecast error diminishes marginally from 40.21 to 38.58 percent, whereas Δyieldsp has an incremental influence, as its effect rises from 59.78 to 60.85 percent relative to the earlier periods.
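A sketch of the impulse response and FEVD computations described above, continuing from the fitted VAR in the earlier sketch; the column names used for the impulse and response are assumptions.

```python
# Orthogonalized impulse responses over a 10-period horizon
# (the Cholesky ordering follows the column order of var_df)
irf = var_results.irf(periods=10)
irf.plot(orth=True, impulse="termpr_diff", response="yieldsp_diff")

# Forecast error variance decomposition over the same horizon
fevd = var_results.fevd(periods=10)
fevd.summary()   # shares of forecast-error variance attributable to each shock
fevd.plot()
```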
Multilayer Perceptron (MLP)
Shifting gears from the classical econometric forecasting tools, I now explore the forecasting mechanisms of neural network models. First, we transform the input of each hidden layer linearly and apply the rectified linear (ReLU) activation function, f(x) = max(0, x), to transform the input non-linearly. This non-linear transformation enables the MLP to identify complex non-linear relationships between yieldsp and the independent variables, even in the presence of missing values, and is robust to noise.
However, the caveat is that we can only provide a fixed number of inputs to generate a fixed number of outputs by enumerating the temporal dependence in the model's design. The fixed mapping function between the inputs and the outputs in the feedforward neural network is problematic when we feed a sequence of inputs into the model.
Prior to fitting the model on the train set, I transformed the time series into a supervised learning problem where the observations in the previous time steps become inputs to forecast yieldsp_t. Moreover, I rescaled the data such that yieldsp ∈ (-1, 1) for all t. Increasing the number of lags used as explanatory variables or inputs, I have simultaneously expanded the network's capacity by varying the number of neurons in a single hidden layer, albeit at a possible cost of overfitting. Unlike regression predictive techniques like ARIMA that don't consider the complexity arising from sequence dependence among the input variables, recurrent neural networks (RNN) are powerful deep learning methods to manage sequence dependence. RNNs are called recurrent as they perform the same task for every element of a sequence, and the output, i.e. the ex-ante yieldsp_{t+1}, depends on the new input and all the historical values of yieldsp fed in the past. While a vanilla neural network is usually very constrained, as it accepts a fixed-size vector as input and produces a fixed-size vector as output, a type of RNN, the Long Short-Term Memory (LSTM), can capture longer-term dependencies than vanilla RNNs. Its convoluted architecture can be successfully trained using Backpropagation Through Time, and it overcomes the problem of vanishing gradients.
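A sketch of the series-to-supervised transformation and the (-1, 1) rescaling described above, continuing from the yieldsp series; the number of lags is an assumption.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def series_to_supervised(series, n_lags=3):
    """Turn a series into a frame with lagged inputs and a one-step-ahead target."""
    frame = pd.DataFrame({"y": series})
    for i in range(1, n_lags + 1):
        frame[f"lag_{i}"] = frame["y"].shift(i)
    return frame.dropna()

supervised = series_to_supervised(yieldsp, n_lags=3)

# Rescale everything into (-1, 1) before feeding the network
scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = scaler.fit_transform(supervised)
X, y = scaled[:, 1:], scaled[:, 0]
```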
Long Short Term Memory (LSTM)
In contrast with the neurons in vanilla artificial neural networks (ANN), the LSTM networks have memory blocks (or cell states) linked through layers. As shown in the diagram, the cell state C_{t-1} is a horizontal line that runs along the top of the diagram. Like a conveyor belt, it flows down the entire chain, interacting linearly only a few times. The "gates" are structures that optimally regulate the flow (addition or subtraction) of information in the LSTM's cell states. The components of a block enable the LSTM to work smarter than ANNs as it acts as a repository of memory for the most recent sequence. Furthermore, a cell state consists of gates that regulate the block's state and output. Operating on a given sequence of input, each gate within the block uses the sigmoid activation function to control whether it is triggered or not.
If triggered, the state changes and information flows through the cell state conditionally.
A unit, like a mini state machine, comprises three types of gates as depicted in figure 23. Forget Gate: It decides the information to throw away from the cell state. For each number in the cell state C_{t-1}, the output from the forget layer is f_t = σ(W_f · [h_{t-1}, x_t] + b_f), where W_f and b_f are the forget layer's weights and bias. Here, we apply the sigmoid function of the form σ(x) = 1/(1 + e^{-x}), and the output from the sigmoid layer, which lies in [0, 1], describes how much information would be let through the gate.
Input Gate: A sigmoid layer decides which values of the cell state to update, and a tanh layer creates the candidate values to be added to the cell state. Here, h_t is the hidden state at time step t, also known as the "memory" of the network, as it captures information about the sequence of events that occurred in all the previous time steps. It is the output of the current cell.
Output Gate: As before, a sigmoid layer that decides the output based on the cell state's memory and input. In developing the LSTM model's architecture, I applied a regularization technique called "dropout" wherein a proportion of recurrent connections and inputs to the LSTM units are probabilistically excluded from the process of updating weights and activation when we train a network. In this case, I've dropped 20 percent of the data and trained the network for 1000 epochs. This diminishes overfitting and enhances the model's performance. Overfit models that occur in convoluted models tend to describe random noise in the data, instead of explaining the true relationship between variables, raising the variance but reducing the bias. Setting a batch size of 100 propagates 100 observations (in the train set) chronologically through the network each time and fits the model with the observations in the batch size.
Additionally, we can enhance the model's performance by tuning the hyperparameters such as the type of optimizer. This is crucial as, ideally, the optimizer should reach the global minimum where the cost function is the lowest. Thus, I have compiled the model using the iterative RMSprop optimizer. Another regularization tool that I have applied using a callback function is "early stopping." This updates the learner to ensure that the training data is fit better at each iteration and avoids overfitting. Usually, a model may perform well up to a certain point, i.e. the loss is low, whereas the loss rises past that point. Therefore, early stopping guides the number of iterations to run before overfitting the model. Specifically, the callback function monitored the loss in the validation (test) set at each epoch, and training is interrupted if the test set loss does not improve after 10 epochs. If the training loss drops below the test loss, then the model may be overfitting the training data. Whilst I allowed the network to train for 1000 epochs, the model converged to the optimal value in 207 epochs. The LSTM model resulted in train and test RMSEs of 0.0982 and 0.0630, respectively, implying that the model is not overfit. Measuring and plotting the test (val_loss) and train RMSE in figure 24 shows that the loss in the validation set is lower than that of the train set across the 287 epochs.
5. The LSTM block diagram is taken from: https://www.researchgate.net/figure/Block-diagram-of-the-LSTM-recurrent-neural-network-cell-unit-Blue-boxes-means-sigmoid_fig2_328761192
Normally, the state within the network is reset after each training batch when fitting the model, as done in the previous "stateless" LSTM model. However, by making the layer "stateful", we can gain finer control over the internal state of the network by building stacked LSTMs. Here, the output from the hidden and cell state of batch i becomes the input for batch i + 1, as memory in LSTMs enables the network to automatically recall across longer temporal dependence. Stacking is the process of building the layers of LSTM such that the output of one layer becomes the
input of another layer, making the model deeper. This means that it can build state over the entire training sequence and even maintain that state if needed to make predictions. This model yielded train and test set RMSEs of 0.07999 and 0.06547, respectively. Unsophisticated linear models such as ARIMA cannot learn a long-term structure embedded in the data. For instance, an AR(p) cannot learn dependencies greater than p, warranting the series to be locally stationary. Without stationarizing the variables, forecasts behave bizarrely for large values of the time horizon h, and the variance of the forecast errors may explode as h → ∞. Unlike these linear models, LSTMs can learn non-linearities very well, as a larger dataset facilitates them to learn long-term dependencies. Hence, in constructing LSTMs using multivariate data, I used all the variables as given, without stationarizing the variables with unit roots. First, splitting the dataset into train and test sets, I further divided the train and test sets into input and output variables. Thereafter, I defined the LSTM wherein the first hidden layer comprises 25 neurons and the output layer has 1 neuron. Optimizing the model using the RMSprop method, the model computes the loss function via the mean absolute error. Fitting the model for 500 training epochs with a batch size of 50, the LSTM's internal state is reset at the end of each batch. Thus, the internal state is a function of the number of observations. Figure 26 tracks both the train and test loss for up to 500 epochs. Over the first 25 epochs, the loss in the test set rises and is greater than the train set loss, indicative of possible overfitting. However, it gradually recedes and falls below the train loss in later epochs, nullifying the effects and concerns of an overfit model.
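A sketch of the LSTM set-up described in this section (dropout, RMSprop, early stopping, mean absolute error loss); the layer size, batch size and patience follow the text where stated, while the data shapes below are stand-ins rather than the paper's actual inputs.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.callbacks import EarlyStopping

# Stand-in arrays shaped (samples, timesteps, features); in practice these come
# from the supervised transformation of the multivariate data described above
X_train, y_train = np.random.rand(500, 3, 7), np.random.rand(500)
X_test, y_test = np.random.rand(100, 3, 7), np.random.rand(100)

model = Sequential([
    # 25 units in the hidden layer; 20% of inputs and recurrent connections dropped
    LSTM(25, input_shape=(X_train.shape[1], X_train.shape[2]),
         dropout=0.2, recurrent_dropout=0.2),
    Dense(1),
])
model.compile(optimizer="rmsprop", loss="mae")

# Stop training if the validation loss has not improved for 10 epochs
early_stop = EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)

history = model.fit(X_train, y_train,
                    epochs=500, batch_size=50,
                    validation_data=(X_test, y_test),
                    callbacks=[early_stop], verbose=0)
print("final validation loss:", history.history["val_loss"][-1])
```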
Conclusion and Discussion
The emphasis of this paper is to contrast classical and emerging forecasting techniques, using the metric of root mean squared error to evaluate which model produces the best results. Table 6 compares the test RMSE obtained from each model, from which I concluded that predictions from the VAR(33) and the multivariate LSTM lie on opposite ends of the spectrum, with the former performing relatively poorly compared with the latter. The unchanged ex-ante yieldsp from the highly parameterized reduced-form VAR(33) starkly contrasts with those from all the other models, which are able to capture the changing patterns. A possible way to improve the model is by developing a structural VAR, where the innovations have structural interpretations as deeper structural shocks drive the innovations. Alternatively, we could develop a vector error correction model that represents a cointegrating VAR. Besides being more efficient than the VAR, it explains the long- and short-term relationships between the variables, and we can evaluate how deviations from the long run are corrected.
Table 6: Test RMSE of each model.
Similarly, one could construct a Bayesian VAR, built on the fundamentals of Bayes' Theorem. Whilst the reduced-form parameters are passed into the likelihood function, the orthogonal matrix Ω does not enter the likelihood function. Therefore, we cannot identify Ω based on the given sample. Consequently, the conditional posterior of Ω will be identical to the conditional prior, as the distribution of Ω conditional on the reduced-form parameters will not be updated.
Comparatively, identifications don't hinder the conceptual workings of the Bayesian analysis to a great extent. In Bayesian inference, the sample contained in the likelihood function updates the prior to form a posterior distribution. So, we can assign probabilities to the model specifications and update those values after observing the data. It is crucial to know the predictive distributions of ex-ante values such as inflation rate, yield curve and other macroeconomic variables relevant for policy-making. Equally essential is accounting for the uncertainty about the actual shocks and the estimated parameters. As Bayesian methods treat these parameters and shocks symmetrically as random variables, it is straightforward to simultaneously consider these two sources of uncertainty. Finally, we can explore a related methodology called Gaussian Process, as it hinges on multivariate Gaussian distribution. | 2020-09-14T01:00:09.786Z | 2020-09-06T00:00:00.000 | {
"year": 2020,
"sha1": "1cca8173eb500c505f7387cb5115e31a5ffeac40",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2009.05507",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1cca8173eb500c505f7387cb5115e31a5ffeac40",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics",
"Mathematics"
]
} |
253571549 | pes2o/s2orc | v3-fos-license | Dual Language Models for Code Switched Speech Recognition
In this work, we present a simple and elegant approach to language modeling for bilingual code-switched text. Since code-switching is a blend of two or more different languages, a standard bilingual language model can be improved upon by using structures of the monolingual language models. We propose a novel technique called dual language models, which involves building two complementary monolingual language models and combining them using a probabilistic model for switching between the two. We evaluate the efficacy of our approach using a conversational Mandarin-English speech corpus. We prove the robustness of our model by showing significant improvements in perplexity measures over the standard bilingual language model without the use of any external information. Similar consistent improvements are also reflected in automatic speech recognition error rates.
Introduction
Code-switching is a commonly occurring phenomenon in multilingual communities, wherein a speaker switches between languages within the span of a single utterance. Code-switched speech presents many challenges for automatic speech recognition (ASR) systems, in the context of both acoustic models and language models. Mixing of dissimilar languages leads to loss of structure, which makes the task of language modeling more difficult. Our focus in this paper is on building robust language models for code-switched speech from bilingual speakers.
A naïve approach towards this problem would be to simply use a bilingual language model. However, the complexity of a full-fledged bilingual language model is significantly higher than that of two monolingual models, and is unsuitable in a limited data setting. More sophisticated approaches relying on translation models have been proposed to overcome this challenge (see Section 2), but they rely on external resources to build the translation model. In this paper, we introduce an alternate -and simpler -approach to address the challenge of limited data in the context of code-switched text without use of any external resources.
At the heart of our solution is a dual language model (DLM) that has roughly the complexity of two monolingual language models combined. A DLM combines two such models and uses a probabilistic model to switch between them. Its simplicity makes it amenable for generalization in a low-data context. Further there are several other benefits of using DLMs. (1) The DLM construction does not rely on any prior information about the underlying languages. (2) Since the structure of our combined model is derived from monolingual language models, it can be implemented as a finite-state machine and easily incorporated within an ASR system. (3) The monolingual language model for the primary language can be trained further with large amounts of monolingual text data (which is easier to obtain compared to code-switched text).
Our main contributions can be summarized as follows: • We formalize the framework of DLMs (Section 3).
• We show significant improvements in perplexity using DLMs when compared against smoothed n-gram language models estimated on code-switched text (Section 4.3). We provide a detailed analysis of DLM improvements (Section 5). • We evaluate DLMs on the ASR task. DLMs capture sufficient complementary information which we leverage to show improvements on error rates. (Section 4.4).
Related Work
Prior work on building ASR systems for code-switched speech can be broadly categorized into two sets of approaches: (1) Detecting code-switching points in an utterance, followed by the application of monolingual acoustic and language models to the individual segments [1,2,3].
(2) Employing a universal phone set to build acoustic models for the mixed speech and pairing it with standard language models trained on code-switched text [4,5,6,7,8].
There have been many past efforts towards enhancing the capability of language models for code-switched speech using additional sources of information such as part-of-speech (POS) taggers and statistical machine translation (SMT) systems. Yeh et al. [7] employed class-based n-gram models that cluster words from both languages into classes based on POS and perplexity-based features. Vu et al. [9] used an SMT system to enhance the language models during decoding. Li et al. [10] propose combining a code-switch boundary predictor with both a translation model and a reconstruction model to build language models. (Solorio et. al. [11] were one of the first works on learning to predict code-switching points.) Adel et al. [12] investigated how to effectively use syntactic and semantic features extracted from code-switched data within factored language models. Combining recurrent neural network-based language models with such factored language models has also been explored [13].
Dual language models
We define a dual language model (DLM) to have the following 2-player game structure. A sentence (or more generally, a sequence of tokens) is generated via a co-operative game between the two players who take turns. During its turn a player generates one or more words (or tokens), and either terminates the sentence or transfers control to the other player. Optionally, while transferring control, a player may send additional information to the other player (e.g., the last word it produced), and also may retain some state information (e.g., cached words) for its next turn. At the beginning of the game one of the two players is chosen probabilistically.
Figure 1: Given two language models L1 and L2 with conditional probabilities P1 and P2 that satisfy conditions (1)-(4), a combined language model L, with conditional probabilities P, is defined for w in V1 and w in V2.
In the context of code-switched text involving two languages, we consider a DLM wherein the two players are each in charge of generating tokens in one of the two languages. Suppose the two languages have (typically disjoint) vocabularies V1 and V2. Then the alphabet of the output tokens produced by the first player in a single turn is V1 ∪ { sw , /s }, where sw denotes switching, i.e., transferring control to the other player, and /s denotes the end of sentence, terminating the game. We shall require that a player produces at least one token before switching or terminating, so that when V1 ∩ V2 = ∅, any nonempty sentence in (V1 ∪ V2)* uniquely determines the sequence of corresponding outputs from the two players when the DLM produces that sentence. (Without this restriction, the players can switch control between each other arbitrarily many times, or have either player terminate a given sentence.) In this paper, we explore a particularly simple DLM that is constructed from two given LMs for the two languages. More precisely, we shall consider an LM L1 which produces /s -terminated strings in (V1 ∪ { sw })* where sw indicates a span of tokens in the other language (so multiple sw tokens cannot appear adjacent to each other), and symmetrically an LM L2 which produces strings in (V2 ∪ { sw })*. In Section 4.2, we will describe how such monolingual LMs can be constructed from code-switched data. Given L1 and L2, we shall splice them together into a simple DLM (in which players do not retain any state between turns, or transmit state information to the other player at the end of a turn). Below we explain this process, which is formally described in Fig. 1 (for bi-gram language models).
We impose conditions (1)-(4) on the given LMs. Condition (1) which disallows empty sentences in the given LMs (and the resulting LM) is natural, and merely for convenience. Condition (2) states the requirement that L1 and L2 agree on the probabilities with which each of them gets the first turn. Conditions (3) and (4) require that after switching at least one token should be output before switching again or terminating. If the two LMs are trained on the same data as described in Section 4.2, all these conditions would hold.
To see that P[w'|w] defined in Fig. 1 is a well-defined probability distribution, we check that Σ_{w'} P[w'|w] = 1 for all three cases of w, where the summation is over w' ∈ V1 ∪ V2 ∪ { /s }. For w ∈ V1, this follows from two equalities, where the first equality is from (1) and the second equality is from (2). The case of w ∈ V2 follows symmetrically.
Figure 2: DLM using two monolingual LMs, L1 and L2, implemented as a finite-state machine.
Figure 2 illustrates how to implement a DLM as a finite-state machine using finite-state machines for the monolingual bigram LMs, L1 and L2. The start states in both LMs, along with all the arcs leaving these states, are deleted; a new start state and end state are created for the DLM with accompanying arcs as shown in Figure 2. The two states maintaining information about the sw token can be split and connected, as shown in Figure 2, to create paths between L1 and L2.
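To make the splicing concrete, here is a small Python sketch of one plausible reading of the combined conditional probability in Fig. 1, given two bigram LMs exposed as dictionaries of conditional probabilities; since the exact equations of Fig. 1 are not reproduced in the text, this should be read as an illustration rather than the authors' definition.

```python
SW, EOS = "<sw>", "</s>"

def dlm_prob(w_next, w_prev, P1, P2, V1, V2):
    """One plausible spliced-DLM conditional probability P(w_next | w_prev).

    P1 and P2 are dicts mapping (history_token, next_token) -> probability for
    the two monolingual bigram LMs trained with <sw> marking foreign spans.
    """
    if w_prev in V1:
        if w_next in V1 or w_next == EOS:
            return P1[(w_prev, w_next)]
        if w_next in V2:
            # switch out of language 1, then emit the first language-2 token
            return P1[(w_prev, SW)] * P2[(SW, w_next)]
    if w_prev in V2:
        if w_next in V2 or w_next == EOS:
            return P2[(w_prev, w_next)]
        if w_next in V1:
            return P2[(w_prev, SW)] * P1[(SW, w_next)]
    raise ValueError("unexpected token pair")
```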
Data description
We make use of the SEAME corpus [14] which is a conversational Mandarin-English code-switching speech corpus.
Preprocessing of data. Apart from the code-switched speech, the SEAME corpus comprises a) words of foreign origin (other than Mandarin and English), b) incomplete words, c) unknown words labeled as unk , and d) mixed words such as bleach跟, cause就是, etc. Since it was difficult to obtain pronunciations for these words, we removed utterances that contained any of these words. A few utterances contained markers for non-speech sounds like laughing, breathing, etc. Since our focus in this work is to investigate language models for code-switching, ideally without the interference of these non-speech sounds, we excluded these utterances from our task.
Data distribution. We construct training, development and test sets from the preprocessed SEAME corpus data using a 60-20-20 split. Table 1 shows detailed statistics of each split. The development and evaluation sets were chosen to have 37 and 30 random speakers each, disjoint from the speakers in the training data. 1 The out-of-vocabulary (OOV) rates on the development and test sets are 3.3% and 3.7%, respectively.
Monolingual LMs for the DLM construction
Given a code-switched text corpus D, we will derive two complementary corpora, D1 and D2, from which we construct bigram models L1 and L2 as required by the DLM construction in Figure 1, respectively. In D1, spans of tokens in the second language are replaced by a single token sw . D2 is constructed symmetrically. Standard bigram model construction on D1 and D2 ensures conditions (1) and (2) in Figure 1. The remaining two conditions may not naturally hold: Even though the data in D1 and D2 will not have consecutive sw tokens, smoothing operations may assign a non-zero probability for this; also, both LMs may assign non-zero probability for a sentence to end right after a sw token, corresponding to the sentence having ended with a non-empty span of tokens in the other language. These two conditions are therefore enforced by reweighting the LMs.
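A small sketch of deriving the two complementary corpora from code-switched text as described above; language identification by Unicode script (CJK vs. Latin) is an assumed heuristic, not necessarily the one used by the authors.

```python
import re

SW = "<sw>"

def is_mandarin(token):
    # Assumed heuristic: a token is Mandarin if it contains a CJK character
    return re.search(r"[\u4e00-\u9fff]", token) is not None

def collapse_foreign_spans(tokens, keep_mandarin):
    """Replace maximal spans of the other language with a single <sw> token."""
    out = []
    for tok in tokens:
        native = is_mandarin(tok) == keep_mandarin
        if native:
            out.append(tok)
        elif not out or out[-1] != SW:
            out.append(SW)
    return out

sentence = "我们 明天 have a meeting 然后 吃饭".split()
english_side = collapse_foreign_spans(sentence, keep_mandarin=False)
mandarin_side = collapse_foreign_spans(sentence, keep_mandarin=True)
print(english_side)   # ['<sw>', 'have', 'a', 'meeting', '<sw>']
print(mandarin_side)  # ['我们', '明天', '<sw>', '然后', '吃饭']
```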
Perplexity experiments
We used the SRILM toolkit [15] to build all our LMs. The baseline LM is a smoothed bigram LM estimated using the code-switched text, which will henceforth be referred to as the mixed LM. Our DLM was built using two monolingual bigram LMs.
(The choice of bigram LMs instead of trigram LMs will be justified later in Section 5.) We also evaluate perplexities by reducing the amount of training data to 1/2 or 1/3 of the original training data (shown in Table 3). As we reduce the training data, the improvements in perplexity of the DLM over the mixed LM further increase, which validates our hypothesis that DLMs are capable of generalizing better. Section 5 elaborates this point further.
ASR experiments
All the ASR systems were built using the Kaldi toolkit [18]. We used standard mel-frequency cepstral coefficient (MFCC)+delta+double-delta features with feature space maximum likelihood linear regression (fMLLR) [19] transforms to build speaker-adapted triphone models with 4200 tied-state triphones, henceforth referred to as "SAT" models. We also build time delay neural network (TDNN [20])-based acoustic models using i-vector based features (referred to as "TDNN+SAT"). Finally, we also re-scored lattices generated by the "TDNN+SAT" model with an RNNLM [21] (referred to as "RNNLM Rescoring"), trained using Tensorflow [22] on the SEAME training data. 2 We trained a single-layer RNN with 200 hidden units in the LSTM [23] cell. The pronunciation lexicon was constructed from CMUdict [24] and THCHS30 dictionary [25] for English and Mandarin pronunciations, respectively. Mandarin words that did not appear in THCHS30 were mapped into Pinyin using a freely available Chinese to Pinyin converter. 3 We manually merged the phone sets of Mandarin and English (by mapping all the phones to IPA) resulting in a phone inventory of size 105.
To evaluate the ASR systems, we treat English words and Mandarin characters as separate tokens and compute token error rates (TERs) as discussed in [9]. Table 4 shows TERs on the dev/test sets using both mixed LMs and DLMs. The DLM performs better than or on par with the mixed LM and, at the same time, captures a significant amount of complementary information which we leverage by combining lattices from both systems. The improvements in TER after combining the lattices are statistically significant (at p < 0.001) for all three systems, which justifies our claim of capturing complementary information. Trigram mixed LM performance was worse than bigram mixed LM; hence we adopted the latter in all our models (further discussed in Section 5). This demonstrates that obtaining significant performance improvements via LMs on this task is very challenging.
Table 4: TERs using mixed LMs and DLMs
Table 5 shows all the TER numbers when utilizing only 1/2 of the total training data. The combined models continue to give significant improvements over the individual models. Moreover, DLMs consistently show improvements on TERs compared to mixed LMs in the 1/2 training data setting.
Discussion
Code-switched data corpora tend to exhibit very different linguistic characteristics compared to standard monolingual corpora, possibly because of the informal contexts in which codeswitched data often occurs, and also possibly because of the difficulty in collecting such data. It is possible that the gains made by our language model are in part due to such characteristics of the corpus we use, SEAME. (We note that this corpus is by far the most predominant one used to benchmark speech recognition techniques for code-switched speech.) In this section we analyze the SEAME corpus and try to further understand our results in light of its characteristics.
Code-switching boundaries. Code-switched bigrams with counts of ≤ 10 occupy 87.5% of the total number of codeswitched bigrams in the training data. Of these, 55% of the bigrams have a count of 1. This suggests that context across code-switching boundaries cannot significantly help a language model built from this data. Indeed, the DLM construction in this work discards such context, in favor of a simpler model. n-gram token distribution. We compare the unigram distribution of a code-switched corpus (SEAME) with a standard monolingual corpus (PTB [26]). A glaring difference is observed in their distributions (Figure 3-a) with significantly high occurrence of less-frequent unigrams in the code-switched corpus, which makes them rather difficult to capture using standard ngram models (which often fall back to a unigram model). The DLM partially compensates for this by emulating a "class-based language model," using the only class information readily available in the data (namely, the language of each word).
Illustrative examples. Below, we analyze perplexities of the mixed LM and the DLM on some representative sentences from the SEAME corpus, to illustrate how the performances of the two models compare. We observe that when less frequent words appear at switching points (like total, meeting, etc.), the DLM outperforms the mixed LM by a significant margin as illustrated in the first two sentences above. In cases of highly frequent words occurring at switching points, the DLM performs on par with or slightly worse than the mixed LM, as seen in the case of the third sentence. The DLM also performs slightly better within long stretches of monolingual text as seen in the fourth sentence. On the final sentence, which has multiple switches and long stretches of monolingual text, again the DLM performs better. As these examples illustrate, DLMs tend to show improved performance at less frequent switching points and within long stretches of monolingual text.
Effect of Trigrams. In standard monolingual datasets, trigram models consistently outperform bilingual models. However, in the SEAME corpus we did not find a pronounced difference between a bigram and a trigram model. This could be attributed to the fact that the number of highly frequent trigrams in our corpus is lower in comparison to that in the PTB dataset (Figure 3-b). As such, we have focused on bigram LMs in this work.
Conclusions
We introduced DLMs and showed robust improvements over mixed LMs in perplexity for code-switched speech. While the performance improvements for the ASR error rates are modest, they are achieved without the aid of any external language resources and without any computational overhead. We observe significant ASR improvements via lattice combination of DLMs and the standard mixed LMs. Future directions include investigating properties of code-switched text which can be incorporated within DLMs, using monolingual data to enhance each DLM component and demonstrating the value of DLMs for multiple code-switched language pairs. | 2018-04-03T01:02:57.628Z | 2017-11-03T00:00:00.000 | {
"year": 2017,
"sha1": "82e42ee80793587ca23b4f78762fb88fdd69f7b7",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1711.01048",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1138d94f5300798504c6ef84c142cc3581b39dd5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
230568582 | pes2o/s2orc | v3-fos-license | Quantifying Adaptability of College Campus Buildings
While much has been written about adaptable buildings, quantification of adaptability is still in its nascent stage. Little has been published toward validation of quantitative adaptability models. This paper proposes a scoring system for evaluating the design-based adaptability of college campus buildings. This system was created to be a tool to guide future designs. Different physical features (i.e., floor-to-floor height and structural span lengths) of the buildings are considered in the scores. Adaptability scores are calculated for four buildings on Clemson University’s campus. Scores are compared to those from an earlier study of the same buildings; the earlier study quantified adaptability by surveying experts through an Analytic Hierarchy Process (AHP). Both approaches rank the subject buildings in the same order with respect to adaptability. Additionally, scores from both approaches are linearly correlated. These encouraging results suggest that the proposed scoring system is a starting point for quantifying the adaptability of college campus buildings.
Adaptability and Design-Based Adaptability (DBA)
Adaptability has been defined as the ease with which a building can be physically modified, deconstructed, refurbished, reconfigured, repurposed and/or expanded [1]. Similar definitions are presented in books by Schmidt and Austin [2] and Cowee and Schwehr [3]. Physical, economic, functional, technological, social, legal and political factors all impact a building's adaptability [4]. Physical factors that impact adaptability include a building's age and state of repair, as well as the features of its design. The portion of adaptability based on design features has been described as Design-Based Adaptability (DBA) [5]. While DBA is only one contributor to overall adaptability, it is critical because it is the only portion that can be directly impacted (intentionally or otherwise) by design decisions. This paper proposed a quantitative model for scoring DBA of college campus buildings. The model is intended as the first step toward a tool for architects and engineers who are seeking to design more adaptable buildings.
Few models and methods have been proposed to quantify adaptability, and even fewer have been empirically validated [6]. Existing models for measuring DBA [5,[7][8][9] have been created using weighted-sum approaches. In weighted-sum models, a building is first scored for a variety of different parameters (e.g., floor plan openness). Scores are then multiplied by weighting values based on the scale and importance of the parameters, and then products are summed to determine an overall score. The proposed scoring system is also created using the weighted-sum approach, but is distinct from previous models in its use of research data for development and validation.
DBA Strategies
This section briefly reviews relevant strategies for increasing a building's DBA. Words in bold are used as shorthand for describing the strategies. More detailed reviews can be found in the works by Ross et al. [1] and Heidrich et al. [10] and detailed practical examples of each strategy are listed in Table 6.1.
The strategy of Layering building systems was examined by Duffy [11] and Brand [12]. Duffy proposed that buildings should be analyzed as they are built and maintained: in layers such as "shell, services and scenery." Brand observed that building layers are replaced at different rates ( Fig. 6.1). He suggested that the layers be designed with physical and functional separation so each layer can be modified without impacting the others.
Large floor-to-floor heights and wide structural grids are part of the Open strategy. For example, floor-to-floor height dictates if "ample space for HVAC equipment, etc." [13] is available. Small floor heights can constrain the possibility of future changes. Similarly, wide open structures present more options for future change than do densely located structures.
Reserve capacity is providing additional capacity beyond needs for the original building function. Future changes to a building may result in additional technical requirements, these changes can be facilitated by reserve capacity [1]. This idea is typically described in terms of structural capacity, but the strategy can also be applied to building services and space plans. Plan depth is related to the proximity of interior spaces to exterior walls. In the context of adaptability, access to exterior walls is desirable because many potential building functions, particularly those on college campuses, benefit from exterior windows. While plan depth has been reported as being beneficial to adaptability, other building characteristics are reported more frequently [14].
Simple designs can reduce uncertainty associated with adaptation projects. Easy to understand load paths, repeating elements and details, orthogonal walls, and stacking floor plates all contribute to simplicity [1]. As an example from Table 6.1, the plan depth strategy involves creating a building footprint that allows interior spaces to be in close proximity to exterior walls and windows; a building with a shallow plan depth illustrates this strategy [18].
Becker et al. 2020
Becker et al. quantitatively measured DBA of four buildings from the Clemson University campus using an Analytic Hierarchy Process (AHP) [14]. The four buildings were the Watt Family Innovation Center (WFIC), Academic Success Center (ASC), Lee Hall and Stadium Suites ( Fig. 6.2). These buildings were selected for study because of their similar size, age, and quality of materials. AHP is a method that separates multifaceted decisions into a series of pairwise comparisons. Pairwise results are aggregated to determine an overall best option. Experts in the Becker study used AHP to compare the subject buildings according to their relative suitability for different potential adaptation schemes. After aggregating the individual pairwise scores, the buildings' overall adaptability scores were 0.3 for WFIC, 0.23 for ASC, 0.27 for Lee Hall, and 0.2 for Stadium Suites. Higher scores mean that a building is more suited for potential adaptation.
Becker et al. also qualitatively evaluated the buildings' adaptability by asking experts to describe the physical features that made the buildings more or less suitable for potential adaption. Open floor plans and high floor-to-floor heights were the most frequently mentioned features. Some of the other features, listed in order of most-to-least frequently mentioned, included: flexible HVAC systems, overdesigned structure, ease of access/plentiful circulation, and building footprint/plan depth that provides proximity to exterior walls and windows.
Fig. 6.2 (subject buildings, excerpt): Building A (Watt Family Innovation Center) and Building B (Academic Success Center). Features noted in the figure include: 4 stories + basement, total 6070 m²; raised plenum HVAC system; special structure of reinforced concrete cast on a metal deck, composite with beams and columns; green roof; skylights and light sensors.
Scoring System Description
The proposed scoring system measures the DBA of college campus buildings. Previous work by Becker et al. [14] measured DBA of existing campus buildings, whereas the scoring system aims to guide the design of future buildings. The choice to evaluate college campus buildings was made partially for the practical reason that the current researchers had access to detailed drawings and information about the buildings. More importantly, campus buildings were chosen because the stakeholders tend to be long-term owners who are interested in an elongated life for their facilities.
College campuses are always evolving based on new student and faculty needs. The recent transition to classrooms with increased social distancing due to COVID-19 is one example. Since abundant land area for new construction is not always a viable asset in these necessary evolutions, the buildings located on these campuses must be able to adapt to new occupancies quickly in order to further the success of the university. The four buildings analyzed using the system proposed were chosen based on their similar sizes and ages.
Parameters and Parameter Score
Eight physical features (parameters) are considered in the proposed scoring system. These parameters are similar to the DBA strategies cited in the literature (Sect. 1.2) and observed in the qualitative data collected by Becker et al. (Sect. 1.3). Separate scales are proposed to relate the value of each parameter to an adaptability score between 0 and 10. Individual adaptability scores are then multiplied by weighting factors, and the products are summed to determine an overall DBA score. This section discusses the parameters and their adaptability scales.
It has been theoretically argued that there is a limit to the degree to which DBA strategies should be applied [20]. For example, just because reserve structural strength can increase a building's adaptability, it would be wasteful to design all buildings to the highest and most stringent structural requirements. Scores for the individual parameters reflect this notion. Most of the parameter scores have diminishing returns as the parameters increase in value. Relationships between parameter scores and values are based on the authors' professional opinions and reasoning. They are presented as a first step but are far from definitive. The authors intend to conduct additional research on this topic in the near future.
To the extent possible, relationships between parameter scores and values are continuous mathematical functions. Continuous functions are used in lieu of checklist scoring systems. In a recent conference on adaptable buildings, checklist systems were criticized for promoting "checklist fatigue," facilitating "gaming" or scores, and for dulling designer's critical thinking [20].
Structural spacing. Structural spacing is related to the open DBA strategy. Scores for this parameter are determined using Fig. 6.3. It is reasoned that spacings below 10 ft (3 m) severely restrict the types of college campus functions that could be used in such spaces. Accordingly, the scoring for structural spacing begins at 10 ft (3 m). The score increases with increasing structural spacing, with slope changes at 30 ft (9 m) and at 60 ft (18 m). The 30 ft spacing is based on the size of a typical classroom. After a structural spacing of 60 ft, the score remains constant because 60 ft is large enough to accommodate most campus functions.
Floor-to-floor height. The floor-to-floor height used to determine the adaptability score is taken as the average for the building. It is calculated as the elevation difference between the top of the first floor and the top of the roof structure divided by the number of stories.
Wall deconstructability. Wall deconstructability refers to how easy it is to remove an interior wall [21]. Adaptability increases as walls are easier to deconstruct. The schedule in Table 6.2 lists the deconstructability score associated with different wall types. Bearing walls are considered the hardest to remove and are assigned a score of 0. Non-bearing walls are easier to remove and have higher scores. The highest score is for "removable" walls that are intentionally detailed to facilitate removal. The wall deconstructability score is based on the length-weighted average deconstructability score across all interior walls in a building (Eq. 6.1). For example, the WFIC (Fig. 6.2) has a combination of bearing, light, and removable walls and has a wall deconstructability score of 7.5. Wall deconstructability is associated with the layer and open DBA strategies.
WDAS = Σ_j (L_j × D_j) / Σ_j L_j    (6.1)
where: WDAS = wall deconstructability adaptability score; L_j = total length of wall type j; D_j = deconstructability score of wall type j; j = index for wall type.
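A short sketch of the length-weighted average in Eq. 6.1; the wall lengths and per-type scores used here are illustrative assumptions, not values from the paper's Table 6.2.

```python
def wall_deconstructability_score(walls):
    """walls: list of (total_length, deconstructability_score) pairs per wall type."""
    total_length = sum(length for length, _ in walls)
    return sum(length * score for length, score in walls) / total_length

# Hypothetical mix of bearing (0), light non-bearing (8) and removable (10) walls
example_walls = [(40.0, 0), (120.0, 8), (90.0, 10)]
print(round(wall_deconstructability_score(example_walls), 1))  # 7.4 for this mix
```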
HVAC accessibility. Accessibility refers to how readily an HVAC system can be inspected, updated or modified. This parameter is related to the layer DBA strategy; HVAC systems that are highly integrated with or embedded in other building layers tend to be more difficult to adapt. The adaptability score for this parameter is more subjective than for the other parameters. Systems with embedded/rigid designs have a score of 0 while fully exposed/flexible designs have a score of 10. Scores given to the buildings in Fig. 6.2 are demonstrative. The WFIC is given a score of 8. It has raised floors that house the HVAC ductwork. Segments of the floor can be easily removed to inspect, replace or modify the ductwork. Lee Hall is given a score of 6. The ground floor of Lee Hall has hydronic heating tubes embedded in a concrete slab-on-grade. Ductwork for cooling is fully exposed below the upper floor and roof structure. The score for Lee Hall reflects the lack of accessibility of the in-slab heating, on the one hand, and the positive accessibility of the ductwork on the other. HVAC systems for Stadium Suites and the ASC are typical of many buildings on the Clemson campus. HVAC ducts and chases are in wall/ceiling cavities that are covered by gypsum board. This condition is assigned a 5 and is considered a typical level of HVAC accessibility.
Design live load. Design live load is associated with the reserve DBA strategy. Standard 7 from the American Society of Civil Engineers [22] lists uniform design live loads between 20 and 300 psf (1-14 kN/m 2 ) for different occupancies. Live loads for most college campus occupancies fall between 20 and 100 psf, and these values form the first segment of the adaptability scale for live loads shown in Fig. 6.5. Live loads over 100 psf have increasing adaptability scores, but with diminishing returns (lower slope on figure) because design loads over 100 psf are only needed for special conditions such as data centers and libraries. Different design live loads are typically applied across different portions of a building. In these situations, the weighted average design live load is used to determine the adaptability score. For example, the majority of areas in the Stadium Suites building is designed for 40 psf. Common rooms and corridors have higher design loads. The weighted average is 47.5 psf; therefore, the design live load adaptability score is 3.8. Plan depth. The percentage of a floor plate area that is within 12' (3.7 m) of an exterior wall is an indicator of the plan depth strategy. A relatively skinny building has low plan depth and high percentage of area close to exterior walls. A "big box" store is an example of a building with high plan depth and a corresponding low percentage of space near exterior walls. While interior spaces in "big box" buildings can be adapted for a variety of uses, experts from the Becker et al. [14] study preferred shallower plans. This is because shallow plans provide greater proximity to exterior walls and windows which is desirable for many college campus occupancies. Shallow plan depths facilitate more occupancies making them more adaptable.
The scale for determining the plan depth adaptability score is shown in Fig. 6.6. A score of 0 is associated with large plan depths in which 10% or less of the floor area is within 12' of exterior walls. The score increases with increasing percentage up to a peak at 50%. Scores decrease for percentages above 50% as the plan becomes "too thin." When 100% of the plan area is within 12' of the exterior, the plan depth is 24 ft (7.3 m). Such plans can facilitate a limited number of campus occupancies and are assigned an adaptability score of 6.0.
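To make the two scales above concrete, here is a small Python sketch that computes a weighted-average design live load and maps it, together with a plan-depth percentage, onto adaptability scores. The exact breakpoints and slopes of the curves in Figs. 6.5 and 6.6 are not reproduced in the text, so the piecewise functions below are illustrative assumptions calibrated only to the worked values mentioned (47.5 psf giving a score of 3.8, and 100% of area within 12' giving a score of 6.0); the zone areas are invented.

# Illustrative sketch only: the breakpoints below approximate Figs. 6.5 and 6.6
# and are assumptions, not the published scales.

def weighted_average_live_load(zones):
    """zones: list of (area_sqft, design_live_load_psf) tuples."""
    total_area = sum(a for a, _ in zones)
    return sum(a * q for a, q in zones) / total_area

def live_load_score(psf):
    # First segment (20-100 psf) mapped linearly so that 47.5 psf -> 3.8;
    # above 100 psf the slope is reduced (diminishing returns).
    if psf <= 100:
        return min(10.0, 0.08 * psf)
    return min(10.0, 8.0 + 0.01 * (psf - 100))

def plan_depth_score(pct_within_12ft):
    # Assumed shape of Fig. 6.6: 0 at <=10%, peak (taken here as 10) at 50%, 6 at 100%.
    if pct_within_12ft <= 10:
        return 0.0
    if pct_within_12ft <= 50:
        return 10.0 * (pct_within_12ft - 10) / 40.0
    return 10.0 - 4.0 * (pct_within_12ft - 50) / 50.0

# Stadium Suites example: mostly 40 psf, with heavier common rooms/corridors.
zones = [(8000, 40), (2000, 77.5)]                    # hypothetical areas (sq ft)
avg = weighted_average_live_load(zones)               # 47.5 psf
print(round(avg, 1), round(live_load_score(avg), 1))  # 47.5 3.8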
Orthogonal walls. The adaptability score for this parameter is linearly related to the percentage of walls in a building that are oriented in orthogonal directions (Fig. 6.7). In the Stadium Suites building, there are diagonal wall segments that form the corner tower (Fig. 6.2). The remaining 90% of walls are orthogonal, which results in an adaptability score of 9. Orthogonal walls are taken as an indicator of the simple DBA strategy.

Stacking floor plates. Stacking floor plates are also related to the simple DBA strategy; stacking floor plates are indicators of simple structures and details. This parameter is calculated as the overall percentage of floor plate areas in a building that match. This percentage is linearly related to the floor plate adaptability score using the same scale as the orthogonal wall adaptability score (Fig. 6.7). An example of this indicator is found in the WFIC, in which floor plates get smaller with each story. The floor plate adaptability score is 6 because 60% of the floor plate area stacks. While this indicator is very simple to calculate and apply, the authors are currently considering more rigorous methods for calculating stackability. From an adaptability perspective, it is reasoned that some portions of buildings (e.g., plumbing chases) are more critical to stack vertically than others. More rigorous models could consider which portions of a building stack.
Overall Adaptability Score
Adaptability scores for the individual parameters are aggregated to determine the overall adaptability score. This is done by multiplying each parameter score by a weighting factor representative of its level of importance and then summing the products:

OAS = Σ_i (PW_i × PAS_i)

where: OAS = overall adaptability score; PW_i = parameter weighting factor; PAS_i = parameter adaptability score; i = index for parameters.
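A minimal sketch of this aggregation step is shown below. The parameter names and weighting factors are placeholders (the actual weights come from Table 6.3 and are not reproduced here); the only constraints taken from the text are that each parameter score lies on a 0-10 scale, the weights sum to 1.0, and simple parameters such as orthogonal walls map a percentage linearly to a score.

# Hypothetical weights standing in for Table 6.3; they must sum to 1.0.
WEIGHTS = {
    "floor_to_floor_height": 0.15,
    "wall_deconstructability": 0.15,
    "hvac_accessibility": 0.10,
    "design_live_load": 0.10,
    "plan_depth": 0.15,
    "orthogonal_walls": 0.10,
    "stacking_floor_plates": 0.10,
    "eighth_parameter": 0.15,   # placeholder for the remaining parameter
}

def overall_adaptability_score(parameter_scores, weights=WEIGHTS):
    """parameter_scores: dict of parameter -> adaptability score on a 0-10 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1.0"
    return sum(weights[p] * parameter_scores[p] for p in weights)

def linear_score(percentage):
    """Simple linear parameters (orthogonal walls, stacking plates): 90% -> 9.0."""
    return percentage / 10.0

scores = {p: 5.0 for p in WEIGHTS}          # placeholder parameter scores
print(overall_adaptability_score(scores))   # 5.0 for uniform scores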
Parameter weighting factors are based on the qualitative data collected from building professionals in Becker et al. [14]. The professionals listed physical features of the subject buildings that would facilitate or impede adaptation. Parameters in the model were assigned to the most similar physical features mentioned by the professionals. Parameters (features) that were more frequently listed are assigned higher weights than those listed less frequently (Table 6.3). The parameter weighting factors are set such that they sum to 1.0. Hence, the overall adaptability score ranges from 0 to 10.
Comparison of Scoring System and AHP Study
Overall adaptability scores for the four subject buildings were calculated (Table 6.4) and were compared to the results of the Analytic Hierarchy Process (AHP) study presented by Becker et al. [14]. The scoring system and the AHP study resulted in the same rank order from most to least adaptable. As seen in Fig. 6.8, there is a high degree of linear correlation (R² = 0.84) between the scoring system and the results of the AHP study. The favorable comparison is encouraging and suggests that the proposed scoring system may have practical value for measuring and comparing the adaptability of college campus buildings (in Fig. 6.8, the overall adaptability score is plotted against the score from the AHP study). Caution is advised, however, as the comparison with the AHP study provides only a limited degree of validation.
Summary and Conclusions
A system is proposed for scoring and comparing the design-based adaptability (DBA, the portion of adaptability associated with a building's physical design) of college campus buildings. The system considers eight different physical parameters, such as floor-to-floor height and design live load, which can be readily measured. Adaptability scores for the individual parameters are aggregated to determine a building's overall adaptability score. The system is intended as an aid for designing new buildings for adaptability and also for evaluating the adaptability of existing buildings. Four case study buildings from the Clemson University campus were used to evaluate the proposed scoring system. DBA of these same buildings has previously been quantitatively determined by Becker et al. [14] using the Analytic Hierarchy Process (AHP). Results from the proposed scoring system and the earlier AHP study are in good agreement (R² = 0.84). While these results are encouraging, more research on a larger, more diverse group of buildings is recommended to further develop and validate the proposed system. Traditional office buildings and multi-family residential buildings could be a starting point for a new group to test. | 2020-12-10T09:06:48.073Z | 2020-12-08T00:00:00.000 | {
"year": 2020,
"sha1": "940a245f6b37d988ec93e6fe1258fdafa199aea3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "88acf32737f8a00dea7de370924b7ee1117a063b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
4719858 | pes2o/s2orc | v3-fos-license | Spatial correlation of elastic heterogeneity tunes the deformation behavior of metallic glasses
Metallic glasses (MGs) possess remarkably high strength but often display only minimal tensile ductility due to the formation of catastrophic shear bands. Purposely enhancing the inherent heterogeneity to promote distributed flow offers new possibilities in improving the ductility of monolithic MGs. Here, we report the effect of the spatial heterogeneity of elasticity, resulting from the inherently inhomogeneous amorphous structures, on the deformation behavior of MGs, specifically focusing on the ductility using multiscale modeling methods. A highly heterogeneous, Gaussian-type shear modulus distribution at the nanoscale is revealed by atomistic simulations in Cu64Zr36 MGs, in which the soft population of the distribution exhibits a marked propensity to undergo the inelastic shear transformation. By employing a mesoscale shear transformation zone dynamics model, we find that the organization of such nanometer-scale shear transformation events into shear-band patterns is dependent on the spatial heterogeneity of the local shear moduli. A critical spatial correlation length of elastic heterogeneity is identified for the simulated MGs to achieve the best tensile ductility, which is associated with a transition of shear-band formation mechanisms, from stress-dictated nucleation and growth to structure-dictated strain percolation, as well as a saturation of elastically soft sites participating in the plastic flow. This discovery is important for the fundamental understanding of the role of spatial heterogeneity in influencing the deformation behavior of MGs. We believe that this can facilitate the design and development of new ductile monolithic MGs by a process of tuning the inherent heterogeneity to achieve enhanced ductility in these high-strength metallic alloys. Simulations shed light on the connection between elastically soft spots and the deformation of metallic glasses. Amorphous materials generally suffer from low ductility and fatigue strength: relatively low strain activates shear bands that quickly lead to failure. Now a team from the University of Alabama, Lawrence Berkeley National Laboratory and the University of California, investigate computationally the connection between the intrinsic heterogeneity of metallic glasses (and in particular Cu64Zr36) and the formation of shear bands. As the spatial correlation of the heterogeneity increases, and the regions of elastically soft and hard spots expand, the authors observe a transition from stress-driven nucleation and growth of the shear bands (resulting in increased ductility) to strain percolation, initiated in the soft spots (leading to reduced ductility as the number of soft spots has decreased). Combined with ways to control the number and distribution of soft spots, this insight could help improve the ductility of metallic glasses.
INTRODUCTION
Metallic glasses (MGs) derive their remarkably high strength from their unique amorphous structure, yet their mechanical properties are often compromised by very limited tensile ductility due to severe strain localization in the form of the shear banding. To tackle the major challenge of ductility in high-strength MGs, structural and mechanical heterogeneities in MGs are desired, which could delocalize strain and suppress shear band instabilities. [1][2][3] A conventional engineering route to achieve microstructural heterogeneity is the design of MG-matrix composites by incorporating in situ or ex situ second (or multiple) crystalline phases. 4,5 Recently, a new design paradigm, namely the four R's, has been proposed to achieve ductility via intentionally tailoring the internal structure of the MG itself. 3 The four R's stands for retaining more soft spots (i.e., clusters of atoms that strongly participate in soft vibrational modes 6 ) directly from parent liquid, restraining shear-band propagation and nucleation, rejuvenating glass structure into a more deformable state, and relocating to alloy compositions rich in structural inhomogeneity. Centered on the four R's is the hypothesis that a monolithic MG, though macroscopically uniform as a single phase, has, in fact, an inherently inhomogeneous amorphous structure that may be tuned.
The heterogeneous amorphous structure and properties of MGs at the nanoscale have been revealed by both modeling and experimental works. [6][7][8][9][10][11][12][13] Using molecular dynamics (MD) simulations, Ding et al. 3,7 discovered two types of motifs in the Cu 64 Zr 36 MGs, icosahedral short-range order (ISRO) and loosely packed "geometrically unfavored motifs" (GUMs). GUMs are usually elastically 'soft spots' and are most likely to serve as fertile sites for inelastic shear transformations, giving rise to both structural and mechanical heterogeneities. Using the potential energy landscape model, Fan et al. 8,9 derived the atomic-level shear modulus distribution of Cu 56 Zr 44 and revealed that the mechanical instability is due to the aggregation of atoms with low local shear moduli. Furthermore, Wagner et al. 12 utilized the atomic force acoustic microscopy to explore the indentation modulus of PdCuSi, which was found to display a Gaussian-like distribution when the cantilever tip was 10 nm. Liu et al. 11 applied the dynamic force microscopy to measure the phase shift of Zr 55 Cu 30 Ni 5 Al 10 at the nanoscale. A spatial distribution of the phase shift was obtained as a consequence of heterogeneity in local viscoelasticity; in particular, a correlation length of 2.5 nm was identified.
Such nanoscale heterogeneity in MGs focuses on the critical role of the "soft spots" 6 that likely undergo stress-driven shear transformation to induce plastic flow. To create ductility, soft spots need to be coordinated over a larger scale, such that the control of heterogeneity has to be translated from the nanometer-scale shear transformation zones (STZs) to their organization into near macroscopic shear-band patterns. 3 One way to enhance the heterogeneity is to tune the population by proliferating soft spots through changes in the thermal history, 14 rejuvenating MG structure through thermomechanical cycling, 15,16 or irradiating MGs into more deformable states. 17 A recent continuum model by Wang et al. 18,19 proposed a bimodal dispersion of free volume that could result in work hardening and an increase in the free volume gradient that could give rise to a synergy of strength and toughness. In addition, Zhao et al. 20,21 also predicted the improvement in ductility by incorporating medium-range ordering (MRO) information into a mesoscale simulation model. Although these experimental and modeling results are encouraging, the specific details are still largely missing concerning the mechanistic details at the STZs' level and their collective operation into shear-band patterns in the presence of nanoscale heterogeneity. Part of the reason for this is that the time and length scales of individual STZs are too limited either for current experimental resolution or for a continuum mesh to capture directly.
In this work, we employ multiscale modeling techniques to investigate the effect of the spatial heterogeneity of elasticity on the deformation behaviors of the MG, using a Cu 64 Zr 36 MG as the representative system where the local shear moduli display a Gaussian distribution that is directly adopted from atomistic simulations. Explicitly, we use a mesoscale STZ dynamics model to characterize the spatiotemporal correlation of STZs into shear bands in the simulated MGs regulated by various spatial correlations of local shear moduli. We identify a critical correlation length at which the simulated Cu-Zr glass exhibits the best ductility with the largest strain to failure as well as the least propensity for strain localization. On examination of the initial stage of the shear-banding process at various correlation lengths, we observe a transition in the mechanisms of shear-band formation from stress-dominated nucleation and growth to structure-controlled strain percolation, accompanied by saturation of elastically soft sites participating in the plastic flow. Our discussion of these results is focused on determining the correlation between shear-band patterns and soft cluster characteristics extracted from cluster analysis. A hypothesis of critical cluster size for shear-band nucleation is proposed to explain the simulated behavior, together with its implications for developing improved mechanical properties in MGs.
Nanoscale elastic heterogeneity
The nanoscale elastic heterogeneity is identified using MD simulations of the model Cu64Zr36 MGs, as shown in Fig. 1a, b. The local shear moduli for two model MGs with different cooling rates (10⁹ K/s and 10¹¹ K/s) are computed by a fluctuation method. [22][23][24] At 300 K, the shear moduli in both models exhibit a Gaussian distribution. The MD simulations reveal that the distribution of the local shear moduli is sensitive to the thermal history of the model MG. Fig. 1a shows the distribution of local shear moduli with a sub-cell size of 8 × 8 Å. As the cooling rate increases from 10⁹ K/s to 10¹¹ K/s, the mean value of the Gaussian distribution decreases from 25.9 to 20 GPa, whereas the standard deviation (4.44 GPa) remains almost unchanged. Additionally, Fig. 1b illustrates the coarse-grained map of atomic elastic moduli at the cooling rate of 10⁹ K/s; a pronounced spatial heterogeneity is observed, with the red and blue colors representing the high (~50 GPa) and low (~15 GPa) local elastic regions, respectively. Our MD simulations, however, did not capture the variation of the spatial heterogeneity of the elastic constants with respect to MG thermal history, while such variation has been reported experimentally upon thermomechanical processes. 16,25 Another limitation of MD simulations for studying the elastic heterogeneity problem is the much smaller spatial correlation length (~1-1.5 nm, as shown in Fig. 1b) compared to the reported experimental values in MGs. 11 Such discrepancy could be attributed to the inherent limitations of MD simulations, in which fast and uniform cooling rates are adopted to obtain an MG sample with a typical size of ~5 nm.

Fig. 1 The spatial heterogeneity of elastic property at the nanoscale. (a) The Gaussian distributions of local shear moduli of Cu64Zr36 MG obtained by MD simulations under two cooling rates (10¹¹ K/s in blue and 10⁹ K/s in black) and used in the STZ dynamics simulations (in red). (b) A coarse-grained map of the distribution of local shear moduli obtained by MD simulations at the cooling rate of 10⁹ K/s. (c) The spatial distributions of local shear moduli in the samples of STZ dynamics simulations with correlation lengths of 0.5 and 5 nm, respectively. (d) The correlation coefficient as a function of distance (see Eq. 1) for the samples with correlation lengths of ξ = 0.5, 3, 5, and 7 nm, respectively; the correlation length is defined as the distance r = ξ at which the correlation coefficient decreases to e⁻³ ≈ 0.05.
To further establish a connection between the nanoscale elastic heterogeneity and MG deformation at large scale, we employ a mesoscale STZ dynamics model. The STZ dynamics model coarse-grains a collection of atoms into an STZ as the fundamental deformation unit defined on a finite element (FE) mesh, and controls the stochastic activation of STZs based on the energetics of the system using a kinetic Monte Carlo (kMC) algorithm. The combination of these two features enables access to experimentally relevant time and length scales while preserving a microscopic view of the processes of MG deformation. The shear modulus distribution used in the STZ dynamics simulation is displayed as the red curve in Fig. 1a, taking the Gaussian form identified by the MD simulations with the same standard deviation of 4.44 GPa. The average value of the shear modulus is adjusted to 31 GPa by fitting the experimental stress-strain response of Cu64Zr36. 26 Such an adjustment can be rationalized by the fact that the average value of the shear modulus is very sensitive to the thermal history of MGs, 27 as evidenced by our MD results.
The spatial heterogeneity of elasticity in the STZ dynamics simulation is tuned by achieving a specific correlation length of the local shear moduli. Following the definition of a spatial autocorrelation function, 28 the correlation coefficient of local shear moduli, ρ, is expressed as:

ρ(r) = ⟨(µ_{r0} − µ)(µ_{r0+r} − µ)⟩ / ⟨(µ_{r0} − µ)²⟩    (1)

where µ denotes the average shear modulus; µ_{r0} is the shear modulus of a reference position r0; and µ_{r0+r} is the shear modulus at the position located a distance r from the reference. The correlation coefficient, ρ, is calculated from all appropriate pairs that are a distance r apart from each other. In our simulated MG samples, the correlation coefficient, ρ, decays exponentially with respect to the distance r, as shown in Fig. 1d. We adopt an exponential covariance function ρ(r) = e^{−3r/ξ}, 29 used in spatial data analysis, to describe the decay, and ξ is the correlation length defined to be the smallest distance beyond which the correlation coefficient ρ is less than 0.05. Furthermore, to obtain the spatial distribution of local shear moduli with a specific correlation length ξ, a reverse Monte Carlo (RMC) method was employed. 30 Figure 1c illustrates the map of local shear modulus distributions within the simulation samples with ξ = 0.5 and 5 nm, respectively. Clearly, as the spatial correlation length increases, both elastically soft and hard sites aggregate, giving rise to larger clustering regions with similar local shear moduli.
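The following Python sketch illustrates how the correlation coefficient of Eq. (1) and the correlation length ξ could be estimated from a 2D map of local shear moduli. The grid, cell spacing and modulus field used here are synthetic stand-ins, not the RMC-generated samples of the study.

import numpy as np

def correlation_coefficient(modulus_map, max_shift):
    """Estimate rho(r) of Eq. (1) along the x-direction of a 2D modulus map."""
    mu = modulus_map.mean()
    fluct = modulus_map - mu
    var = (fluct ** 2).mean()
    rho = []
    for r in range(1, max_shift + 1):
        # average over all pairs of cells r cells apart along x
        prod = fluct[:, :-r] * fluct[:, r:]
        rho.append(prod.mean() / var)
    return np.array(rho)

def correlation_length(rho, cell_size):
    """Smallest distance at which rho falls below e^-3 ~ 0.05."""
    below = np.where(rho < np.exp(-3))[0]
    return (below[0] + 1) * cell_size if below.size else None

# Synthetic example: uncorrelated Gaussian moduli (mean 31 GPa, s.d. 4.44 GPa)
rng = np.random.default_rng(0)
field = rng.normal(31.0, 4.44, size=(100, 300))
rho = correlation_coefficient(field, max_shift=20)
print(correlation_length(rho, cell_size=0.5))  # ~one cell (0.5) for an uncorrelated field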
Macroscopic stress-strain responses in the presence of nanoscale elastic heterogeneity

The spatial correlation of elastic heterogeneity significantly affects the macroscopic deformation behaviors of MGs. Fig. 2a illustrates the tensile stress-strain response of the simulated MGs with various correlation lengths. A transition from stress-overshoot to elastic-nearly perfect plastic flow is observed as the spatial correlation length increases. Correspondingly, the deformation morphology changes from a single dominant shear band to the formation of multiple shear bands, as shown in Fig. 2b. For the MG with ξ = 0.5 nm, the significant stress overshoot followed by an abrupt stress drop after yielding is a signature of substantial strain localization in the form of a single dominant shear band. As the correlation length increases to 2 nm, the extent of the stress overshoot is reduced. Correspondingly, the primary shear band becomes less severe as several other strain localization paths start to develop. Once the correlation length reaches 5 nm or larger, the stress overshoot disappears, and the simulated MG exhibits elastic-nearly perfect plastic flow. Meanwhile, the corresponding deformation morphology is characterized by the formation of multiple shear bands with a reduced localized strain in each band. It is not surprising that the increase in the correlation length can alleviate strain localization by forming multiple shear bands, although this effect starts diminishing above a critical correlation length (i.e., 5 nm in this study). For instance, for the MGs with ξ = 7 nm, the extent of strain localization of the most "matured" band is more severe than that in the MGs with ξ = 5 nm. Fig. 2c compiles the data of the yield stresses and the flow stresses vs. spatial correlation lengths for all the MGs (including five independent configurations for each correlation length). Notably, the effect of spatial correlation on the strengths of the MGs, specifically the yield and flow stresses, can be seen to exhibit two regimes. In Regime I, the strength of the MGs is reduced as the spatial correlation increases, whereas in Regime II, the MG strength remains essentially unchanged. The weakening effect in Regime I reflects the increased participation of elastically soft sites during plastic deformation when the spatial correlation is enhanced. In Regime II, starting at the correlation length ξ = 5 nm, the fact that the strength remains roughly constant indicates a critical condition at which the participation ratio of soft sites to carry plastic flow may saturate.
Correspondingly, the dependence of the MG ductility, evaluated by the plastic strain to failure, 31 on the spatial correlation ξ also exhibits two regimes, i.e., increasing with increasing ξ in Regime I vs. decreasing with increasing ξ in Regime II. This non-monotonic trend gives rise to the largest plastic strain to failure = 1.037% at the critical correlation length ξ = 5 nm, as shown in Fig. 2d. It should be noted that the current STZ dynamics model has no mechanism for failure; the plastic strain to failure is, therefore, a proxy variable selected to be the macroscopic plastic strain at which the magnitude of local accumulated von Mises strain of an element reaches 0.325 based on a previous study. 31 The resultant increasing trend in Regime I is expected, as the enhanced spatial correlation can lead to a larger population of soft sites that undergo plastic flow, promoting the generation of multiple strainlocalization paths (Fig. 2b). The decreasing trend in Regime II implies that when the critical condition is reached, i.e., the soft population contributing to plastic flow saturates, further increase in spatial correlation could result in strain localization, rather than distributing flow, which is of course highly detrimental to ductility. In this regard, we can quantify the magnitude of strain localization using a microscopic strain localization index, as proposed in reference. 32 This index is defined as the standard deviation of the local von Mises strain of all the elements at a macroscopic strain of 0.04, i.e., reflecting the heterogeneity of microscopic plasticity. As shown in Fig. 2d, the microscopic strain localization index first decreases in Regime I, indicating strain delocalization and homogeneous flow; after reaching the critical correlation length, the index reverses this trend and starts to increase in Regime II, thereby signifying enhanced strain localization.
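As a concrete illustration of the two metrics defined above, the short sketch below computes the plastic-strain-to-failure proxy (the macroscopic plastic strain at which any element's accumulated von Mises strain reaches 0.325) and the microscopic strain localization index (the standard deviation of the per-element von Mises strains at 4% macroscopic strain). The array shapes and data are placeholders; only the threshold and strain level come from the text.

import numpy as np

def strain_localization_index(local_vm_strains):
    """Standard deviation of per-element von Mises strains at 4% macroscopic strain."""
    return float(np.std(local_vm_strains))

def plastic_strain_to_failure(macro_plastic_strain, local_vm_history, threshold=0.325):
    """macro_plastic_strain: (T,) macroscopic plastic strain per output step.
    local_vm_history: (T, n_elements) accumulated local von Mises strain.
    Returns the macroscopic plastic strain at which any element first reaches the threshold."""
    reached = np.where(local_vm_history.max(axis=1) >= threshold)[0]
    return float(macro_plastic_strain[reached[0]]) if reached.size else None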
The process of shear band formation at various correlation lengths ξ

Next, we delve into the details of the process of shear-band formation and its correlation with elastic heterogeneity. In particular, we choose three cases in Fig. 3: ξ = 0.5 nm, i.e., below the critical correlation length; ξ = 5 nm, i.e., at the critical correlation length; and ξ = 7 nm, i.e., above the critical correlation length. In each of the three cases, the macroscopic stress-strain response is presented, along with the deformation morphology mapped on top of the local shear modulus distribution at the critical moments, for the simulated samples subjected to tensile deformation. To connect with the elastic heterogeneity, the evolution of the participation ratio of the soft sites that have undergone plastic flow is also investigated. At a selected moment, the soft participation ratio is calculated as the number of soft sites that have undergone STZ activation divided by the total number of STZ activated sites. The soft population is taken as the portion of the local sites that have a shear modulus less than the average value of 31 GPa for the sample. Additionally, the shear modulus distributions of the local sites undergoing STZ activity are plotted in comparison to the whole sample distribution, to examine the elastic distribution profile of the activated STZs.
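The soft participation ratio defined above can be written compactly, as in the sketch below; the input arrays are placeholders, with 31 GPa used as the sample-average shear modulus separating "soft" from "hard" sites, as in the text.

import numpy as np

def soft_participation_ratio(site_moduli, activated, mean_modulus=31.0):
    """site_moduli: (n_sites,) local shear moduli in GPa.
    activated: boolean mask of sites that have undergone STZ activation.
    Returns the fraction of activated sites whose modulus is below the sample mean."""
    activated_moduli = site_moduli[activated]
    if activated_moduli.size == 0:
        return None
    return float((activated_moduli < mean_modulus).mean())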
Shear-band formation at the weak correlation of elastic heterogeneity

For the case of the correlation length ξ = 0.5 nm, one dominant shear band is formed even in the presence of elastic heterogeneity, as shown in Fig. 3c. The stress-strain response exhibits a characteristic stress overshoot: immediately after yielding at moment 0, i.e., at the onset of plastic flow, the macroscopic stress continues to climb to a peak at moment 1. Correspondingly, the strain localization is initiated randomly and produces a shear band embryo at moment 1. Upon the formation of the embryo, the stress drops (i.e., from moment 1 to 2) and the shear front propagates progressively along the maximum shear stress path to form an incipient shear band across the sample. As the deformation proceeds, the incipient shear band continues to slide, accompanied by its thickening, with the stress relaxing to a flow value (i.e., from moment 2 to 3).
To assess the effect of elastic heterogeneity, the evolution of the soft participation ratio with the tensile strain is plotted in Fig. 3b. In the very early stage of shear band nucleation, e.g., from moment 0 to 1, the soft participation ratio is larger than 75%, such that the soft sites are statistically preferred for STZ activation. From moment 1 to 2 to 3, at the onset of incipient shear-band propagation and growth, the soft participation ratio drops rapidly, as the newly activated STZs become equally likely to be generated from soft or hard sites. This observation is further evidenced by the distributions of the local shear moduli of the activated STZ sites at the three moments, by comparison to the shear modulus distribution of the entire simulated MGs, as displayed in Fig. 3d. At moment 3 in particular, the shear modulus distribution of the activated STZ sites reproduces the probability distribution of the entire sample, showing no preference to the soft sites. Thus, in the case of ξ = 0.5 nm, even though the existence of structural heterogeneity offers a reasonable amount of elastically soft sites that could facilitate strain delocalization, the spatial correlation is not strong enough to do so. Therefore, the concentrated stress in front of the shear band embryo drives the rapid growth of one dominant shear band.
Shear-band formation at the critical correlation length ξ

For the MGs with a correlation length of 5 nm, multiple shear bands are formed to generate plastic flow. Upon yielding at moment 1, the STZs are triggered in several soft sites and evolve into shear band embryos (Fig. 3g). Upon further straining to moment 2, the embryos penetrate the surrounding regions clustered with relatively low shear moduli and form several incipient shear bands. Eventually, the incipient bands connect with each other and develop a percolated strain path throughout the sample to conduct steady plastic flow. Over the course of shear-band formation, the stress barely changes after yielding, such that the flow stress remains close to the yield stress in the stress-strain response (Fig. 3e).
On examination of the participation of local sites in the process of plastic flow, the STZ activation is dominated by the soft sites in the early stages of the formation of shear-band embryos (Fig. 3f), resulting in a soft participation ratio exceeding 95%. When the embryos grow from the very soft regions and connect with each other to form the incipient bands (i.e., from moment 2 to 3), the soft participation ratio slightly drops, as the strain percolation process starts to incorporate harder sites (Fig. 3h). Nevertheless, over the entire shear-band formation process, the soft sites play a truly dominant role in the nucleation and growth of shear bands, i.e., in the very fundamental essence of plastic deformation and ductility in MGs. Therefore, when the spatial correlation of elastic heterogeneity is strong enough (i.e., ξ = 5 nm), the plastic deformation of the MG is controlled by nucleating multiple shear-band embryos to distribute plastic flow.
Shear-band formation at the strong correlation of elastic heterogeneity

Above the critical correlation length, the MG still deforms by forming multiple shear bands, but the number of the bands is reduced. Akin to the case with the critical correlation length of ξ = 5 nm, the stress-strain response of the MG with ξ = 7 nm features elastic-nearly perfect plastic flow and the soft sites predominantly control the process of shear-band formation, providing large soft regions to nucleate shear-band embryos. The difference comes from the decrease in the number of shear bands. It is anticipated that the further increase of spatial correlation results in the aggregation of soft regions/clusters enlarged in size but reduced in number. Consequently, fewer soft clusters are now available for the nucleation of shear-band embryos. To accommodate the same amount of plastic flow, the extent of strain localization in each band has to become more severe (Fig. 3k). Additionally, it seems that the large spatial correlation increases the probability of developing a percolated soft path throughout the sample, from which a primary shear band is likely to be formed.
A transition in shear-band formation mechanisms
The spatial correlation of elastic heterogeneity can control the organization of STZs into shear-band patterns, leading to a transition in shear-band formation from one of stress-dictated nucleation and growth to structure-dictated strain percolation. We can review the previous theories of shear band formation and propagation in MGs. As summarized by Greer, 33 three main viewpoints are most prominent on how shear bands form and propagate from the collective activities of STZs. First, the shear bands are considered as a percolating boundary that reaches a critical concentration of STZs before initiating simultaneous slip along the plane of highest resolved shear stress. 34 Second, shear bands are modeled as a propagating zone of rejuvenated glass, followed by a zone of glue-like material, and subsequently by liquid material, as adiabatic heating decreases the local strength. 35 Third, a two-step process, with a shear band nucleating from a small cluster of STZs and propagating quickly through the sample before initiating simultaneous slip. 36,37 In our simulations, when the correlation length (i.e., ξ < 5 nm) is small, the shear-banding process is reminiscent of the two-step model, in which the stress drives the formation/nucleation of a shear-band embryo before it rapidly grows into a dominant band. The nucleation and growth stages are well distinguished, coming in sequence over the course of deformation (Fig. 3c). The nucleation stage prefers the soft sites but the propagation exhibits no preference, involving both soft and hard sites driven by the concentrated local stress field. On the other hand, when the spatial correlation length is large (i.e., ξ > 5 nm), the two-step process of forming one band is not distinct and might disappear. Instead, several shear-band embryos nucleate at elastically soft sites. The formation of the shear bands occurs by process of connecting/percolating of the shear-band embryos. This process is more like the first model, as the shear bands are initiated from the inherent variation of the local shear moduli and formed via strain percolation. 33 Different shear-band morphologies and patterns also emerge due to the transition in the formation mechanisms. When the spatial correlation of the heterogeneity is weak, the stress plays a key role in the nucleation and growth of shear bands and straight shear bands are usually observed along the maximum shear paths (Fig. 3c). As the spatial heterogeneity increases, the shear bands display a non-straight configuration, with a variation in thickness along the line direction and indications of branching on propagation (Fig. 3g, k), features that have been reported experimentally. 38,39 For instance, Qu, et al. showed that the brittle titanium-based MG formed one straight shear band while a ductile Pd-based MG formed several curved shear bands in the early stages of plastic deformation. Though our model employs different sample sizes and the relaxation dynamics compared to the experiments, 38-40 our simulation results clearly reveal that the spatial correlation of the elastic heterogeneity at the nanoscale can be an important factor that significantly influences the shearbanding process and resultant shear-band patterns, which is consistent with the mechanistic insight provided by several recent works at atomic scale. 
[41][42][43]

The connection between elastic heterogeneity and deformation behaviors of MGs

To establish a connection between nanoscale elastic heterogeneity and the deformation behaviors of MGs, we perform a statistical analysis of the spatial connectivity of the soft sites, termed soft clusters, in the MGs with different levels of spatial correlation, focusing on the number and size of the soft clusters. We then discuss the correlation between these soft-cluster characteristics and shear-banding features in light of the resulting mechanical performance of MGs.
The Hoshen-Kopelman (H-K) algorithm is adopted to determine the cluster characteristics in our 2-D elastically heterogeneous MG samples (see Methods), 44 the process of which is illustrated in Fig. 4a-c. Figure 4d shows the resultant soft cluster size distributions for different correlation lengths. As expected, when the correlation length increases, there is a decrease in the number of small-sized clusters and an increase in the number of larger-sized clusters. Consequently, the average cluster size increases but the total number of clusters decreases with the increase of the correlation length, as summarized in Fig. 4e. Moreover, the correlation between the STZ activation sites and the soft cluster locations is progressively enhanced. Figure 4f displays the fraction of the activated STZs within the largest soft cluster, denoted as P_span, at the total strain of 0.04, vs. the correlation length. The value of P_span 45 increases as the correlation length increases, reflecting that the STZ activations are gradually dictated by the spatial connectivity of the soft sites. Figure 5 illustrates the correlation between the spatial heterogeneity, based on the soft cluster statistics, and the resulting deformation behavior, focusing on the two regimes that exhibit different trends. It is noted that when the spatial correlation length increases, the key statistical factor that controls shear band formation changes from the size of the soft clusters to the number of the soft clusters, giving rise to the two regimes of mechanical behavior in MGs. In Regime I, the size of the soft clusters dictates the deformation behavior. As correlation length increases, the soft clusters aggregate and grow in size, facilitating the development of multiple strain-localization paths. In this regime, the shear bands form via nucleation and growth, and thus the larger the size of the clusters, the more effectively they act as nuclei for incipient shear band nucleation, delocalizing plastic flow and enhancing ductility. In Regime II, the number, rather than the size, of the soft
clusters plays the dominant role, the enhanced spatial correlation leading to the reduction in the cluster number. In this regime, the shear bands form via strain percolation initiated at the large soft regions. But fewer soft clusters are now available to generate incipient shear bands and to accommodate the same plastic flow.
The extent of strain localization in each band has to become severe, equivalently reducing the ductility of MGs. We hypothesize that the critical shear band nucleus size is an underlying cause of the transition. As shown in Fig. 3c, in the early stage of plastic flow, STZs appear and begin to cluster into shear band nuclei, which grow before forming a dominant shear band. If the spatial correlation is weak, specifically less than the critical shear band nucleus size, a larger stress is required to activate the adjoining "hard" STZs to reach the critical shear band nucleus size; as such, the smaller the correlation length, the larger the participation ratio of hard sites in creating plastic flow (i.e., the smaller the participation ratio of the soft sites), the higher the resulting strength of the MGs. If instead, the spatial correlation is strong, i.e., equal or larger than the critical shear-band nucleus size, the shear bands are ready to initiate from the soft clusters upon yielding. In this regime, further increases in correlation length do not affect the participation ratio of the soft sites, at least in the early stages of deformation, such that the strength of the MGs remains unchanged. Consequently, an initial increase in correlation length enlarges the soft cluster size, enhancing the capability of distributing flow. When the soft cluster reaches the critical size, the soft cluster number is reduced with increasing correlation, giving rise to strain localization.
It is noteworthy that the prediction of the critical correlation length is restricted by the model limitations, including the simplified glassy state without structural relaxation dynamics/kinetics, 46 the 2D simulated sample geometry 47 and the limited finite sample size, 48 all of which could affect the critical correlation length and require the further development of the multiscale model and its integration with experimental characterization. The MD simulations can investigate the evolution of the local elastic constant distribution subject to various thermomechanical stimuli and further correlate such evolution with STZ activation. The activation energy functionals of STZs that incorporate the local elastic moduli might be developed by using methods such as the nudged-elastic band (NEB) 49,50 or the activation-relaxation technique (ART), 9,51 which explore the potential energy landscape of MGs. The development of such activation energy functionals of STZs could inherently capture the nature of the disordered glassy structure and the structural dynamics of MGs. The implementation of these functionals into the STZ dynamics model will significantly improve the accuracy of the prediction of the deformation behaviors at large scales. Nevertheless, the current model could have important implications for improving the macroscale mechanical properties of MGs, specifically by enhancing the ductility of these already high-strength alloys to achieve high fracture toughness and damage tolerance. It is known that annealing/quenching, mechanical deformation, and radiation can retain or rejuvenate the deformable amorphous state and hence influence the ductility of MGs. 3 Perhaps in addition to proliferating the soft spots to enhance the population heterogeneity, these methods can be used to tune the spatial heterogeneity of the glassy structure and increase the number of shear-band nucleation sites to the point that the MG deforms in a more homogeneous manner. A recent experiment provides direct evidence of the intrinsic correlation between β-relaxation and spatial heterogeneity in an MG, and the sub-T_g enthalpy relaxation is demonstrated to be able to regulate the correlation length. 25 In addition, the critical correlation length highlights the importance of the spatial/length-scale aspects of the heterogeneous amorphous structure, which could guide the design of the optimal spacing of chemical segregation in phase-separating MGs, 52 the "grain size" in a nanoglass, 53 or even the crystalline-amorphous dual-phase nanostructure, 54 in order to optimize ductility.

Fig. 4 The spatial correlation between STZ activation and the connected soft clusters. (a-c) An example of soft cluster identification using a simulated MG with ξ = 5 nm, including (a) the 2-D sample discretization into square cells, (b) a binary classification of all the cells and (c) the H-K algorithm applied to identify the largest soft cluster in the sample. (d) The histogram of the cluster sizes for selected correlation lengths, ξ = 0.5, 3, 5, 7, and 9 nm. (e) The average cluster size in terms of area and the number of clusters vs. correlation length. (f) The fraction of the activated STZs within the largest soft cluster, P_span, vs. correlation length.
In summary, the important effect of spatial correlation of elastic heterogeneity in tuning the deformation behaviors of metallic glasses is demonstrated using multi-scale modeling techniques. At the atomistic level, MD simulations reveal a highly heterogeneous Gaussian-type distribution of shear moduli at the nanoscale; the soft spots exhibit a greater propensity to undergo STZ transitions. At mesoscale levels, an STZ dynamics simulation is used to investigate the organization of STZs into shear bands. The STZ dynamics model is extended to incorporate the elastic heterogeneity suggested by the MD simulations and to further tune the spatial correlation length of local elasticity using a reverse Monte Carlo algorithm.
A critical spatial correlation length of local shear moduli is identified, dividing the dependence of mechanical properties of MGs on the spatial correlation into two regimes. In the weak correlation Regime I, the strength decreases and ductility increases as the correlation length increases. In the strong correlation Regime II, the strength remains constant and ductility decreases as correlation length increases. Correspondingly, a transition of shear band formation mechanisms is observed. At weak correlations, the shear band forms by stress-driven nucleation and growth, associated with a stress overshoot on the stress-strain responses and characterized by straight shear bands along paths of maximum shear. At strong correlations, the formation of shear bands is controlled by the percolation of soft regions in the heterogeneous glassy structure. The shear bands are now characterized by non-straight bands, which show a variation in thickness along the line directions and evidence of branching upon propagation.
By relating the characteristics of the soft clusters with deformation behaviors, we hypothesize that the critical shear band nucleus size is an underlying cause of the transition. Below the critical size, the increase in correlation length enlarges the size of soft clusters, facilitating the nucleation of incipient shear bands, delocalizing plastic flow and enhancing ductility. Above the critical size, the increase in correlation length decreases the number of soft clusters, limiting strain percolation paths to form shear bands, intensifying the extent of strain localization and reducing the ductility of MGs. Since the population of soft spots and their spatial distribution can be tuned by such processes as rapid quenching, 14 high-pressure torsion, 55-57 laser processing, 58 mechanical cycling, 59 and magnetization, 60 we envision that our results may offer a totally new approach of using the control of short-range to medium-range order in monolithic MGs to radically improve the ductility, and hence the fracture toughness, of these alloys. As such, this may provide new possibilities for the industrial application of these high-strength metallic alloys as structural materials with unique combinations of mechanical properties.
MD simulations
The MD simulations of Cu64Zr36 MG with 32,000 atoms (as per the sample studied in ref. 7) were performed using the optimized embedded atom method (EAM) potential adopted from ref. 61 The melt was equilibrated for 10 ns at high temperature to assure equilibration and then quenched to room temperature (300 K) at cooling rates of 10⁹ K/s and 10¹¹ K/s using the Nose-Hoover thermostat and zero-pressure barostat. 62 Periodic boundary conditions were applied in all three directions. Isothermal stiffness coefficients (C) were evaluated at room temperature using a fluctuation method. [22][23][24] For a canonical (NVT) ensemble, C can be calculated as the sum of three contributions 22:

C = C^I + C^II + C^III    (2)

where the superscripts I, II and III represent the Born, kinetic, and fluctuation terms, respectively (see ref. 22 for more details). To reduce the statistical error in our simulated samples, the average shear modulus (G) was evaluated from the shear components of C; G can also be decomposed into three terms corresponding to those in Eq. (2).
The local elasticity can be evaluated from the atomic scale to the coarse-graining sub-cells, as introduced in ref. 22 Using this approach, the local shear modulus can be plotted with an adequate spatial resolution to reflect its heterogeneous distribution.

The STZ dynamics framework

The STZ dynamics modeling framework, originally developed by Homer et al., 63 combines the kMC with the FEM. The MG is simulated by an ensemble of potential STZs defined on a finite element mesh. The kMC algorithm is used to control the activation sequence of STZs and the time evolution of the system. FEM is used to solve the stress and strain redistribution after each STZ activation. The STZ dynamics model successfully captures the general deformation behavior of MGs and has been extended for various implementations. 31,36,[64][65][66][67] The key constitutive law, which describes the activation rate ṡ for an STZ to shear in one direction, can be given by:

ṡ = ν₀ exp[−(ΔF(µ) − τγ₀Ω₀/2)/(k_B T)]

where ν₀ is the attempt frequency (related to the Debye temperature), ΔF(µ) is the shear-modulus-dependent activation barrier of the STZ transition, T is the absolute temperature and k_B is the Boltzmann constant. The activation rate is biased by the local shear stress τ; Ω₀ is the volume of the STZ, and γ₀ is a characteristic shear strain increment upon STZ transition. The values of the parameters which were used in this model are given in Table 1. The average shear modulus µ is the value for the Cu64Zr36 MG obtained from ref. 26 ΔF takes a simple functional form of µ, where the coefficient is obtained by fitting the yield strength of Cu64Zr36 MG. 26 Ω₀ is in the range of values commonly reported in the literature, 68 and γ₀ is equal to the commonly accepted value. 1

For the STZ dynamics simulation, the configurations of the 2D simulated Cu64Zr36 MGs were set with a length and width of 150 and 50 nm, respectively. The long sides of the simulated MGs along the tensile direction were unconstrained, and the top and bottom surfaces of the simulation were allowed to move laterally relative to each other. The simulated tensile test was displacement controlled. This was achieved by constraining the bottom nodes, while the top nodes moved at a fixed velocity corresponding to the desired initial strain rate. The spatial correlation of local shear moduli was tuned for various correlation lengths from 0.5 to 9 nm. For each correlation length, five samples with different spatial distributions of local shear moduli were prepared via RMC, in order to address the statistics. The MGs were loaded in uniaxial tension at a strain rate of 0.01 s⁻¹ and a temperature of 300 K, using STZ dynamics simulation.
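A minimal Python sketch of the activation-rate law and a standard kMC event-selection step is given below. The stress-biased barrier of the form (ΔF − τγ₀Ω₀/2) is assumed from the Homer-Schuh framework cited above, and the numeric defaults (attempt frequency, STZ volume, characteristic strain) are order-of-magnitude placeholders rather than the values of Table 1, which is not reproduced here.

import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stz_activation_rate(tau, delta_f, nu0=1.0e13, gamma0=0.1,
                        omega0=1.6e-27, temperature=300.0):
    """Rate for an STZ to shear in one direction (assumed Homer-Schuh-type form):
    rate = nu0 * exp(-(delta_f - tau*gamma0*omega0/2) / (k_B*T)).
    tau: local shear stress (Pa); delta_f: activation barrier (J)."""
    bias = tau * gamma0 * omega0 / 2.0
    return nu0 * np.exp(-(delta_f - bias) / (K_B * temperature))

def kmc_select(rates, rng):
    """Standard residence-time kMC step: pick an event and advance the clock."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = -np.log(rng.random()) / total
    return event, dt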
Statistical analysis of the connectivity of soft sites in the elastically heterogeneous MG

The H-K algorithm is an efficient cluster labeling technique that has been widely used in the determination of percolation probabilities and cluster size distributions. The following steps were taken in our analysis, using an MG sample with a correlation length ξ of 5 nm as an example; this is illustrated in Fig. 4a-c. First, we set up a grid on our 2-D sample and discretize the sample into square cells (Fig. 4a). The grid was fine enough that each cell is about the size of a fundamental element yet smaller than the size of a potential STZ. Second, each of the square cells is determined to be soft or hard, based on the local shear modulus being respectively smaller or larger than the sample average value of 31 GPa. For this binary classification, all the soft sites are shown in Fig. 4b. Third, the H-K algorithm is used to identify soft clusters by scanning through the cells and labeling the clusters. A labeled soft cluster consists of all the connected soft cells whose neighbors share a common edge. An example of the largest soft cluster is shown in Fig. 4c. In our analysis, we only consider soft clusters with an area larger than 3.45 nm², which is slightly larger than an STZ area of ~2.24 nm².
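An equivalent version of this cluster analysis can be sketched with scipy's connected-component labeling, which with edge-sharing (4-neighbor) connectivity gives the same clusters as a Hoshen-Kopelman pass. The 31 GPa threshold and the 3.45 nm² minimum cluster area follow the text; the grid and cell size are placeholders.

import numpy as np
from scipy import ndimage

def soft_cluster_stats(modulus_grid, cell_area, mean_modulus=31.0, min_area=3.45):
    """Label connected soft clusters (cells sharing an edge) and return their areas."""
    soft = modulus_grid < mean_modulus            # binary soft/hard classification
    structure = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]])             # 4-connectivity: common edge only
    labels, n = ndimage.label(soft, structure=structure)
    areas = ndimage.sum_labels(soft, labels, index=np.arange(1, n + 1)) * cell_area
    kept = areas >= min_area                      # keep clusters at least ~an STZ in area
    return labels, areas, kept

def p_span(labels, areas, activated_mask):
    """Fraction of activated STZ cells lying inside the largest soft cluster."""
    largest_label = int(np.argmax(areas)) + 1
    in_largest = (labels == largest_label) & activated_mask
    return in_largest.sum() / max(int(activated_mask.sum()), 1)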
Data availability
The data sets generated and analyzed during the current study are available from the corresponding authors upon reasonable request. | 2018-04-11T03:40:11.062Z | 2018-04-06T00:00:00.000 | {
"year": 2018,
"sha1": "99b25980a35e7a5cdb28a8931f374b8225edade9",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41524-018-0077-8.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c2e77f46950951533b84e183f2e6a37d4d11cff2",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
251605727 | pes2o/s2orc | v3-fos-license | Efficacy of Spinetoram for the Control of Bean Weevil, Acanthoscelides obtectus (Say.) (Coleoptera: Chrysomelidae) on Different Surfaces
Simple Summary Contact toxicity of spinetoram on three different surfaces, concrete, ceramic floor tile and laminate flooring, against Acanthoscelides obtectus (Say.) was evaluated in laboratory bioassays. Our results provide data on the insecticidal effect of spinetoram for the control of A. obtectus on various surfaces; however, its efficacy varies according to the surface type, exposure time and concentration. In conclusion, our laboratory tests indicated that spinetoram at 0.025 and 0.05 mg active ingredient (AI)/cm2 achieved satisfactory control of A. obtectus adults by contact action at relatively short exposures on three surfaces commonly encountered in legume storage facilities and warehouses. Abstract In this study, the contact toxicity of spinetoram on three different surfaces, concrete, ceramic floor tile and laminate flooring, against Acanthoscelides obtectus (Say.) (Coleoptera: Chrysomelidae) was evaluated in laboratory bioassays. Different concentrations, ranging from 0.0025 to 0.05 mg AI/cm2, were evaluated against adults of A. obtectus. Adult mortality was measured after 1-, 3-, 5- and 7-day exposure. After 1-day exposure, the mortality was low on all surfaces, ranging from 0 to 27.2%. After 5- and 7-day exposure, spinetoram at concentrations of 0.01 mg/cm2 and above achieved 100% or close to 100% mortality on the concrete and laminate flooring surfaces, whereas low concentrations (0.0025, 0.005 and 0.0075 mg AI/cm2) resulted in significantly lower mortality levels, ranging from 1.6 to 30.8%, than high concentrations. In the case of the ceramic floor tile surface, spinetoram treatments at none of the tested concentrations resulted in 100% mortality. Significant differences were recorded among the surfaces, depending on concentrations and exposure intervals. After 3-, 5- and 7-day exposure, mortality levels on the ceramic floor tile surface were generally higher at low concentrations than those on the concrete and laminate flooring surfaces, whereas those on the concrete and laminate flooring surfaces were significantly higher at high concentrations than on the ceramic floor tile surface. These results indicate that spinetoram at 0.025 and 0.05 mg AI/cm2 achieves satisfactory control at relatively short exposures on common types of surfaces and thus can be used as an effective insecticide against A. obtectus.
Introduction
Common bean (Phaseolus vulgaris L.) is a major grain legume consumed worldwide for its edible seeds and pods, since it is an important source of protein, vitamins, carbohydrates, etc., and provides 15% of protein and fulfills 30% of caloric requirements [...]. Differences in spinetoram toxicity were noted only for T. confusum out of six stored-grain insect species, and adult mortality on concrete and galvanized steel was higher than on ceramic tile and plywood in the absence of food. However, to our knowledge, the efficacy of spinetoram as a contact insecticide on surfaces against A. obtectus has not been tested so far. In this study, we tested spinetoram in surface treatments against adults of A. obtectus under laboratory conditions.
Test Insect
The original population of A. obtectus was collected from dry common bean processing companies in Mersin province (Turkey) during 2018. The common bean (P. vulgaris) local cultivar "Şehirali-90" (11.8% moisture content) was used to feed the A. obtectus adults, which were put into glass jars (150 mm in diameter and 250 mm high) covered with a cloth to allow aeration. The initial population started with at least 300 individuals per kg of bean grains and was reared in a growth chamber under a 12:12 h light:dark photoperiod, at 27 ± 1 °C and 65 ± 5% r.h. In order to avoid possible insect infestations from the field, the bean grains were kept at −10 °C for 14 days prior to being used for the culture of A. obtectus. One- to two-day-old adults obtained from the stock insect culture and pesticide- and insect-free bean seeds were used in the biological tests.
Tested Insecticide
A formulation of spinetoram (Delegate 250 WG) with 250 g of active ingredient (AI, spinetoram) per liter was supplied by Dow Agro Sciences. Spinetoram was diluted in distilled water for various surface treatments. Delegate 250 WG has been registered against several insect pests in pear, apple, grape, cotton, maize, pistachio and cherry in many countries, as well as Turkey and Greece.
Surfaces
In the biological tests, three different surfaces (concrete, ceramic floor tile and laminate flooring) were used. For preparation of the concrete surface, a mortar consisting of a mixture of 200 g cement and 40 mL water was prepared and poured into the plastic boxes (100 × 100 × 60 mm³) to a thickness of 2 cm. Thereafter, the mortar in the plastic boxes was allowed to dry for 48 h at room temperature, and afterwards the surfaces were heated at 100 °C to cure in a drying oven (Memmert UN30, Memmert GmbH & Co., Schwabach, Germany) for 72 h before use in the trials. The ceramic floor tiles (TOPAZ, Yurtbay Ceramic Trade Inc., Eskişehir, Turkey) were purchased at a local hardware store and cleaned with an industrial floor cleaner (KLORAK FT, Klorak Chemicals and Cleaning Products Industry Trade Inc., İzmir, Turkey), according to the manufacturer's instructions. The tiles used in the biological tests are made of a mixture of clay, kaolin, quartz, feldspar and limestone. During processing, the 150 × 150 × 5.5 mm³ tiles are fired at a temperature above 900 °C and reheated at 1100 °C, according to Turkish Standards (TS 202). The tiles were cut to the 100 × 100 mm² size used in the trials with a tile cutting tool and placed into plastic boxes. The laminate flooring used in the biological tests conforms to EN 717 E-1 standards, has increased resistance to moisture (HDF), and comes in dimensions of 8 × 195 × 1200 mm³. Laminate flooring pieces of 100 × 100 mm² used in the trials were cut with a laminate flooring cutter and placed in plastic boxes. The gaps at the margins of the ceramic floor tile and laminate flooring in the plastic boxes were filled with a thin strip of hot silicone glue (Dremel® High Temperature Glue Stick, Dremel Company, Racine, WI, USA).
Treatment of Surfaces and Insect Exposure
Biological tests were carried out on concrete, ceramic floor tile and laminate flooring surfaces without the commodity, at 26 ± 1 °C and 65 ± 5% r.h. in complete darkness. Each replicate of the three surface types was treated with 1 mL of distilled water (control treatment) or an aqueous suspension (1 mL) of spinetoram to provide deposits of 0.0025, 0.005, 0.0075, 0.01, 0.015, 0.025 and 0.05 mg AI/cm². Surfaces were sprayed with distilled water or the spinetoram suspension using an artist's airbrush (HSENG Airbrush AS18 model, Ningbo Haosheng Pnömatik Machinery Co., Ningbo, China) connected to an air compressor providing 20 psi pressure. Separate airbrush guns were used for the distilled water and spinetoram treatments. After treatment, surfaces were air dried for 12 h at room temperature in a laminar flow hood. Twenty-five adults of A. obtectus, one- or two-day-old, were introduced onto each separate water- and spinetoram-treated surface. In the biological tests, concrete, ceramic floor tile and laminate flooring surfaces in plastic boxes with a surface area of 100 cm² were used for each replicate. Each trial was replicated 5 times and 5 controls were left for each treatment. Surfaces were allowed to dry for 24 h at 25 ± 1 °C and 50 ± 10% r.h. A thin coating of Fluon® (polytetrafluoroethylene, Sigma Aldrich product number 665800, Dorset, UK) was applied to the interior side-wall of each arena using a small paintbrush in order to prevent the insects escaping from the experimental arenas.
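For clarity, the relationship between the target surface deposit and the amount of active ingredient in the applied suspension can be worked out as below; this is simple arithmetic implied by the method (1 mL of suspension sprayed over a 100 cm² arena), not an additional protocol step.

def required_suspension_concentration(target_mg_per_cm2, surface_cm2=100.0, applied_ml=1.0):
    """mg of active ingredient needed per mL of suspension to reach the target deposit."""
    total_ai_mg = target_mg_per_cm2 * surface_cm2
    return total_ai_mg / applied_ml

# e.g. a 0.05 mg AI/cm2 deposit on a 100 cm2 arena needs 5 mg AI in the 1 mL applied
for dose in (0.0025, 0.005, 0.0075, 0.01, 0.015, 0.025, 0.05):
    print(dose, required_suspension_concentration(dose), "mg AI/mL")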
Mortality was assessed after 1, 3, 5 and 7 days of exposure. Live (mobile) and dead adults were counted under a stereoscope on the treated and untreated surfaces. Adults that were able to walk normally were classified as alive, whereas adults that were not mobile and showed no visible movement of the legs and antennae were classified as dead. At each count, dead adults were removed from the experimental arena with a small brush, and live ones were left on the spinetoram-treated surface for further counts.
Experimental Design and Data Analysis
The experiment was designed as a factorial based on a completely randomized design with five replications. The experimental factors comprised surface at three levels and spinetoram concentration at seven levels; distilled water treatment on the surfaces was considered the control. The response variable was the percentage of dead adults after each exposure period of the spinetoram treatment. All percentages (x) were corrected for mortality on the control surfaces [64] and arcsine square-root transformed (arcsin √x) [65] to normalize heteroscedastic treatment variances. Mortality data were analyzed using two-way ANOVA, and means were separated with Tukey's test at the 5% significance level using the general linear models (GLM) procedure of SAS [66].
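As a rough illustration of this analysis chain, the sketch below reproduces the control-mortality correction [64], the arcsine square-root transformation [65] and a two-way ANOVA on hypothetical replicate-level data. It uses Python with pandas/statsmodels in place of the SAS GLM procedure actually used in the study, so it approximates the workflow rather than reproducing the authors' code; all numbers are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)

# Hypothetical replicate-level data (5 replicates per surface x concentration cell),
# mimicking the factorial layout described above; all numbers are invented.
rows = []
for surface in ["concrete", "ceramic", "laminate"]:
    for conc in [0.0025, 0.01, 0.05]:                     # mg AI/cm2 (subset of levels)
        expected = 100 * conc / 0.05 * (0.6 if surface == "ceramic" else 0.9)
        for _ in range(5):
            rows.append({"surface": surface, "concentration": conc,
                         "mortality": float(np.clip(expected + rng.normal(0, 5), 0, 100)),
                         "control": 2.0})                  # mortality on water-treated surface
df = pd.DataFrame(rows)

# Abbott's correction for natural (control) mortality, as cited in the text [64].
df["corrected"] = 100.0 * (df["mortality"] - df["control"]) / (100.0 - df["control"])
df["corrected"] = df["corrected"].clip(0.0, 100.0)

# Arcsine square-root transformation to normalise heteroscedastic variances [65].
df["asin_sqrt"] = np.arcsin(np.sqrt(df["corrected"] / 100.0))

# Two-way ANOVA (surface, concentration and their interaction), analogous to SAS GLM.
model = ols("asin_sqrt ~ C(surface) * C(concentration)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A Tukey multiple-comparison step could be appended on the transformed values (for example with statsmodels' pairwise_tukeyhsd), but it is omitted here for brevity.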
Efficacy on Concrete Surface
The two-way ANOVA indicated that the main effects (surface and concentration) and their interaction were significant for each exposure interval (Table 1).
Mortality on the concrete surface was significantly affected by spinetoram concentration for each exposure interval (Table 2).
After 1-day exposure, mortality was very low, ranging from 0 to 27%, and there were no significant differences in mortality levels among the concentrations, except at 0.025 and 0.05 mg AI/cm². After 3-day exposure, the mortality level increased significantly with increasing concentration, and the high concentrations (0.025 and 0.05 mg AI/cm²) resulted in significantly higher mortality (91.2 and 92%, respectively) than the low concentrations (0.0025, 0.005, 0.0075 and 0.01 mg AI/cm²). However, none of the concentrations achieved complete mortality (100%). After 5- and 7-day exposure, spinetoram treatments at 0.01 mg AI/cm² and above resulted in 100% or near-100% mortality (Table 2), whereas the low concentrations (0.0025, 0.005 and 0.0075 mg AI/cm²) resulted in significantly lower mortality rates, ranging from 8 to 30.8%, than the high concentrations.
Table 2. Mean corrected mortality (% ± SE) of Acanthoscelides obtectus adults after 1-, 3-, 5- and 7-day exposure to concrete surface treated with spinetoram at eight different concentrations.
Efficacy on Laminate Flooring Surface
Mortality of A. obtectus adults on the laminate flooring surface was also significantly affected by spinetoram concentration for each exposure interval (Table 3).
Table 3. Mean corrected mortality (% ± SE) of Acanthoscelides obtectus adults after 1-, 3-, 5- and 7-day exposure to laminate flooring surface treated with spinetoram at eight different concentrations.
After 1-day exposure, mortality was very low, ranging from 0 to 24.8%; no significant differences were noted in mortality levels at the concentrations ranging from 0.0025 to 0.01 mg AI/cm². After 3-day exposure, the high concentrations (0.015, 0.025 and 0.05 mg AI/cm²) resulted in significantly higher mortality (80.8–84.8%) than the low concentrations (0.0025, 0.005, 0.0075 and 0.01 mg AI/cm²). However, none of the concentrations achieved complete mortality. After 5- and 7-day exposure, spinetoram treatments at 0.015 mg AI/cm² and above resulted in 100% or near-100% mortality (Table 3), whereas the low concentrations (0.0025, 0.005 and 0.0075 mg AI/cm²) resulted in significantly lower mortality rates, ranging from 1.6 to 13.6%, than the high concentrations.
Efficacy on Ceramic Floor Tile Surface
Mortality of A. obtectus adults on the ceramic floor tile surface was also significantly affected by spinetoram concentration for each exposure interval (Table 4).
Table 4. Mean corrected mortality (% ± SE) of Acanthoscelides obtectus adults after 1-, 3-, 5- and 7-day exposure to ceramic floor tile surface treated with spinetoram at eight different concentrations.
After 1-day exposure, the mortality of A. obtectus was extremely low, ranging from 0 to 16.1%, and there were no significant differences in mortality rates at the concentrations ranging from 0.0025 to 0.015 mg AI/cm². After 3-day exposure, none of the concentrations achieved complete mortality, and the highest mortality (53.1%) was recorded at 0.025 mg AI/cm², which was not significantly different from that at 0.015 and 0.05 mg AI/cm². After 5- and 7-day exposure, none of the tested spinetoram concentrations resulted in 100% mortality (Table 4). However, after 7 days of exposure, the high concentrations (0.015 mg AI/cm² and above) resulted in nearly complete mortality (92% or more), which was significantly higher than that at the low concentrations (0.0025 and 0.005 mg AI/cm²).
Comparison of Tested Surfaces
The mortality of A. obtectus adults varied significantly by surface type for each exposure interval, depending on spinetoram concentration (Figure 1). The mortality of A. obtectus after 1 day of exposure on all treated surfaces was very low, ranging from 0 to 27.2%. There were no differences in mortality levels among the surfaces at 0.0025, 0.005, 0.015 and 0.025 mg AI/cm², while mortality at 0.0075, 0.01 and 0.05 mg AI/cm² was significantly higher on the treated concrete surface than on the ceramic floor tile surface. After 3-day exposure, at the low concentrations (0.0025, 0.005 and 0.0075 mg AI/cm²), mortality levels on the ceramic floor tile surface were generally higher than those on the concrete and laminate flooring surfaces, whereas at the high concentrations (0.015, 0.025 and 0.05 mg AI/cm²), mortality on the concrete and laminate flooring surfaces was significantly higher than on the ceramic floor tile surface. Although spinetoram treatment on the concrete surface generally achieved higher mortality levels than on the laminate flooring surface (with the exception of 0.015 mg AI/cm²), the differences between these two surfaces were not significant at 0.0025, 0.005, 0.015 and 0.05 mg AI/cm². The highest mortality (92%) was noted on the concrete surface at 0.025 mg AI/cm² after 3 days of exposure. Similarly, after 5-day exposure, the low concentrations (0.0025, 0.005 and 0.0075 mg AI/cm²) resulted in significantly higher mortality levels on the ceramic floor tile surface than on the concrete and laminate flooring surfaces, whereas at the high concentrations (0.015, 0.025 and 0.05 mg AI/cm²) the concrete and laminate flooring surfaces had significantly higher mortality levels than the ceramic floor tile surface. There were no significant differences in mortality levels between the concrete and laminate flooring surfaces. Complete mortality was achieved only on the concrete and laminate flooring surfaces, at 0.025 and 0.05 mg AI/cm², respectively. After 7 days of exposure, the results at the low concentrations were similar to those for the 3- and 5-day exposures, whereas at the high concentrations (0.015, 0.025 and 0.05 mg AI/cm²) there were no significant differences in mortality levels among the tested surfaces. Complete mortality was achieved only on the concrete and laminate flooring surfaces, at 0.015 and 0.025 mg AI/cm², respectively, while complete mortality was not reached on the ceramic floor tile surface.
Discussion
Spinetoram has been reported to be effective against major stored-product insects of stored grain on different surfaces [62]. Our study is, however, the first to evaluate this insecticide against A. obtectus on different surfaces. The mortality of A. obtectus adults was significantly affected by spinetoram concentration, regardless of surface type and exposure interval. Our overall data for spinetoram show that sufficient control can be achieved with increasing concentration: after 5- and 7-day exposure, spinetoram treatments at 0.01 mg AI/cm² and above achieved 100% or near-100% mortality on the concrete and laminate flooring surfaces. In the case of the ceramic floor tile surface, none of the tested concentrations resulted in 100% mortality. Our results indicate that spinetoram concentrations of 0.015 and 0.025 mg AI/cm² on the concrete and laminate flooring surfaces, respectively, are sufficient to obtain complete mortality of A. obtectus after 7-day exposure, whereas even the highest concentration on the ceramic floor tile surface did not reach 100% mortality.
To our knowledge, this is the first study to examine the efficacy of spinetoram against A. obtectus on different surfaces. However, spinetoram has been evaluated against other stored-product insect species, with high adult mortality [56,62]. Vassilakos et al. [62], testing spinetoram at three high concentrations (0.025, 0.05 and 0.1 mg AI/cm²) on concrete, ceramic tile, galvanized steel and plywood surfaces, found that mortality levels of T. confusum did not significantly increase with increasing concentration. Similarly, Vassilakos and Athanassiou [57] reported that increasing the concentration from 0.025 to 0.1 mg AI/cm² on a concrete surface did not result in increased mortality of T. confusum, S. oryzae and O. surinamensis adults after 3- and 7-day exposure, except for T. confusum at the 7-day exposure. These findings are not in agreement with our results for A. obtectus adults on the concrete and ceramic tile surfaces, which may be due to differences in the tested insect species and formulation type. Previous studies reported that the insecticide formulation and the insect stage, as well as the insecticide exposure method, are critical in insecticide toxicity [67][68][69]. Low mortality was recorded after 1-day exposure on the tested surfaces, whereas complete mortality was achieved after 5-day exposure. These results indicate delayed mortality in A. obtectus. According to Vassilakos and Athanassiou [54] and Athanassiou et al. [70], spinosad and spinetoram are considered relatively slow-acting insecticides against stored-grain beetles. Vassilakos and Athanassiou [54], working with spinetoram-treated wheat, reported that after 72 h of exposure, immediate mortality levels of R. dominica and S. oryzae were low.
Significant differences were noted among surfaces in the mortality of A. obtectus adults for each exposure interval, depending on spinetoram concentration. After 3-, 5- and 7-day exposure, at the low concentrations (0.0025, 0.005 and 0.0075 mg AI/cm²), mortality levels on the ceramic floor tile surface were generally higher than those on the concrete and laminate flooring surfaces, whereas at the high concentrations (0.015, 0.025 and 0.05 mg AI/cm²), those on the concrete and laminate flooring surfaces were significantly higher than on the ceramic floor tile surface. After 7-day exposure, complete mortality was achieved only on the concrete and laminate flooring surfaces, at 0.015 and 0.025 mg AI/cm², respectively, whereas it was not reached on the ceramic floor tile surface at any of the spinetoram concentrations. In previous studies, Vassilakos et al. [62] noted that mortality of T. confusum adults on concrete and galvanized steel treated with spinetoram at two concentrations (0.025 and 0.05 mg AI/cm²) was higher than on ceramic tile and plywood in the absence of food, whereas there were no differences among surfaces for S. oryzae and R. dominica, due to the high efficacy of spinetoram on all surfaces at relatively short exposures. Vassilakos and Athanassiou [56] also reported that there were no significant differences in the residual efficacy of spinetoram (0.025 and 0.1 mg AI/cm²) on concrete and galvanized steel surfaces against S. oryzae and O. surinamensis adults, whereas higher mortality of T. confusum adults was noted on the concrete surface than on galvanized steel. Similarly, Toews et al. [41] evaluated spinosad at concentrations of 0.05 and 0.1 mg AI/cm² on different surfaces and found lower mortality levels of T. confusum and T. castaneum adults on treated unwaxed floor tile, steel and waxed floor tile surfaces than on the concrete surface. These findings are in agreement with our results for A. obtectus adults exposed to spinetoram on the concrete, ceramic floor tile and laminate flooring surfaces.
Several studies have indicated that the susceptibility of stored-product insects to spinetoram-treated surfaces varies with insect species [62,71]. Vassilakos et al. [62] found that, after 5-day exposure, complete mortality of S. oryzae was achieved on the concrete, ceramic tile and galvanized steel surfaces at 0.025, 0.05 and 0.1 mg AI/cm², respectively, whereas that of R. dominica was obtained only on the concrete surface at 0.025 mg AI/cm², and none of the spinetoram treatments on the tested surfaces gave complete mortality of T. confusum. Saglam et al. [59] reported that spinetoram at 0.05 and 0.1 mg AI/cm² on concrete completely controlled T. confusum adults after 14-day exposure. The present study indicated that a spinetoram concentration of 0.025 mg AI/cm² on the concrete surface is enough to obtain complete mortality of A. obtectus at 5-day exposure. These findings show that A. obtectus is apparently more susceptible to spinetoram-treated concrete than T. confusum, whereas the susceptibility of A. obtectus to spinetoram-treated surfaces is similar to that of S. oryzae and R. dominica.
Generally, the physical characteristics of the surfaces play an important role in the residual efficacy of insecticides. Insecticides applied to nonporous materials, such as steel and tile, are generally considered more effective than those applied to porous materials, such as concrete and wood [61,63,72]. Porous surfaces, such as concrete, show lower insecticide persistence than nonporous surfaces [63,68,[73][74][75][76]. Arthur [76] and Collins et al. [63] found that deltamethrin and organophosphates (OPs), respectively, were more effective on steel surfaces than on concrete. Several studies, however, have reported the opposite, namely that some insecticides are more effective on concrete than on steel and ceramic tile surfaces. For example, Arthur [77] showed that the pyrrole insecticide chlorfenapyr was more effective on concrete than on other surfaces, vinyl tile and plywood. Likewise, Vassilakos and Athanassiou [56] reported that spinetoram was more effective on concrete than on galvanized steel and ceramic tile. Similar results for A. obtectus adults were obtained in the present study. These findings suggest that different insecticide active ingredients show different efficacy and patterns of interaction with the surface on which they are applied, and that the distribution on different surfaces varies among insecticides [77].
In conclusion, our laboratory tests indicated that spinetoram at 0.025 and 0.05 mg AI/cm² achieved satisfactory control of A. obtectus adults by contact action at relatively short exposures on three surfaces commonly encountered in legume storage facilities and warehouses. Thus, our results provide data on the insecticidal effect of spinetoram for the control of A. obtectus on various surfaces; however, its efficacy varies according to surface type, exposure time and concentration. Even so, further research is required to evaluate the long-term protection provided by spinetoram against A. obtectus adults on different surfaces, the effect of dose and exposure time on the persistence of the residues, and the behavior of this active ingredient under "real-world" conditions. | 2022-08-17T15:19:08.219Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "cd2194d104496a4d1b250a77b5e7189e9ec1694f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4450/13/8/723/pdf?version=1660532371",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0664c2197601a937a5f28c725b408fdcb646de1c",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
237468387 | pes2o/s2orc | v3-fos-license | Anaplastic Lymphoma Kinase Positive Large B-Cell Lymphoma: Diagnostic Perils and Pitfalls, an Underrecognized Entity
Anaplastic lymphoma kinase (ALK) positive large B-cell lymphoma (ALK+ LBCL) is an extremely uncommon non-Hodgkin lymphoma (NHL) with a distinctive histomorphologic, immunophenotypic and cytogenetic profile. It is unlike the more common ALK-positive anaplastic large cell lymphoma, although the latter shares the ALK rearrangement pathognomonic for this entity. ALK+ LBCL is an underrecognized entity since it is rare and unfamiliar, and shares morphologic and immunohistochemical features with a variety of other neoplasms that can result in misdiagnosis. This lymphoma exhibits plasmacytoid morphology and negativity for classical immunomarkers of B- and T-cell lineages, and CD30; however, it expresses terminally differentiated B-cell/plasma cell markers such as CD38, CD138, and MUM-1. Precise identification of this entity is pivotal because of its aggressive behaviour, poor response to standard chemotherapy regimens and the potential for the development of novel targeted therapy. A high index of suspicion on morphology and an extensive immunohistochemistry armoury are required for the veracious detection of this lymphoma, especially at extranodal sites. The purpose of bringing forth this present case, an extranodal neoplasm with plasmacytoid morphology at vertebral location in a young adult, is to highlight the diagnostic perils and pitfalls, the clues to unravel the quandaries and thus, the incredible utility of histopathological examination and immunohistochemical analysis in attaining the unerring diagnosis.
Introduction
Anaplastic lymphoma kinase positive large B-cell lymphoma (ALK+ LBCL) is an unusual variant of diffuse large B-cell lymphoma (DLBCL) accompanied by exclusive Anaplastic lymphoma kinase (ALK) rearrangements. A tyrosine kinase receptor belonging to the superfamily of insulin receptors, ALK contributes significantly to the development of the brain, and regulates the proliferation of nerve cells [1]. ALK+LBCL displays an immunoblastic or more commonly a plasmablastic histomorphology and an immunoprofile of CD138, ALK, epithelial membrane antigen (EMA), and immunoglobulin (Ig) A expression. Clinically, this lymphoma behaves more aggressively than typical DLBCL with an unfavorable response to conventional chemotherapy [2]. The diagnosis of ALK+LBCL can be taxing due to fewer number of cases having been documented, a dearth of cognizance of this disease, and considerable overlap of morphology and immunoprofile with various haematopoietic and non-haematopoietic tumors. Nonetheless, increased knowledge of the salient features of this neoplasm and thus, appreciation of its existence are crucial for pathologists as well as clinicians, especially keeping in mind the progress in evolving therapeutic remedies. Herein, the author reports a case of this rare disease, and attempts to emphasise the diagnostic pitfalls, and also the pointers to decode the consequential dilemmas.
Case Presentation
An 18-year-old male presented with dull aching abdominal pain for four months and weakness of both lower limbs for three months, accompanied by fever and weight loss. Magnetic resonance imaging (MRI) of the thoraco-lumbar spine, performed for the complaints of abdominal pain and limb weakness, revealed a T2/STIR hyperintense, homogeneously enhancing mantle of soft tissue involving the dorsolumbar pre- and para-vertebral regions with epidural and subpleural extension and rib destruction, with the epidural component at the D8-D10 levels causing significant cord compression. Bulky bilateral axillary lymphadenopathy measuring 8.6 × 6.2 × 12 cm and bulky prevascular lymphadenopathy measuring 2.9 × 7.7 × 9.4 cm, along with multiple nodes in the mediastinum and bilateral supraclavicular regions, were also discovered. Biopsy of the epidural space-occupying lesion at the D8-D10 level had been performed, and the preliminary histopathology report from a private diagnostic laboratory was small round cell tumor, probably non-Hodgkin lymphoma (NHL). The patient was then referred to our tertiary cancer centre, and the outside reported tissue paraffin blocks and slides were reviewed. Appraisal of the histopathology slides revealed fibrocollagenous tissue infiltrated by sheets of mostly medium-sized atypical cells separated by fine fibrovascular septa imparting an alveolar pattern. The cells were predominantly plasmacytoid, possessing eccentric round to oval nuclei and moderate to abundant cytoplasm. Marked pleomorphism was noted in places, with some cells displaying large irregular hyperchromatic nuclei and scant cytoplasm. Focal necrosis was evident (Figure 1). The differential diagnosis initially considered on histopathological examination (HPE) was broad, encompassing plasmacytoma, small round cell tumors such as rhabdomyosarcoma and Ewing sarcoma, poorly differentiated carcinoma (metastasis), neuroendocrine tumor, melanoma and a host of NHLs, including anaplastic large cell lymphoma (ALCL), DLBCL, plasmablastic lymphoma (PBL) and primary effusion lymphoma (PEL). An elaborate panel of immunohistochemical (IHC) antibodies was employed to attain the diagnosis, which divulged the atypical plasmacytoid cells to be positive for CD138 (diffuse and strong), Mum1 and CD45, and negative for EMA, desmin, CD30, CD20, CD3, CD43, CD56, FLI1, S100, HMB45 and CD79a (Table 1).
The sample was not subjected to cytogenetic/molecular analysis. Hinging on the histomorphology and the immunoprofile, a final diagnosis of ALK+ LBCL was proffered. Bone marrow evaluation did not reveal involvement. Currently, the patient is under a treatment protocol, which includes three cycles of CHOP chemotherapy to be followed by radiation therapy.
Discussion
ALK+ LBCL was originally described by Delsol et al. in 1997 [3]; in an otherwise large series of classical T-/null cell ALK-positive ALCLs, this lymphoma was recognized due to its characteristic lack of CD30 expression. It evinced very aggressive behavior, a high relapse rate, and minimal response to standard regimens. This NHL has now been registered as a distinct entity in the 2017 revised edition of the WHO Classification of Haematopoietic and Lymphoid Tissues. Accounting for <1% of all DLBCLs, it seems to be exceedingly rare and predominantly affects young males, with a male-to-female ratio of 5:1, but does not spare any age group [4]. Although it is primarily a disease of the lymph nodes [3,4,5] or the mediastinum [6][7][8], extranodal involvement of sites like the tongue [6], nasopharynx [9], stomach [10], liver [4], spleen [4], bone [9], soft tissue [11], and skin [4,12] has been chronicled as well. Generalised lymphadenopathy is the usual presentation, and approximately 60% of patients present in advanced stage III/IV. Bone marrow infiltration is observed in approximately one-fourth of the cases [4,12].
Compilation of the histomorphologic features and the immunoprofile, assisted by cytogenetic data when accessible forms the basis of diagnosing ALK+LBCL. In lymph nodes, diffuse or partial effacement of the nodal architecture by the atypical cells can be detected, although initial published reports documented a mainly sinusoidal infiltration. The atypical cells are monomorphic, intermediate to large in size, and flaunt an immunoblastic or plasmablastic morphology with round nuclei, coarse chromatin, single, prominent, central nucleolus and moderate amount of amphophilic cytoplasm. Infrequently, the neoplasm may assume an epithelioid appearance by virtue of a sheeting pattern (more commonly noticed at extranodal sites), and eosinophilic appearance of the cytoplasm. Necrosis, brisk mitosis and a starry-sky appearance may be spotted [4,13,14].
In view of the morphologic overlap with a variety of neoplasms, immunophenotypic analysis is crucial for the diagnosis. The atypical cells are habitually negative for B cell markers -CD20 and CD79a, but parade unflinching positivity for CD138, Mum1 and CD38, compatible with a postgerminal center phenotype, thus simulating a plasma cell neoplasm. However, the most distinctive IHC profile is the positivity for the ALK protein, staining 100% of tumors, with most demonstrating a restricted granular cytoplasmic staining pattern, while the rest display cytoplasmic, nuclear and nucleolar ALK staining [4,15]. CD45, EMA, and MUM-1 are also frequently positive, although CD45 can be weak or even negative [3,4,5,12]. CD4 staining is present in 40-75% of cases [4,13]. IgA with monotypic light chain restriction is consistently observed.
A medley of neoplasms enter the differential diagnoses, hematopoietic and non-hematopoietic as well, especially at extranodal sites. ALK+ LBCL occasionally forms nests and can display round cell morphology and thus, mimic poorly differentiated carcinoma, rhabdomyosarcoma, Ewing sarcoma, neuroendocrine tumors and melanoma; most of these can felicitously be distinguished on the basis of IHC findings, and usually do not cause much headache to the pathologists. However, at times IHC can be fallacious: EMA and occasionally focal cytokeratin are expressed in ALK+ LBCL while the customary B-cell and T-cell markers are not embodied. To boot, CD138 is not specific to the hilt; apart from labelling ALK+ LBCL and plasma cell neoplasm, it also stains the tumor cells in most carcinomas. Hence, in such scenarios, additional IHC with different cytokeratins, other lineage markers, Mum1 and ALK protein ought to resolve the conundrum.
The more arduous quandaries relate to the hematopoietic neoplasms, the most formidable being ALK+ ALCL and plasmacytoma; B cell NHLs disclosing an immunoblastic/plasmablastic morphology, particularly PBL, PEL, HHV8 positive large B cell lymphoma and DLBCL, NOS round off the list. Clinical presentation, sinusoidal template of involvement, and expression of markers like ALK, CD45, EMA (83% in ALK+ ALCL and 93% in ALK+ LBCL), and CD4 (40-70% in ALK+ ALCL and 40-75% in ALK+ LBCL) [4,16,17] are noteworthy overlapping attributes between ALK+ LBCL and ALK+ ALCL. The core distinguishing point is CD30 -diffuse positivity mandated for a diagnosis of ALCL, whereas CD30 is mostly negative in ALK+ LBCL; even in the cases where CD30 exhibits positivity, the staining is focal and weak. ALK+ LBCL is not a T cell neoplasm unlike ALK+ ALCL and hence, lacks rearrangement of clonal T-cell receptor (TCR-gamma or -beta) gene, and instead divulges clonal immunoglobulin (Ig) gene rearrangement. Another notable diagnostic pitfall relates to plasmacytoma including variants such as anaplastic plasmacytoma. There is significant morphologic and immunophenotypic overlap including expression of markers of plasmacytic differentiation. Plasmacytoid appearance of a tumor at extranodal sites, such as vertebra in our case can steer the pathologist towards deducing plasmacytoma, unless one is aware of this rare entity, i.e., ALK+ LBCL and has a high index of suspicion with all the features not conforming to a diagnosis of plasma cell neoplasm. This dilemma can be resolved with the aid of features such as the presence of bone involvement, myeloma component, and lack of ALK and CD4 immunostaining in plasmacytoma. Conversely, the expression of plasma cell markers in ALK+ LBCL accompanied by lack of quintessential B-cell markers and the cytology can mislead to a diagnosis of PBL and/or PEL. PBL and PEL affect HIV-positive patients more often than not and are associated with EBV and HHV8, respectively. On the other hand, immunosuppression, EBV or HHV8 infection have no relation with ALK+ LBCL, and evidence of ALK protein expression excludes PBL, PEL, and HHV8 positive LBCL. Other variants of DLBCL, including DLBCL, NOS can be relatively comfortably segregated from ALK+LBCL due to strong expression of typical B-cell markers such as CD20, CD79a and PAX5 and the truancy of staining for ALK and CD138. Lastly, conspicuous CD4 positivity as observed in the present case should not transpire as a diagnostic peril and lead the pathologist astray towards histiocytic neoplasm and/or myeloid sarcoma. Thus, the plasmablastic morphology/ immunophenotype (CD138, CD38 and Mum1), ALK expression, negativity for CD30 and the non-existence of viral etiologies (EBV, HIV and HHV8) abet the diagnosis of ALK+ LBCL, as illustrated in Table 2.
The natural course of ALK+ LBCL is dismal, which is not disparate from other large B-cell lymphomas with plasmablastic differentiation. Promising results have not been reaped with standard lymphoma therapeutics, including CHOP or CHOP-derived chemotherapy with/without radiation and stem cell transplant (autologous as well as allogeneic). This lymphoma being primarily CD20 negative, rituximab is not expected to churn out quantifiable benefits [2,4,8,13,15,16]. Of late, a new class of drugs, namely ALK inhibitors, has bagged the spotlight. Patients suffering from relapsed and resistant ALK+ ALCL have shown favorable responses when treated with crizotinib, a small-molecule dual inhibitor of the c-Met and ALK receptor tyrosine kinases. This drug could thus prove to be a tenable targeted therapy in ALK+ LBCL as well [2,13,15,16,18-20]. Nevertheless, the strongest factor associated with survival in ALK+ LBCL from the literature appears to be the clinical stage at presentation; appreciably longer survival has been observed in patients presenting with localised disease (stage I-II) [4,8,12].
Conclusions
To conclude, ALK+LBCL is a rare aggressive B-cell NHL with discrete morphologic, immunophenotypic and cytogenetic/molecular findings. In a tumor of plasmacytoid/immunoblastic morphology presenting at nodal/extranodal sites exhibiting immunopositivity for ALK, plasma cell markers and sometimes CD4 and EMA and occasionally focal and weak cytokeratin, and negative staining for both B and T cell markers, CD30 and even CD45 at times, the possibility of this neoplasm should be considered. Owing to the extreme rarity of this entity and consequently lack of a high index of suspicion, morphologic overlap with other hematopoietic and non-hematopoietic neoplasms, unusual immunoprofile and seldom regular employment of ALK IHC, the diagnosis remains challenging and may be missed. Accurate recognition of this entity is of paramount importance, as the promise of a targeted therapy bestows a fascinating recourse for patients with this malady. | 2021-09-11T05:26:01.404Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "c3567c21a3d84562d209490ee8cea5cd4f183d4e",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/65095-anaplastic-lymphoma-kinase-positive-large-b-cell-lymphoma-diagnostic-perils-and-pitfalls-an-underrecognized-entity.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3567c21a3d84562d209490ee8cea5cd4f183d4e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201062909 | pes2o/s2orc | v3-fos-license | A Comparison of the Oligosaccharide Structures of Antithrombin Derived from Plasma and Recombinant Using POTELLIGENT ® Technology
Human antithrombin (AT) has two isoforms, of which the predominant α-form is glycosylated on all four possible glycosylation sites and the less abundant β-isoform lacks the oligosaccharide on Asn135. The main oligosaccharide structure of human AT consists of biantennary complex-type oligosaccharides lacking a core fucose. Generally, Chinese hamster ovary (CHO) cells produce recombinant human AT (rhAT) with core-fucosylated oligosaccharides. However, rhAT lacking core-fucose oligosaccharides can be produced by POTELLIGENT® technology, which uses FUT8 knockout CHO cells in production. The rhAT has more variable glycan structures, such as tetra-antennary complex type, high-mannose type, and mannose 6-phosphate species as minor components compared to plasma-derived human AT (phAT). In addition, the site-specific glycan profile was different between the two ATs. We evaluated the effect of these properties on efficacy and safety based on a comparison of rhAT made by that technology with phAT in terms of their respective oligosaccharide structures, site-specific oligosaccharide profiles, and the ratio of α- and β-forms. Although some structural differences were found between the rhAT and phAT, we concluded that these differences have no significant effect on the efficacy and safety of rhAT.
Introduction
α-AT by column chromatography purification.
To compare the efficacy and safety of rhAT to phAT, an open-label, randomized, phase 3 study was conducted. 23 This study showed that the efficacy and safety were similar for 36 IU/kg/day rhAT and 30 IU/kg/day phAT from the clinical point of view.
In this report, we present a comparison between rhAT and phAT in terms of the oligosaccharide structure, site-specific oligosaccharide profile, and the ratio of α- and β-AT, and consider the effect on efficacy and safety derived from the structural differences between rhAT and phAT.
Release and fluorescence derivatization of N-linked oligosaccharides from antithrombin
N-linked oligosaccharides were released enzymatically with PNGase F; 0.6 mg protein was mixed with 6 × 10³ U PNGase F and the mixture was incubated at 37 °C overnight. After digestion, ice-cold ethanol was added and the mixture was centrifuged to remove proteins. The supernatant was dried using a centrifugal vacuum evaporator. Oligosaccharides were labeled according to a previously reported procedure 24 with slight modification. A labeling reagent (0.7 M 2-AB, 1.6 M NaBH3CN in 70:30 (v/v) DMSO/acetic acid) was added to the released and purified N-glycan samples. The reaction mixtures were incubated at 37 °C overnight. The excess labeling reagent was removed by an HLB column (10 mg sorbent per cartridge, 30 μm particle size) (Waters). The cartridge was first washed with 1 mL of 5% acetonitrile. The reaction mixture was then loaded on the column and washed twice with 1 mL of 5% acetonitrile. After washing, the 2AB-labeled N-glycans were eluted with 1 mL of 20% acetonitrile. The eluate from the column was dried with a centrifugal vacuum evaporator, and the residue was dissolved in water for HPLC analysis.
HPLC analysis of 2AB-labeled oligosaccharides and fraction
The HPLC system was an Alliance2695 system (Waters). The 2AB-labeled N-glycans were separated on an anion-exchange column (CarboPac PA-1, 4 × 250 mm, Thermo Scientific). The mobile phases were water (A), 0.5 M sodium acetate (B) and 0.5 M sodium hydroxide (C). The column was equilibrated with 10% C. After holding for 15 min in 10% C, the 2AB-labeled N-glycans were eluted using a gradient of 0.36% B/min for 97 min and 2.5% B/min under 10% C. The flow rate was 0.5 mL/min, and fluorescence detection was performed with excitation at 330 nm and emission at 420 nm. Each oligosaccharide peak was fractioned and desalted using a reverse-phase solid phase extraction column (Sigma-Aldrich, St. Louis, MO).
Samples were dissolved with water and then mixed with a DHB matrix solution. One microliter of the mixture was spotted on a stainless-steel sample target board.
The sialylated oligosaccharide samples were measured in the linear negative ion mode. The neutral oligosaccharides samples were measured in the reflector positive ion mode.
Site-specific N-glycosylation analysis
For analysis of the site-specific oligosaccharide profile, Asp-N peptide mapping was employed. Sample preparation was performed following the manufacturer's protocol. Antithrombin was denatured by adding 0.13 M Tris, 7 M guanidine HCl, pH 8.0. For reduction, 40 mg/mL DTT was added, followed by incubation at 37 °C for 1 h. Then, for alkylation, 100 mg/mL IAM was added, followed by incubation at 37 °C for 1 h. As a next step, the buffer solution was exchanged to a digestion buffer (0.1 M Tris, pH 8.0) using a NAP-5 column (GE Healthcare Life Sciences, Piscataway, NJ). The eluate from NAP-5 filtration was mixed with Asp-N solution at a 1:250 enzyme:substrate ratio and incubated at 37 °C for 16 h.
The Asp-N digested peptide mixture was separated by RP-HPLC (Agilent 1200) on a C18 column (YMC-Pack Pro C18, 4.6 × 250 mm) and fractioned into four glycopeptide peaks. The mobile phases consisted of 0.1% TFA in water (mobile phase A) and 0.1% TFA in acetonitrile (mobile phase B). After holding for 20 min in 10% B, the peptides were eluted using a gradient of 0.41% B/min for 140 min with a flow rate of 0.5 mL/min. UV absorption was measured at a wavelength of 214 nm.
Analysis of the β-AT contents using capillary electrophoresis-SDS (CE-SDS)
CE-SDS analysis was performed using a Proteomelab PA800 system with a LIF detector, using a 488 nm argon laser as the light source and an emission band-pass filter of 560 nm (Beckman Coulter, Inc., CA), with a fused silica capillary (Beckman Coulter, Inc.) of 50 μm i.d., a total length of 40 cm and an effective length of 30 cm. Neuraminidase and galactosidase were added to an antithrombin sample and incubated at 37 °C overnight. The sample solution was equilibrated with a buffer by passing through a NAP-5 column previously equilibrated with 0.1 M sodium bicarbonate, pH 8.3. A 10-μL portion of 5-TAMRA.SE dissolved in DMSO at a concentration of 0.3 mM was added to 190 μL of the sample solution. Then, the solution was incubated at 30 °C for 5 min. Excess dye was removed by NAP-5 and the buffer exchanged into 85 mM citrate-phosphate buffer (pH 6.5). The labeled sample was mixed with an SDS solution containing DTT and incubated at 90 °C for 5 min.
The SDS gel buffer was loaded into the capillary at 70 psi for 15 min from the outlet side. A sample solution was injected electrokinetically at 5 kV for 20 s. Separation was conducted in the negative polarity mode at 15 kV for 70 min.
Results and Discussion
Oligosaccharide profile of rhAT and comparison with phAT
2-AB labeled N-linked oligosaccharides from rhAT were analyzed by anion-exchange HPLC and the structure of each peak was identified by MALDI-TOF/MS. Figure 1A shows a chromatogram of the oligosaccharide profile of the rhAT and Table 1A gives the assigned carbohydrate structure of each peak. Most of the glycans in the rhAT are terminal-sialylated and non-core-fucosylated complex type glycans, and the most abundant structure in rhAT is the two-sialylated, bi-antennary, and non-fucosylated complex type N-glycan. As minor structures, high-mannose type, sialylated tri-antennary, and tetra-antennary N-glycans were also detected. In addition, some peaks of mannose 6-phosphate (M6P) or sulfate glycan were detected. The presence of M6P glycans was confirmed by the shift of the retention time of their peaks in HPLC after phosphatase digestion (data not shown), and the presence of sulfate glycans was confirmed by MALDI-TOF/MS analysis. M6P is known to allow the transport of proteins to the lysosome by interacting with the M6P receptor 25,26 and sulfate glycan is reported to have important functions concerning inflammation. 27,28 However, it is unclear why the rhAT molecules bearing these oligosaccharides were expressed. There is no core-fucosylated glycan species because of the POTELLIGENT® technology. In addition, there is no trace of O-linked saccharide in LC/MS analysis of the rhAT peptide mapping (data not shown). Figure 1B shows a chromatogram of the oligosaccharide profile of phAT and Table 1B gives the carbohydrate structure of each peak. Most of the glycans in the phAT are terminal-sialylated and non-core-fucosylated complex type glycans, and the most abundant structure in phAT is the two-sialylated, bi-antennary, and non-fucosylated complex type N-glycan, the same as in the rhAT. The structure of two peaks (No. g and i) was not identified, though it was confirmed that the two glycans were sialylated and afucosylated by the shift of the retention time of their peaks in HPLC after neuraminidase and fucosidase digestion. Though the main peaks of phAT and rhAT have the same two-sialylated, bi-antennary, and non-fucosylated complex-type structure, the elution time is slightly different. This difference is assumed to be due to the difference in the sialic acid binding position (terminal sialic acids of phAT are α2-6 linked, 29 while those of rhAT are presumed to be α2-3 linked from the literature 30,31 ). As minor structures, sialylated tri-antennary type, core-fucosylated type, and Lewis x or Lewis a antigen structures were detected. However, the M6P and sulfate glycan species detected in the rhAT were not detected in the phAT. The main oligosaccharides of rhAT and phAT were the same (two-sialylated, bi-antennary, and non-fucosylated complex type), but their minor oligosaccharides were not similar, and in particular, rhAT has more variable oligosaccharide structures. RhAT has high-mannose type, sialylated tetra-antennary type, M6P type, and sulfated oligosaccharides, which are not contained in phAT. It is reported that there are no adverse events caused by high-mannose type 32 and M6P type oligosaccharides in enzyme replacement therapy for lysosomal storage disease; for example, agalsidase beta contains substantial amounts of high-mannose and M6P oligosaccharides. 33 Sialylated tetra-antennary type oligosaccharide-linked glycoproteins and sulfated glycan-related molecules are known to be abundant in human plasma.
[34][35][36] Overall, the impact of those glycan species detected only in rhAT on clinical safety is assumed to be low.
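A later paragraph quotes per-molecule sialic acid contents of 6.7 mol/mol for rhAT and 7.9 mol/mol for phAT, derived from oligosaccharide profiles such as these. That figure is, in effect, a profile-weighted average of sialic acids per glycan multiplied by the number of occupied glycosylation sites; the toy Python sketch below illustrates only that arithmetic. The peak-area fractions used here are invented and are not the measured values of this study.

```python
# Toy illustration only: per-molecule sialic acid content derived from an
# oligosaccharide profile. Each glycan species contributes its relative
# peak-area fraction times its number of sialic acids; the sum is then
# multiplied by the four N-glycosylation sites of alpha-AT. The fractions
# below are invented for illustration.

glycan_profile = {
    # sialic acids per glycan : relative peak-area fraction
    0: 0.05,
    1: 0.15,
    2: 0.70,
    3: 0.07,
    4: 0.03,
}
N_GLYCOSYLATION_SITES = 4   # alpha-AT carries N-glycans on all four sites

sa_per_glycan = sum(n_sa * frac for n_sa, frac in glycan_profile.items())
sa_per_molecule = sa_per_glycan * N_GLYCOSYLATION_SITES
print(f"sialic acid per glycan:   {sa_per_glycan:.2f} mol/mol")
print(f"sialic acid per molecule: {sa_per_molecule:.2f} mol/mol")
```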
Site-specific oligosaccharide profile of rhAT
Sample preparation involved the digestion of AT with Asp-N and fractionation of the resulting peptides using RP-HPLC. The chromatogram of the Asp-N peptide map is shown in Fig. 2, with the glycopeptide peaks identified and separated from each other. Oligosaccharide profiles of these collected fractions were acquired. Table 2 shows the ratio of each peak area sorted by the sialic acid number, M6P, and sulfated glycan residue in each glycopeptide.
The Asn135 residue of the rhAT has a higher level of M6P type oligosaccharides than the other glycan-linked sites. From the percentage of the N0 peak, the Asn155 residue has a higher level of immaturely processed oligosaccharides than the others. The reason why the N-glycan at Asn155 is much less completely processed is unclear, but the Asn155 glycans of the rhAT were immature, as was also the case for rhAT produced in BHK cells. 20 The glycan at the Asn192 residue is the most sialylated among the four glycosylated sites, as indicated by the highest SA/N value in Table 2.
The site-specific glycan profiles of rhAT and phAT have little in common, except that at every glycosylated Asn the main glycan structure is bi-sialylated, bi-antennary, and non-fucosylated. The Asn showing the most variability in phAT is the Asn155 residue, whereas Asn135 has the most variable glycan structures in rhAT. The cause of the difference between the two molecules is unclear.
There are some reports that glycosylation differences at Asn155 affect the overall affinity of AT. 20,40 The negative charge of sialylated species in AT glycans may reduce the binding affinity to the negatively charged heparin molecule. As regards the pharmacokinetics, a multiple-dose study showed that 72 IU/kg rhAT is bioequivalent to 60 IU/kg phAT in healthy volunteers (data on file; Kyowa Kirin Co., Ltd., Tokyo, Japan). This is mainly due to the fact that the clearance of rhAT is a little faster than that of phAT. The sialic acid content per AT molecule of rhAT and phAT, calculated from the oligosaccharide profiles, was 6.7 and 7.9 mol/mol, respectively. Since it is reported that the number of sialic acid residues at the nonreducing end of N-linked oligosaccharides affects the half-life in plasma, 41 the difference in the clearance of AT might be due to the difference in their sialic acid content. In addition, the mannose residues at the nonreducing end of N-linked glycans, borne only by rhAT, may affect the plasma circulation time through mannose receptor-mediated uptake in the liver and macrophages. [42][43][44] Although there is a difference in clearance between rhAT and phAT, it is confirmed that the difference in the site-specific oligosaccharide profile has no effect on their efficacy, because 36 IU/kg/day rhAT and 30 IU/kg/day phAT showed the same efficacy in a phase 3 study. 23
β-AT content of rhAT and phAT
The β-AT lacks the N-linked oligosaccharide on the Asn135 residue. The β-AT is known to bind heparin with higher affinity than α-AT, and the β-AT bearing the complex type N-linked glycan is reported to show a much shorter half-life than the α-AT. 13,20,45 If the contents of α-AT and β-AT in rhAT differ from those in phAT, rhAT will not show equivalency with phAT blood products in efficacy and pharmacokinetics (PK). The α-AT and β-AT were separated using CE-SDS based on the difference in their molecular weight. In addition, to reduce the heterogeneity of oligosaccharides on AT, which causes broad peak shapes in CE-SDS analysis, neuraminidase and galactosidase digestion was employed for sample preparation. Although the non-reducing ends of the oligosaccharides linked to rhAT are sialic acid or galactose residues, except for the high-mannose species, digestion with these glycosidases converts the non-reducing ends of the oligosaccharides to N-acetylglucosamine residues. The electropherogram of the rhAT and phAT is shown in Fig. 3. The β-AT peak appeared in front of the main peak, which is the α-AT peak. These peaks were assigned from an experiment using rhAT with different numbers of linked N-glycans prepared by changing the concentration of PNGase F (data not shown). In addition, the hydrolyzed species appeared at around 20 min. The hydrolyzed species are assumed to be a cleaved form of AT, i.e., the C-terminal fragment hydrolyzed around Arg393. 46 The content of β-AT, calculated from the peak area in CE-SDS, was 0.9% for rhAT (lot-to-lot variation: 0.6 to 4.0%) and 2.0% for phAT. RhAT contains a high concentration of α-AT, at the same level as phAT, owing to column chromatography purification. Since there is a difference in the efficacy and PK between α-AT and β-AT, it is important for a manufacturer to reconcile the content of α-AT in rhAT with that in phAT.
It is reported that the binding strength of β-AT to heparin is higher than that of α-AT and the biological activity of the β-AT is higher than that of the α-AT. But, since the content of β-AT is very low in both rhAT and phAT, it is thought that there is no difference in biological activity between them. Currently, some AT drugs are commercially available in Japan. The difference between the contents of α-AT and β-AT is thought to be small among them, and there is no report that the difference shows any effect on their clinical effectiveness.
Conclusions
Several phAT drugs are on the market for the treatment of DIC. Though phAT drugs are manufactured with safety measures to prevent infectious diseases, there still may be unknown infectious agents. The rhAT drug product can provide clinical benefits compared with phAT from the viewpoint of the risk of infection. Antithrombin gamma, which is an rhAT drug produced with POTELLIGENT® technology and chromatographic purification, has glycans lacking core fucose, like phAT, and the main oligosaccharide structure of both rhAT and phAT is the two-sialylated, bi-antennary, and non-fucosylated complex type. We have compared rhAT and phAT in terms of their oligosaccharide structures, site-specific oligosaccharide profiles, and the ratio of α- and β-AT. Although some structural differences were found between rhAT and phAT regarding these properties, we concluded that these differences have no significant effect on the efficacy and safety of rhAT. | 2019-08-20T13:03:18.776Z | 2019-08-16T00:00:00.000 | {
"year": 2019,
"sha1": "af6e6e9211cda13a0dca7fadc7f9674a5c481679",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/analsci/35/12/35_19P181/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d290ae3ce6b7c555663f458d94e54d0f591f2e30",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
230599596 | pes2o/s2orc | v3-fos-license | CRITIC Method and Grey System Theory in the Study of Global Electric Cars
Science and technology development is crucial for the elimination of air pollutants. The electric car industry, for example, contributes to minimizing emissions and climate change. The purpose of this study is to present an overview of electric car sales and their market share in 14 countries, from past to future, by integrating important criteria through the inter-criteria correlation (CRITIC) method in multi-criteria decision-making (MCDM), the grey model first-order one variable (GM(1,1)), and the grey relational analysis (GRA) method in grey system theory. First, the GM(1,1) estimates future terms based on historical time-series. Second, the objective weights of each variable, in every year, are determined by the CRITIC method. Finally, the research uses the GRA method for computing grades and ranks. The empirical result then reveals the performance and rank of electric car sales during the time period of 2016–2023. The analysis results thus reveal the market share picture and direction of growth in the electric car industry.
Introduction
Climate change is taking a toll on our environment, and scientists around the world are studying a variety of smart devices and machines to reduce air and environmental pollution. Electric vehicles, for example, can reduce CO₂ emissions and air pollutants [1], as investigations of electric vehicle charging optimization [2] and the plug-in hybrid electric vehicle [3] have illustrated the minimization of greenhouse gas emissions. As a result of these benefits, the market share of electric vehicles is expanding in many countries: the market share of battery-electric vehicles extended from 50% in 2012 to 68% in 2018; the market share of plug-in hybrid electric vehicle sales dominated in Finland (76%) and Sweden (61%) in 2019; and electric car sales in Europe saw robust development at 50% in 2019, a growth rate higher than in the previous term (32%) [4]. Observation of electric car sales and the related market share is conducted in this study by applying the criteria importance through inter-criteria correlation (CRITIC) method in multi-criteria decision-making (MCDM), the grey model first-order one variable (GM(1,1)), and the grey relational analysis (GRA) method in a grey theory system.
Electric vehicles run on electricity and are propelled by one or more electric motors. These vehicles utilize power from a traction battery pack and have a variety of advantages, such as reduced fuel consumption and emissions and the recovery of energy through regenerative braking [5]. Further, scientists have innovated technology for smart and flexible electric vehicles; this new technology is intended to utilize renewable energy and preserve energy resources [6]. These vehicles reduce energy costs [7] and unnecessary emissions from the medium- and heavy-duty sectors [8]. Electric vehicles can also minimize energy consumption, operation costs, and emissions. Electric cars have various characteristics: the electric car is propelled by electric motors and uses energy stored in rechargeable batteries. The energy is drawn from electric cells and converted to power by the electric motors. The battery-electric car utilizes electricity stored in a battery pack to power an electric motor and turn the wheels. Components of a battery-electric car include the electric motor, inverter, battery, control module, and drive train. The operation process does not produce tailpipe pollution, and it is thus considered a renewable energy source. The plug-in hybrid electric car is powered by conventional fuel, alternative fuel, and a rechargeable battery pack. Components of a plug-in hybrid electric car include the electric motor, engine, inverter, battery, fuel tank, control module, and battery charger.
In general, electric cars optimize operation costs and emissions. Considering these characteristics, electric vehicles are trending, and the electric car industry has seen robust growth all over the world. Development of consumer and market segmentation of each country, from past to future, was conducted and analyzed by the criteria importance through inter-criteria correlation (CRITIC method) and GRA method approaches.
The CRITIC method in multi-criteria decision-making (MCDM) calculates the weights of the criteria [9]. Previous studies have used many applications of the CRITIC method, e.g., a study of Spanish savings banks, which were transformed into private capital banks in the future [10]; the index weight of the soft power of 17 cities in Shangdong Province was employed via the CRITIC method [11]; a calculation of weights of the financial ratios of 14 large-scale conglomerates, from 2009 to 2011, was conducted by the CRITIC method [12]; the attribute weights of electric vehicle charging stations were computed by the algorithms of the CRITIC method [13]. Thus, the CRITIC method is a useful tool for determining objective weights which support to calculating the grade and determining the position of each unit via the GRA method. Deng (1982) introduced the grey theory to deal with small samples and poor information [14]. The theory uses system analysis, data processing, modeling, prediction, decision-making, control, and optimization techniques [15]. It also analyzes the original data and searches for intrinsic regularities [16]. The grey method includes five fields, i.e., grey generating, grey relational analysis, grey forecasting, grey decision-making, and grey control [16], whereas, grey forecasting is not the same as other statistical regression models [17]. Other models require data for establishing a prediction model, while grey forecasting utilizes variations within the system, to explore the relations between sequential data, and then conducts the forecasting value.
Many universities offer courses in grey theory, e.g., Nanjing University of Aeronautics and Astronautics, Huazhong University of Science and Technology, Fuzhou University, De Monfort University, Bucharest University of Economics, Kanagawa University, and so on [17]. In addition, researchers typically apply grey system theory to various sectors. For instance, an evaluation of the intranet quality pointed out the important quality attributes via analysis and modeling of the grey system [18]. Estimated values of third-party logistics providers were computed by GM(1,1) [19]. Further, application of the grey relational theory is used in marine economics [20], as well as in the analysis of socioeconomics systems [21]. Finally, the grey system theory explored the future and position of the medical tourism industry [22].
Here, we apply the grey theory system to the electric car industry. Notably, grey forecasting helps to clarify situations [17] that other models cannot. In this research, we present the latest picture of electric car, battery-electric car, and plug-in hybrid electric car sales and market share; grey forecasting is well suited to computing these future values. In addition, the estimated values were checked for accuracy using the mean absolute percentage error. Therefore, the aim of this research was to use GM(1,1) in grey theory for predicting future terms; then, the CRITIC method in MCDM and GRA in grey theory were integrated to calculate grades and rankings of electric car sales in 14 countries, from past to future. The analysis results present the status quo of the electric car segment in 14 countries. The research recommends a foreseen trend of electric car market share for electric automobile manufacturers. This paper comprises five sections. Section 1 gives a general overview of electric vehicles, the CRITIC method, and grey system theory. Section 2 sets up the equations of CRITIC, GM(1,1), and GRA, and then proposes the variables of the objective research. Section 3 computes the empirical results of electric car sales in 14 countries. Section 4 discusses the main findings. Section 5 reviews important points and recommends future research.
Research Framework
The electric car research process is arranged as follows: data collection, construction of the GM(1,1) model, checking accuracy with the mean absolute percentage error (MAPE) indicator, calculating the objective weights via the CRITIC method, and deriving grades and ranks via the GRA method, as shown in Figure 1.
Stage 1. Data on electric car sales and market shares in the 14 countries are collected (Section 2.3).
Stage 2. Given its suitability for short time series, the GM(1,1) model was chosen to forecast future terms. All estimated values must be tested for accuracy via the MAPE. If the MAPE index is unsuitable, the raw data must be reselected or another model used. If the MAPE index meets the requirement, the predicted values are used to compute the grades and ranks of electric cars in the future.
Stage 3. Objective weights for each variable in every year were calculated by the CRITIC method. These values are used to derive the grade and position in the next step.
Stage 4. The GRA method in the grey theory system is applied to calculate the score and determine the ranking of each country.
Data Collection
The study reviewed statistics on new electric car sales and the market share of electric cars in 14 countries during the period 2016-2019. The research chose 14 countries, as shown in Table 1, and four variables: new electric car sales (ECS), new battery-electric car sales (BECS), new plug-in hybrid electric car sales (PHECS), and market share of sales (MSS). ECS, BECS, and PHECS express the total sales of electric cars, battery-electric cars, and plug-in hybrid electric cars, respectively, measured in thousands of vehicles; MSS is the market share of ECS, BECS, and PHECS, measured as a percentage (%). These data were obtained from the IEA [4]. All data were collected as shown in Tables 2-5, and these values were then applied to predict the future, from 2020 to 2023, using GM(1,1). Next, the actual and forecasted values were used to compute the grades and ranks of the 14 countries in every year based on the CRITIC and GRA methods.
CRITIC Method
In this research, the authors used the CRITIC method in MCDM to determine the objective weights of the relative importance of four variables, namely ECS, BECS, PHECS, and MSS. The method is based on an analytical investigation of the evaluation matrix to extract the information contained in the evaluation criteria [23]. The basic characteristic of CRITIC is the contrast intensity of the criteria [9]. The objective weight calculation is built as follows: Step 1. Let n represent the alternatives, h the evaluation criteria, and x_j a given criterion within the system of evaluation criteria.
Step 2. Define a membership function for every criterion of the multi-criteria problem.
Step 3. Convert the initial matrix into a matrix of relative scores; the contribution of each criterion to the decision-making process is measured by the standard deviation of its scores.
Step 4. Construct a measure of the conflict between criteria, based on their pairwise correlations.
Step 5. Calculate the amount of information contained in each criterion by multiplicatively aggregating its contrast intensity and conflict measures.
Step 6. Compute the resulting objective weights by normalizing the information measures.
As a result, the objective weights of every variable (ECS, BECS, PHECS, and MSS) in every year are determined by executing the above steps. The objective weights are acceptable when they range from 0 to 1 [23]. These values are then used in the GRA method; any unacceptable objective weights must be removed and rechecked.
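To make the weighting procedure concrete, the sketch below implements a generic CRITIC calculation in Python (min-max normalization, standard deviation as contrast intensity, correlation-based conflict, and normalized information content). It is an illustration of the general method rather than the authors' own computation, and the small demonstration matrix is hypothetical.

```python
import numpy as np

def critic_weights(X):
    """Objective CRITIC weights for a decision matrix X (alternatives x criteria).

    Assumes all criteria are benefit-type; cost criteria would need an
    inverted normalization.
    """
    X = np.asarray(X, dtype=float)
    # Steps 2-3: min-max normalization to relative scores.
    rng = X.max(axis=0) - X.min(axis=0)
    norm = (X - X.min(axis=0)) / np.where(rng == 0, 1, rng)
    # Contrast intensity: standard deviation of each normalized criterion.
    sigma = norm.std(axis=0, ddof=1)
    # Step 4: conflict between criteria from the correlation matrix.
    corr = np.corrcoef(norm, rowvar=False)
    conflict = (1 - corr).sum(axis=0)
    # Step 5: information content; Step 6: normalize to weights in [0, 1].
    info = sigma * conflict
    return info / info.sum()

# Hypothetical 4-country x 4-criteria matrix (ECS, BECS, PHECS, MSS).
demo = [[2.2, 1.5, 0.7, 4.9],
        [0.36, 0.24, 0.12, 2.1],
        [0.11, 0.07, 0.04, 2.8],
        [0.33, 0.11, 0.22, 2.3]]
print(critic_weights(demo).round(3))
```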
Grey System Theory
Deng (1982) [14] introduced grey system theory, which provides both forecasting based on historical time series and grey relational analysis (GRA) based on the available dataset. Grey system theory includes main components such as system analysis, decision-making, etc., which have been applied in the service industry [24], economics [25], the textile and apparel industry [26], etc. Here, we examine predictions and evaluations in the electric car industry. Grey prediction produces quantitative forecasts on the basis of the GM model. Grey cluster evaluation comprises grey variable weight and cluster evaluation. First, the research uses grey forecasting to predict future data based on the historical time series. Second, the study combines all actual and estimated data and then utilizes grey relational analysis to compute the grades of every nation in the electric car industry.
The GM(1,1)
The authors set up the historical time series of ECS, BECS, PHECS, and MSS as $X^{(0)} = (x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n))$, whose accumulated generating sequence is $X^{(1)} = (x^{(1)}(1), x^{(1)}(2), \ldots, x^{(1)}(n))$ with $x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i)$. The sequence $Z^{(1)} = (z^{(1)}(2), z^{(1)}(3), \ldots, z^{(1)}(n))$ is generated from $X^{(1)}$ by adjacent neighbor means, $z^{(1)}(k) = \tfrac{1}{2}\left(x^{(1)}(k) + x^{(1)}(k-1)\right)$. The values of a and b are estimated from the grey differential equation $x^{(0)}(k) + a\,z^{(1)}(k) = b$, $k = 2, 3, \ldots, n$. The sequence of parameters is expressed in matrix form as $\hat{a} = [a, b]^{T}$, $Y = [x^{(0)}(2), x^{(0)}(3), \ldots, x^{(0)}(n)]^{T}$, and $B = \begin{bmatrix} -z^{(1)}(2) & 1 \\ -z^{(1)}(3) & 1 \\ \vdots & \vdots \\ -z^{(1)}(n) & 1 \end{bmatrix}$. The least-squares estimate of the GM(1,1) parameter sequence must satisfy $\hat{a} = (B^{T}B)^{-1}B^{T}Y$. The whitening equation is $\dfrac{dx^{(1)}}{dt} + a\,x^{(1)} = b$. Let the predicted sequence be $\hat{X}^{(0)} = (\hat{x}^{(0)}(1), \hat{x}^{(0)}(2), \ldots, \hat{x}^{(0)}(n))$; the estimated values are computed as $\hat{x}^{(1)}(k+1) = \left(x^{(0)}(1) - \dfrac{b}{a}\right)e^{-ak} + \dfrac{b}{a}$ and $\hat{x}^{(0)}(k+1) = \hat{x}^{(1)}(k+1) - \hat{x}^{(1)}(k)$. All predicted values must be checked for accuracy using the mean absolute percentage error (MAPE), $\mathrm{MAPE} = \dfrac{1}{n}\sum_{k=1}^{n}\left|\dfrac{x^{(0)}(k) - \hat{x}^{(0)}(k)}{x^{(0)}(k)}\right| \times 100\%$. The MAPE indicator has four classifications [27], namely excellent (smaller than 10%), good (from 10% to 20%), reasonable (from 20% to 50%), and poor (higher than 50%). If the forecasted values of the four variables (ECS, BECS, PHECS, and MSS) in the 14 countries do not satisfy the MAPE condition, they are removed.
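To show how these equations fit together in practice, the following Python sketch fits a GM(1,1) model to a short series and reports the MAPE of the fitted values. It is a minimal reconstruction of the standard model, not the authors' code, and the sample series is hypothetical.

```python
import numpy as np

def gm11_forecast(x0, horizon=4):
    """Fit GM(1,1) to the series x0 and forecast `horizon` extra steps."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                 # accumulated series X^(1)
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # adjacent neighbor means Z^(1)
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]        # least-squares estimate of [a, b]
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
    return x0_hat[:n], x0_hat[n:]                      # fitted values, forecasts

def mape(actual, fitted):
    actual, fitted = np.asarray(actual, float), np.asarray(fitted, float)
    return np.mean(np.abs((actual - fitted) / actual)) * 100

# Hypothetical electric car sales series (thousand vehicles), 2016-2019.
series = [1.4, 2.4, 2.8, 6.3]
fitted, forecast = gm11_forecast(series, horizon=4)
print("MAPE (%):", round(mape(series, fitted), 3))
print("2020-2023 forecast:", forecast.round(2))
```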
Grey Relational Analysis
After all actual and forecasted values of the four variables (ECS, BECS, PHECS, and MSS) in the 14 countries are obtained, these values are used to compute the grade and position via the GRA method as follows. Each sequence $x_{i}(h)$ is first normalized to a grey relational generating value $x^{*}_{i}(h)$. For an upper (larger-the-better) effect measurement, $x^{*}_{i}(h) = \dfrac{x_{i}(h) - \min_{h} x_{i}(h)}{\max_{h} x_{i}(h) - \min_{h} x_{i}(h)}$; for a lower (smaller-the-better) effect measurement, $x^{*}_{i}(h) = \dfrac{\max_{h} x_{i}(h) - x_{i}(h)}{\max_{h} x_{i}(h) - \min_{h} x_{i}(h)}$; a moderate (nominal-the-better) effect measurement normalizes the values toward a target value. The deviation sequence $\Delta_{0i}(h) = \left| x^{*}_{0}(h) - x^{*}_{i}(h) \right|$ measures the distance from the reference sequence $x^{*}_{0}$. The grey relational coefficient is $\gamma\left(x^{*}_{0}(h), x^{*}_{i}(h)\right) = \dfrac{\Delta_{\min} + \xi\,\Delta_{\max}}{\Delta_{0i}(h) + \xi\,\Delta_{\max}}$, where $\xi$ is called the distinguishing coefficient and normally equals 0.5. The grey relational grade is computed as $\gamma_{i} = \sum_{h} w_{h}\,\gamma\left(x^{*}_{0}(h), x^{*}_{i}(h)\right)$, where $w_{h}$ are the criterion weights (here, the CRITIC weights). Based on the grey relational grade $\gamma_{i}$, the rank of each unit in every term is ordered. The grade and rank of electric vehicle sales in the 14 countries, from past to future, are determined by the GRA method.
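The following Python sketch chains the normalization, grey relational coefficient, and weighted grade described above, assuming larger-the-better criteria and an ideal reference sequence. It is a generic illustration rather than the authors' implementation, and the demonstration matrix and weights are hypothetical.

```python
import numpy as np

def grey_relational_grades(X, weights, xi=0.5):
    """Grey relational grades of the rows of X (alternatives x criteria)."""
    X = np.asarray(X, dtype=float)
    # Larger-the-better normalization of each criterion to [0, 1].
    rng = X.max(axis=0) - X.min(axis=0)
    norm = (X - X.min(axis=0)) / np.where(rng == 0, 1, rng)
    # Reference sequence: the ideal alternative (1 for every criterion).
    ref = np.ones(X.shape[1])
    delta = np.abs(ref - norm)                           # deviation sequences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + xi * d_max) / (delta + xi * d_max)  # grey relational coefficients
    return coeff @ np.asarray(weights, dtype=float)      # weighted grades

# Hypothetical data: 4 countries x 4 criteria with CRITIC-style weights.
demo = [[1106.0, 834.0, 272.0, 4.9],
        [2.2, 1.5, 0.7, 1.6],
        [61.0, 42.0, 19.0, 2.8],
        [72.0, 25.0, 47.0, 2.3]]
w = [0.3, 0.25, 0.25, 0.2]
grades = grey_relational_grades(demo, w)
print("ranking (best first):", np.argsort(-grades))
```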
Estimated Values
The research uses the data collected in Section 2.3 to forecast the number of electric car sales and the corresponding market shares in the 14 countries. The forecasted values are obtained by applying the GM(1,1) equations of Section 2.4.1 to the historical series; the ECS variable of Australia (AU) is used below for illustration.
The historical time series of Australian ECS from 2016 to 2019 is arranged into the matrices Y and B defined above, the matrix is transposed accordingly, and the least-squares estimates of a and b are obtained; the predicted values are then computed from the time-response function. The forecasted values for all variables and countries are reported in Tables 6-9.
However, all predicted values must be checked for accuracy via the MAPE indicator, as shown in Equation (14), to remove unsuitable values. Table 10 indicates that the minimum and maximum MAPEs are 1.087% and 25.778%, respectively. According to Lewis [27], these MAPEs fall between the excellent and reasonable classifications; thus, the forecasted values attain an acceptable standard and have high reliability.
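For reference, Lewis's MAPE thresholds quoted above can be encoded as a small helper function; this is only an illustration of the classification rule applied to the reported extreme values, not part of the original analysis.

```python
def lewis_class(mape_percent):
    """Classify a forecast by MAPE using Lewis's thresholds [27]."""
    if mape_percent < 10:
        return "excellent"
    if mape_percent < 20:
        return "good"
    if mape_percent < 50:
        return "reasonable"
    return "poor"

print(lewis_class(1.087), lewis_class(25.778))  # excellent, reasonable
```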
Objective Weights
From the equations in Section 2.1, the objective weights of ECS, BECS, PHECS, and MSS every year are produced, as shown in Table 11. The values in Table 11 express the objective weights of four variables in every term. The objective weights of each variable over the time period of 2016-2023 range from 0.2 to 0.5. According to Diakoulaki et al. [23], these values have good significance; thus, they are suitable for the GRA method.
Performance and Position
With the actual and forecasted data, the study evaluated the performance and rank of each country. First, the CRITIC method was used to estimate the weights of ECS, BECS, PHECS, and MSS.
Observing the grades of each country in every term, as shown in Table 12, the grades of sales and market shares for the electric car industry in the 14 countries show both upward and downward trends. From the grades shown in Table 12, the position of each country in every term is determined, as shown in Figure 2.
Discussion
The electric car industry represents a revolution in power minimization [28], as it substitutes electric energy for gasoline. According to Helmers and Marx, electric vehicles can reduce CO2-equivalent emissions by 80% [29]. Therefore, total sales and market shares of the electric car industry are expanding all over the world. In 2005, the market share of electric cars increased in four countries, i.e., France, Germany, the United Kingdom, and the United States; in recent years, however, the shares have expanded worldwide. According to Smith, the electric car market is expected to reach $1.5 trillion by the year 2025 [30]. The study carries out a prediction of four related electric car sales elements using GM(1,1). The forecasted values of total ECS, BECS, and PHECS, as shown in Figure 3, and the percentage of MSS all show an upward trend; PHECS, in particular, is expected to show the strongest growth in total sales.
Analysis results from the GRA method reveal that the electric car industry is growing. This is recognized as a sustainable condition for both economic growth and pollutant reduction. The main findings reveal that total electric car sales in China are increasing; thus, China consistently ranks first in total sales and market shares in the electric car industry. Several countries, such as FI, NO, Portugal (PT), and Sweden (SE), are often at the bottom of the ranking; these countries should advertise the useful functions of electric cars and promote product strategies to encourage drivers to invest in electric cars.
The rapid development of the electric car industry contributes not only to the economy but also to reducing pollutants. Increasing the total use of electric cars is necessary and meaningful while the industrialization and modernization process is being optimized for rapid growth. To create a cleaner living environment, individuals, organizations, and enterprises should reduce the production and use of gasoline vehicles and replace them with electric vehicles. In general, electric vehicles create substantial economic and environmental value; therefore, governments should introduce policies to promote the production of electric vehicles.
Conclusions
An investigation of the electric car industry in 14 countries was carried out by combining the CRITIC method and grey system theory. Estimated values of the variables ECS, BECS, PHECS, and MSS for each country during the period 2020-2023 were obtained via GM(1,1). CN is expected to increase its total sales of electric cars, battery-electric cars, and plug-in hybrid electric cars over the next four years; the values of ECS, BECS, and PHECS are expected to reach 2956.46, 2285.18, and 669.79 thousand vehicles, respectively.
Next, the analysis results exhibit the grade and position of the electric car industry obtained by integrating the CRITIC and GRA methods. The GRA results indicate that CN maintains high performance and remains the top country for total electric car sales over the whole term. NO ranks lowest, as its total sales are always at the lowest level; this country should promote marketing strategies to upgrade its electric car industry.
Moreover, the analysis results reveal the potential development of the electric car industry in future terms, as both the actual and forecasted data show a continually expanding trend across previous and future terms. Manufacturers that foresee these trends can gain economic advantages. | 2020-12-17T09:10:46.483Z | 2020-12-15T00:00:00.000 | {
"year": 2020,
"sha1": "3828e012542fcc5b7b476209515be29aac24ea1b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2032-6653/11/4/79/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6e6f85b1bdce58fdc1f00319316036665988b7e3",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
258273663 | pes2o/s2orc | v3-fos-license | Harvesting Aurantiochytrium sp. SW1 via Flocculation Using Chitosan: Effects of Flocculation Parameters on Flocculation Efficiency and Zeta Potential
The use of chitosan as a flocculant has become a topic of interest over the years due to its positively charged polymer and biodegradable and non-toxic properties. However, most studies only focus on microalgae and wastewater treatment. This study provides crucial insight into the potential of using chitosan as an organic flocculant to harvest lipid- and docosahexaenoic acid (DHA)-rich Aurantiochytrium sp. SW1 cells by examining the correlation of flocculation parameters (chitosan concentration, molecular weight, medium pH, culture age, and cell density) with the flocculation efficiency and zeta potential of the cells. A strong correlation between the pH and harvesting efficiency was observed as the pH increased from 3, with the optimal flocculation efficiency of >95% achieved at a chitosan concentration of 0.5 g/L at pH 6, where the zeta potential was almost zero (3.26 mV). The culture age and chitosan molecular weight have no effect on the flocculation efficiency, but increasing the cell density decreases the flocculation efficiency. This is the first study to reveal the potential of chitosan to be used as a harvesting alternative for thraustochytrid cells.
Introduction
Long-chain ω-3 polyunsaturated fatty acids (PUFAs) such as docosahexaenoic acid (DHA, C22:6n3) and eicosapentaenoic acid (EPA, C20:5n3) play various important roles in human physiology, especially in the development of neural and retinal tissues [1]. However, humans cannot synthesize these fatty acids due to insufficient levels of elongases and delta-6-desaturases [2]. Although the conversion of alpha-linolenic acid into omega-3 fatty acids such as docosahexaenoic acid occurs in humans, it happens at a very slow rate, and thus DHA must be acquired from dietary sources [3]. Hence, these are considered essential fatty acids. At present, oil extracted from marine fatty fish such as salmon and tuna is the major source of DHA [4]. However, the expanding demand for DHA worldwide is now placing pressure on both fisheries and the fish oil supply, and this supply is expected to be unable to meet the global market demand soon [5]. There are also possible health risks associated with the consumption of fish oils, such as food poisoning and allergies, particularly when there are issues of seawater contamination or toxicity and outbreaks of fish diseases [6]. Considering the disadvantages of fish-oil-based DHA and in order to achieve better commercial gains, efforts have been made to find alternative sustainable sources for omega-3 fatty acid production.
Thraustochytrids such as Aurantiochytrium, Schizochytrium, and Thraustochytrium are marine heterotrophic protists that have shown significant potential to be used for the commercial production of DHA as they are capable of producing up to 35-55% DHA from the total fatty acids (TFAs) as well as other interesting compounds such as carotenoids and squalene [7,8]. The production of DHA from thraustochytrids commonly involves upstream and downstream processing. Upstream processing such as strain development, as well as the development of fermentation processes that induce the cells to produce high biomass and lipid production, has been extensively studied and developed to date [9][10][11][12]. However, less focus has been placed on the development of downstream processing, especially the harvesting and dewatering of the lipid biomass, even though it is one of the major hurdles in lipids and DHA production associated with these microorganisms.
The main issues during downstream processing concerning cell recovery are harvesting efficiency and cost. Various approaches have been utilized for the recovery of microalgal cells (photosynthetic Stramenopiles), such as centrifugation, filtration, gravity sedimentation, flotation, and flocculation [13]. None of these approaches combines all the techno-economic feasibility factors, such as harvesting effectiveness, cost, and energy efficiency. It is reported that the cost of harvesting cell biomass accounts for up to 20-30% of the cost of the entire downstream process [14,15], primarily due to the dewatering step, which involves bulk water removal using a conventional technique such as centrifugation or filtration to separate biomass from the medium. Currently, the majority of thraustochytrid cells are harvested using the centrifugation method. Although effective for harvesting thraustochytrid cells, only up to 80% of the cells can be recovered, and it is also energy- and cost-intensive [16], especially if large-scale cultures are used. A study conducted by Wong et al. [17] on Aurantiochytrium mangrovei MP2 cultures showed that after centrifugation, in addition to the pellet, a thick fraction consisting of vegetative cells containing lipid droplets was also observed in the top fraction of the supernatant. In the same study, the DHA concentrations in the top fraction were significantly higher compared to those in the bottom layer (up to 54.16%). Kim et al. [18] also reported the same phenomenon, where centrifugation of Aurantiochytrium sp. KRS101 cultures at 9000× g for 30 min resulted in two similar fractions, which reduced cell recovery by 12.8-15.4%. Additionally, a similar observation was reported by Patel et al. [19], where significant amounts of Schizochytrium limacinum cells (the harvesting efficiency was not indicated) could not be recovered after centrifugation at 8000 rpm (7881× g). Other methods, such as filtration, are ineffective as the filter paper's pores are often clogged, which slows down the harvesting process [20]. Therefore, it is essential to explore potential approaches that are cost-effective and efficient for the harvesting process.
Alternatively, flocculation has been identified as one of the lowest-cost and most effective methods for harvesting thraustochytrid [18,20] and microalgal cells. Flocculation occurs through the neutralization of the surface charge of the cells, resulting in floc formation and facilitating sedimentation via gravity; it has attracted researchers' attention due to its simplicity and efficiency. Despite these efficiencies, most reported flocculation studies use inorganic flocculants such as aluminum sulphate [Al2(SO4)3] and ferric chloride (FeCl3), which could potentially contaminate the final extracted lipid product [21]. Therefore, organic-based flocculants, such as chitosan, are preferred due to their non-toxic and biodegradable properties [22,23].
Chitosan is a cationic polyelectrolyte that can be obtained from chitin in fungi and the exoskeletons of aquatic life and insects. Many studies have reported chitosan as an effective flocculant for harvesting microalgal cells such as Chlorella sp. and Nannochloropsis sp. [22,[24][25][26]. However, to the best of the authors' knowledge, no study has been conducted using chitosan to harvest thraustochytrid cells. Flocculation efficiency is reported to be dependent on the cell species [27]. This is because different cell species produce different amounts of biomass with different lipid contents. For example, thraustochytrids produce lipids at over 50% (g/g biomass), rendering the cells more challenging to sediment, with biomass concentrations of up to 20 g/L, compared to photosynthetic microalgae cultures where low biomass concentrations (<5 mg/L) are involved. Furthermore, different species also have different surface charge values, and the zeta potential of different cells could be differently affected by various parameters such as the cell density, type of medium used, age of the culture, and culture pH, which will affect the charge neutralization process during flocculation.
Flocculation has the potential to overcome the problems mentioned above by being an alternative harvesting method or easing the burdens of the dewatering step, which will further improve subsequent harvesting methods, such as centrifugation or filtration. A two-step harvesting procedure has been reported to be economically better than the single-step harvesting procedure where it can save costs and increase biomass recovery [28]. For example, flocculation will lower the cost of the subsequent dewatering step by preconcentrating the biomass from the medium and maximizing water removal before the final dewatering by centrifugation can be performed using the concentrated biomass [29]. This was also recommended by Kim et al. [18] where coagulation of thraustochytrid cells prior to dynamic filtration would result in the reduction of dewatering operation time from 180 min to 90 min. Therefore, this work represents the first intensive report focusing on the application of chitosan in harvesting lipids and DHA-rich thraustochytrids. In this study, the effects of flocculation parameters such as the flocculant concentration, flocculant molecular weight, medium pH, culture age, and cell density on the flocculation efficiency of Aurantiochytrium sp. SW1, a Malaysian thraustochytrid capable of producing between 50 and 60% lipids (g/g biomass) containing up to 50% DHA [30], was investigated. Furthermore, the correlation between the zeta potential, which represents the degree of charge neutralization between chitosan and cells' flocculation efficiency, was evaluated.
Effect of Chitosan Concentration
The effect of the chitosan concentration on flocculation efficiency on Aurantiochytrium sp. SW1 cells was investigated by subjecting a 120 h SW1 culture to different concentrations of medium-molecular-weight chitosan (190-310 kDA) at the final pH of 6.6. Figure 1 shows the flocculation efficiency of SW1 cells using different concentrations of medium-molecularweight chitosan.
The highest flocculation efficiency (95.58%) was achieved at a chitosan concentration of 0.5 g/L but decreased to 83.36% as the concentration increased to 1 g/L. The flocculation efficiency dropped sharply, to 33.33% and 18.48%, when more diluted chitosan concentrations were used (0.2 and 0.15 g/L, respectively). This shows that the chitosan concentration affects cell flocculation efficiency, as reported in previous studies [26,31]. It can be seen that a high chitosan concentration can still form large visible flocs but is ineffective because the supernatant remained cloudy, whereas at the optimal concentration (0.5 g/L), the supernatant was clear (Figure 1), which is consistent with the flocculation efficiency. Conversely, at low chitosan concentrations, the flocs formed were fine and hardly visible. The lower harvesting efficiencies observed at the higher concentrations of chitosan were due to the excess chitosan causing restabilization of the cells, which then repel each other and reduce the frequency of floc formation [32]. However, at a low chitosan concentration, the cell charges cannot be completely neutralized by chitosan, leaving the cells in a stable state dispersed throughout the medium. Compared with previous microalgae flocculation studies using chitosan, the chitosan flocculation capacity achieved here (0.03 g chitosan/g biomass) was among the most efficient, relative to reports on the photosynthetic microalgae Chlorella vulgaris and Nannochloropsis salina (Table 1). This is likely because different species have different surface charge values, which affect chitosan's charge neutralization efficiency. To further validate the flocculation efficiency, the zeta potential values were correlated with the flocculation parameters tested (Figure 1).
Flocculation usually occurs when the zeta potential value is near zero due to charge neutralization between the flocculant and cells [35]. At high chitosan concentrations (1 g/L and 0.75 g/L), the zeta potential values of the remaining free cells in the supernatant were 6.79 mV and 7.1 mV, respectively. This was due to the excess chitosan causing charge inversion in the cells, re-stabilization, and thus contributing to the positive zeta potential values. The zeta potential increased to −5.36 mV when the cultures were treated with 0.5 g/L, resulting in a 95.58% flocculation efficiency, in comparison to the controls (−15 mV). The increasing zeta potential value nearing zero indicates efficient neutralization occurred at this optimal chitosan concentration. Conversely, when a low chitosan concentration was used (0.15 g/L), significantly lower flocculation efficiency was observed. This is due to the insufficient chitosan to optimally neutralize cell surface charges, which was proven by the highly negative zeta potential value of the remaining free cells.
Effect of Molecular Weight
The experiments were repeated using high- (310-375 kDa) and low-molecular-weight (50-190 kDa) chitosan over the same concentration range at pH 6.6. Both showed the highest flocculation efficiency of >90% at 0.5 g/L, similar to the optimal concentration achieved with medium-molecular-weight chitosan (Figure 2). Therefore, different chitosan molecular weights showed no significant impact on the flocculation of SW1 cells. This is similar to what was reported by Low and Lau [36], in which the effect of the chitosan molecular weight becomes less significant at the optimum concentration. However, the morphology of the formed flocs showed observable differences: rigid and more compact flocs formed with the application of high- and medium-molecular-weight chitosan, whereas looser and fragile flocs formed from flocculation using low-molecular-weight chitosan (Figure 2). This could possibly be caused by the bridging of microflocs due to the longer chains of high-molecular-weight chitosan, resulting in physical associations between the microflocs, which eventually leads to the formation of macroflocs [37]. Conversely, low-molecular-weight chitosan, with a shorter polymer chain, results in inefficient bridging, hence generating smaller-sized flocs. Therefore, medium-molecular-weight chitosan at a 0.5 g/L concentration was selected for subsequent experiments.
Effect of pH
Experiments were carried out using a 120 h culture of SW1 with chitosan (0.5 g/L) at pH 3, 4, 6, 8, 10, and 12. pH was adjusted via the addition of either acetic acid (1 M) or NaOH (1 M) to achieve the desired pH during flocculation. The results showed that when the pH of the culture was reduced from 8.53 (the initial pH of the culture) to pH 6, the flocculation efficiency reached the maximum level (94.63%) (Figure 3). The flocculation efficiency decreased up to 79.32% as the pH further decreased to pH 3. On the other hand, a rapid drop in the flocculation efficiency was also observed when the pH increased from 6 to 8. The flocculation efficiency remained between 40 and 47% as the pH further increased to 12. At the optimal pH (pH 6), larger flocs with well-defined edges were formed compared to the lower pH, which resulted in smaller-sized flocs (Figure 3). This study shows that pH significantly affects flocs' formation and appearance. Furthermore, as flocculation was performed at a high pH (>pH 8), fine-sized flocs, which were easily broken, formed. This shows that pH significantly affects chitosan flocculation capability, as reported previously [38]. The decreasing flocculation efficiency observed as the pH was reduced to 3 was likely due to the protonated amine group. This was evidently shown by the high positive zeta potential value at low pH 3 and 4 (9.51 and 15.7 mV, respectively) ( Figure 3). Although chitosan is positively charged at a low pH due to the protonation of amine groups, excessive positive charges can cause a repulsion force between destabilized flocs, thus lowering flocculation efficiencies as pH decreases [39]. Conversely, at higher pH, chitosan started to lose its charge due to the deprotonation of the amine group on the polymeric chain. Additionally, a study conducted by Jusoh et al. [40] and Hao et al. [41] on Chlorella vulgaris and Anabaena variabilis cultures, respectively, also reported a similar increase in the negativity of the zeta potential values as pH increased. This causes chitosan to be unable to neutralize the cell's surface charges effectively, as shown by the increasingly negative zeta potential value of up to −24.85 mV at pH 12. At this point, flocculation shows a better harvesting efficiency (96.43%) compared to centrifugation (84.28%) at 12,000× g for 10 min. Harvesting Aurantiochytrium sp. SW1 cells via centrifugation is inefficient due to their energy-intensive and ineffective nature. According to Figure S1, not all cells become pellets, as a fraction of vegetative cells containing lipids still remain at the top or are diffused in the supernatant after centrifugation, which was also reported by other researchers [17][18][19].
However, previous studies claimed that chitosan is more effective for marine microalgae (Nannochloropsis sp.) flocculation at high pH (>pH 7.5), where it precipitates [21]. In contrast to those reported findings, the optimal pH for flocculation of Aurantiochytrium sp. SW1 by chitosan was slightly acidic (pH 6). Nannochloropsis and Aurantiochytrium are both marine microbes, but the salt concentrations of their culture media differ: Nannochloropsis requires >25 g/L of salt, while Aurantiochytrium sp. SW1 requires only 6 g/L. It has been shown, however, that salt concentration does not affect the chitosan flocculation efficiency. A study conducted by Garzon-Sanabria et al. [42] shows that the salt concentration does not affect chitosan flocculation capabilities when tested in low-salt (5 g/L) and high-salt (35 g/L) media. Therefore, this difference is likely caused by differences in the cell surface between species, which affect the flocculation parameters of chitosan.
Culture Age and Cell Density
The effect of culture age and cell density on the flocculation of Aurantiochytrium sp. SW1 by chitosan was investigated using SW1 cultures cultivated for 96 and 120 h, using 0.5 g/L chitosan at pH 6. Both culture ages represent the late stationary and lipid accumulation phases at which most thraustochytrid cultures are harvested. As shown in Figure 4, no statistical differences in flocculation efficiency between the two culture ages were observed when flocculation was carried out at various pH (pH 3 to pH 6).
As an oleaginous microorganism, Aurantiochytrium sp. SW1 begins to accumulate lipids from the excess carbon source after the depletion of nitrogen, from 48 h until 120 h; a high lipid content was observed at 96 h (6.7 g/L) and started to drop due to lipid turnover at 120 h [43]. Lipid turnover is caused by the depletion of all available carbon sources, whereupon oleaginous microorganisms use reserved lipids for growth and biomass production through the β-oxidation process. This indicates that the progression of the cultures from 96 to 120 h, although resulting in significantly different lipid content, does not alter other cell properties that would affect the flocculation mechanisms of chitosan. This parallels the observations of Vu et al. [44], where a less pronounced difference in the harvesting efficiency of Chlorella vulgaris was observed when flocculation using a cationic polyacrylamide polymer (FO3801) was performed on cultures in the late exponential and stationary phases. However, an approximately two-fold decrease in the harvesting efficiency was observed when flocculation was performed on early exponential cultures. This is likely due to the significant differences in cell surface properties that occur during the growth phase and the fewer changes occurring as the cells mature and enter the stationary phase.
When experiments were carried out with different cell densities (0.5×, 1×, 1.5×, 3×, and 5× the initial biomass concentration of 15 g/L), a decreasing trend in flocculation efficiency was observed with increasing cell density (Figure 5). The optimal flocculation efficiency was maintained when the cell density was increased up to 1.5× the initial culture and fell to <90% when the cell density of the cultures was 3× or higher. This is reflected by the zeta potential profiles, which were near zero below a 1.5× cell density and dropped sharply to −14.1 mV as the density increased. Although flocculation still occurred, the drop in the zeta potential value indicates that the available chitosan (0.5 g/L) is insufficient to neutralize most of the cells in the medium as the density increases. These data will help guide the flocculation process during upscaling and for high-cell-density cultures in the future.
Aurantiochytrium sp. SW1 Cultivation
Seawater Nutrient Agar (SNA), which comprises 28 g/L of nutrient agar and 18 g/L of sea salt, was used to maintain the Aurantiochytrium sp. SW1 culture. Stock culture from slant agar was streaked on the SNA plate to obtain single colonies. A strip of agar containing approximately ten single colonies was transferred to a 300 mL seed culture medium (in 500 mL flask) containing 60 g/L of fructose (sterilized and added separately), 2 g/L of yeast extract, 8 g/L of monosodium glutamate (MSG), and 6 g/L of sea salt. This seed culture was incubated in an incubator shaker at 30 °C and 200 rpm for 48 h. Then, a 10% v/v seed culture inoculum was inoculated to 2.7 L production media containing the same medium composition as the seed culture medium. The cultures were incubated in a benchtop bioreactor for 96 and 120 h at 30 °C and 500 rpm [43].
Flocculant Preparation
The bio-flocculant used in this study was dry chitosan powder with three different molecular weights (low molecular weight, medium molecular weight, and high molecular weight) and was purchased from Sigma Aldrich (Darmstadt, Germany). A stock of 5 g/L was prepared by dissolving 1 g chitosan in 200 mL of 1% acetic acid solution under continuous agitation using a magnetic stirrer at 500 rpm for 2 h until most of the chitosan dissolved. This solution was diluted further with deionized water until the desired concentration was obtained.
Flocculation Optimization for Harvesting Aurantiochytrium sp. SW1
Flocculation optimization was performed via the one-factor-at-a-time (OFAT) approach to investigate the effect of the chitosan concentration and pH level on the flocculation efficiency and zeta potential. Briefly, 40 mL of the culture sample was added to chitosan solution at 5, 3.5, 2.5, 1, or 0.75 g/L (for all molecular weights) to achieve final flocculant concentrations of 1, 0.7, 0.5, 0.2, or 0.15 g/L, respectively. The mixture was mixed thoroughly using a magnetic stirrer at high speed (500 rpm) for 1 min and then mixed slowly (100 rpm) for the next 5 min. The mixture was then gently poured into a 50 mL centrifuge tube, and the flocs were allowed to settle for 20 min via gravitational sedimentation. Afterward, the supernatant was taken approximately 2 cm from the surface for the determination of optical density (OD) at 660 nm using a spectrophotometer. The flocculation efficiency (%) was obtained using the formula shown in Equation (1), Flocculation efficiency (%) = [(OD0 − ODi)/OD0] × 100, where OD0 and ODi are defined as the OD values of the initial culture without chitosan treatment (negative control) and of the supernatant after flocculation, respectively [45]. After the optimal chitosan concentration and molecular weight were obtained, the pH optimization experiment was performed. Cultures of Aurantiochytrium sp. SW1 of different ages (96 and 120 h) were used to further investigate the effect of culture age on flocculation efficiency. The final culture pH was adjusted to pH 3, 4, 6, 8, 10, and 12 after the addition of the flocculant. The flocculation experiment then proceeded as described above. Finally, to investigate the upscaling potential, the flocculation experiment was performed on cultures of different cell densities using the optimal parameters obtained previously.
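As a simple numerical illustration of Equation (1), the sketch below computes flocculation efficiency from optical density readings; the function merely restates the formula above, and the OD660 values used are hypothetical.

```python
def flocculation_efficiency(od_control, od_supernatant):
    """Flocculation efficiency (%) from OD660 of the untreated control (OD0)
    and of the supernatant after flocculation and settling (ODi)."""
    return (od_control - od_supernatant) / od_control * 100

# Hypothetical OD660 readings for a dose series of chitosan.
od0 = 1.80
for dose, odi in [(0.15, 1.47), (0.5, 0.08), (1.0, 0.30)]:
    print(f"{dose} g/L chitosan -> {flocculation_efficiency(od0, odi):.1f}% efficiency")
```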
Zeta Potential Analysis
Zeta potential (ζ) was measured using a Malvern Zetasizer Nano-ZS (Worcester, United Kingdom, model ZEN3500). The supernatant was collected at a depth of 2 cm below the surface after the flocs had settled for 20 min in order to determine the zeta potential of the remaining free cells. In addition, the Aurantiochytrium sp. SW1 culture was also sampled for zeta potential measurements to compare the degree of charge neutralization after flocculation.
Statistical Analysis
An analysis of variance (one-way ANOVA) was performed, followed by Tukey's multiple comparison test, to determine the significance of differences between flocculation efficiencies using GraphPad Prism 9 (GraphPad Software, San Diego, CA, USA). The significance level was set at p < 0.05, all experiments were performed in triplicate (n = 3), and values are reported as mean ± standard deviation.
Conclusions
This study showed that Aurantiochytrium sp. SW1 can be flocculated efficiently by chitosan. The highest flocculation efficiency (>95%) was achieved using 0.5 g/L of chitosan at pH 6. Of all the parameters tested, the chitosan concentration, pH, and cell density significantly affected flocculation efficiency. This study also demonstrates the correlation between flocculation efficiency and zeta potential measurements, showing that a high flocculation efficiency corresponds to near-zero zeta potential values. Flocculation could provide an efficient and cheap early dewatering step in conjunction with other mechanical procedures, although other operational and cost-affecting parameters need to be considered. It will become more beneficial for large-scale fermentation (e.g., 1000 L), in which a high volume of water needs to be removed. This research was performed using the one-factor-at-a-time (OFAT) approach; thus, the flocculation parameters can be further optimized to achieve a more cost-effective and efficient Aurantiochytrium sp. SW1 flocculation. | 2023-04-28T06:17:50.612Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "46f3d7dc144b1780c653bdd35bedbc99c991df92",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-3397/21/4/251/pdf?version=1681910427",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1eb41cb18f2afa43eaedd5a953689e51c30b61b",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
30014667 | pes2o/s2orc | v3-fos-license | Cdc37 Regulates Ryk Signaling by Stabilizing the Cleaved Ryk Intracellular Domain
Ryk is a Wnt receptor that plays an important role in neurogenesis, neurite outgrowth, and axon guidance. We have reported that the Ryk receptor is cleaved by γ-secretase and that its intracellular domain (ICD) translocates to the nucleus upon Wnt stimulation. Cleavage of Ryk and its ICD is important for the function of Ryk in neurogenesis. However, the question of how the Ryk ICD is stabilized and translocated into the nucleus remains unanswered. Here, we show that the Ryk ICD undergoes ubiquitination and proteasomal degradation. We have identified Cdc37, a subunit of the molecular chaperone Hsp90 complex, as a Ryk ICD-interacting protein that inhibits proteasomal degradation of the Ryk ICD. Overexpression of Cdc37 increases Ryk ICD levels and promotes its nuclear localization, whereas Cdc37 knockdown reduces Ryk ICD stability. Furthermore, we have discovered that the Cdc37-Ryk ICD complex is disrupted during neural differentiation of embryonic stem cells, resulting in Ryk ICD degradation. These results suggest that Cdc37 plays an essential role in regulating Ryk ICD stability and therefore in Ryk-mediated signal transduction.
Wnt signaling plays an essential role in several developmental processes, including cell proliferation, cell migration, and cell fate determination (1)(2)(3). Wnt signaling is mediated by various receptors that activate different signal transduction pathways. One of these receptor families includes seven-transmembrane Frizzled proteins that, along with their coreceptor low density lipoprotein receptor-related protein (LRP), control β-catenin-dependent signaling through activation of Tcf/Lef transcription factors. Frizzled proteins also activate β-catenin-independent signaling, referred to as the planar cell polarity and calcium pathways (4,5). Wnt proteins also activate a different type of signaling via the Ryk receptor (6,7).
Ryk, whose structure is related to that of receptor protein-tyrosine kinases (RTKs), consists of a glycosylated extracellular domain, a transmembrane domain, and an intracellular kinase domain. The extracellular domain exhibits sequence homology to Wnt inhibitory factor, suggesting that it binds to secreted Wnt growth factors (8). Indeed, recent studies show that Wnt1, -3, -3a, and -5a bind directly to the Ryk extracellular domain and suggest a possible role for Ryk as a Wnt receptor in several developmental processes, including neurite outgrowth, cell fate determination, organogenesis, and axon guidance (9-13). The Ryk intracellular domain (ICD) contains 11 distinct subdomains that are highly conserved within the kinase domain of RTKs. Despite this, Ryk belongs to a subclass of catalytically inactive RTKs, which includes CCK4, ErbB3, EphB6, and Ror1 (14). A comparison of Ryk intracellular domains from several species with catalytically active RTKs shows amino acid substitutions in subdomains I and II, suggesting a loss of function at the ATP binding site. Subdomain VII also displays a highly unusual amino acid substitution in the catalytic loop, which may also account for loss of catalytic activity (14,15). Thus, the mechanism by which the Ryk receptor transduces signals is unknown. One possibility is that Ryk signals via heterodimerization with other RTKs. This hypothesis is supported by studies showing that Ryk-deficient mice exhibit a cleft palate and defects in craniofacial morphology, similar to the phenotypes exhibited by EphB2/EphB3-deficient mice, and that Ryk binds to EphB2 and EphB3 (16). Recently, we discovered that the Ryk receptor undergoes intramembrane proteolytic cleavage to directly transduce intracellular signaling and that this event is required for neurogenesis (17).
Ryk cleavage is mediated by γ-secretase (17), releasing the ICD into the cytoplasm where it translocates to the nucleus and likely regulates transcription of target genes in a manner similar to the ICDs of Notch and ErbB4 (18-20). γ-Secretase can also facilitate degradation of other transmembrane proteins (18), and in some cases, the ICDs of some γ-secretase substrates such as syndecan-3, nectin-1α, and p75 are rapidly degraded. For example, in the absence of Notch binding, the cleaved ICD of the Notch ligand Delta is degraded by the proteasomal machinery (18). All of these observations suggest that the Ryk ICD may require stabilization to transmit Ryk signaling.
Here, we show that Cdc37 binds to the Ryk ICD, promoting stabilization of the ICD fragment and nuclear translocation.
We also show that the Ryk ICD is degraded following disruption of the Cdc37-Ryk ICD complex during the course of neural differentiation of embryonic stem cells (ESCs). These results suggest a biological role for Cdc37-mediated stabilization of the Ryk ICD in Ryk signaling.
EXPERIMENTAL PROCEDURES
Derivation of Ryk ESC Lines and Cell Cultures-Ryk+/- mice were kindly provided by Dr. Steven A. Stacker (Ludwig Institute of Royal Melbourne Hospital). To prepare Ryk-/- ESCs, blastocysts were flushed from the uteri of pregnant females 3.5 days after mating a Ryk+/- breeder pair, and cells were explanted on feeder layers of irradiated mouse embryonic fibroblasts in growth medium containing 80% Knockout™ Dulbecco's modified Eagle's medium (Invitrogen), 15% Knockout™ serum replacement (Invitrogen), 5% fetal bovine serum (ESC-qualified, HyClone), 2 mM L-glutamine, 1× nonessential amino acids, 1× penicillin/streptomycin (Invitrogen), 1000 units/ml mouse leukemia inhibitory factor (LIF; Millipore), and 0.1 mM 2-mercaptoethanol (Invitrogen). After 3 days, most of the blastocysts had attached to the mouse embryonic fibroblasts, and the culture medium was changed; the medium was again changed every other day until cellular outgrowths were removed and dissociated. Between days 7 and 10 of co-culture, inner cell mass outgrowths were plucked from the mouse embryonic fibroblast feeder layer and dissociated with trypsin/EDTA. Dissociated cells were plated on fresh mouse embryonic fibroblasts in 24-well plates. The medium was changed daily, and cultures were carefully monitored. ESC colonies became visible within a week and were passaged. Colonies were further expanded by enzymatic dissociation with trypsin/EDTA and genotyped using PCR. To obtain feeder-free ESCs, all ESC lines were cultured in Glasgow minimum essential medium (Sigma) containing 15% fetal bovine serum (Invitrogen), 0.1 mM 2-mercaptoethanol, 1000 units/ml LIF, and nonessential amino acids on gelatin-coated tissue culture dishes (26). Embryoid bodies in the absence of LIF were grown with or without 1 µM all-trans-retinoic acid (Sigma) to promote differentiation. Differentiation of ESCs into neural progenitors on monolayer cultures was performed as described previously (27).
Construction of Plasmids and Viruses-Plasmids encoding FLAG-Cdc37 and Cdc37ΔCT were kindly provided by Dr. Robert L. Matts (Oklahoma State University). Plasmids encoding the Ryk ICD and deletion mutants Δ319-366, Δ367-438, Δ439-493, and Δ494-579 were generated by PCR using wild-type Ryk as a template, whereas an oligonucleotide was used to create a C-terminal Myc tag. To generate enhanced GFP fusions, a DNA fragment encoding enhanced GFP was inserted at the N terminus into the plasmids encoding Ryk ICDs. To produce the recombinant Ryk ICD protein (amino acids 367-579) fused to glutathione S-transferase in Escherichia coli, the PCR product encoding the Ryk ICD was inserted into pGEX4T. To create a tetracycline-inducible system in a lentiviral expression system, the digested rtTA2S-M2 fragment (28) was inserted into pFUIPW, which contains an internal ribosomal entry site followed by the puromycin resistance gene (FUrtTAIPW). The ubiquitin promoter of pFUW was replaced with a tetracycline-responsive element containing a cytomegalovirus minimal promoter to construct pFTREW. Green fluorescent protein (GFP) alone, GFP-ICD, and GFP-ICDΔ494-579 were subcloned into pFTREW. To obtain constructs for tamoxifen-inducible Cdc37 shRNA, the lox recombinant plasmid was generated as described previously (29). To express multiple proteins from a single promoter, a cDNA encoding Zeocin, the foot-and-mouth disease virus peptide (30), and mCherry were assembled by PCR and subcloned downstream of the ubiquitin promoter between the loxP sites. To regulate recombination by tamoxifen, a fragment encoding Cre-ERT2 was inserted into pFUW. Five different constructs (Open Biosystems) were tested for Cdc37 shRNA activity, and three positive Cdc37 shRNAs were selected and used in experiments. Oligonucleotides encoding these sequences were annealed and inserted into a restriction site downstream of the second loxP site. Lentivirus was generated as described previously (31), and titers were determined by GFP expression after serial dilution.
Cell Cultures-Doxycycline-inducible stable ESCs were generated using lentivirus expressing FUrtTAIPW or FTREW containing GFP, GFP-ICD, and GFP-ICDΔ494-579 and cultivated with 1 µg/ml puromycin. A transduction efficiency of 100% was confirmed by fluorescence microscopy in doxycycline-treated cells. To obtain a tamoxifen-inducible Cdc37-knockdown ESC line, the lox recombinant plasmid containing Cdc37 shRNA and Cre-ERT2 was introduced into Ryk+/+, Ryk-/- GFP, and GFP-ICD ESCs by lentiviral infection, and ESCs were cultured with 20 µg/ml Zeocin for 6 days. To induce Cdc37 shRNA expression, cells were treated with tamoxifen and analyzed by Western blotting with anti-Cdc37 antibodies and fluorescence microscopy after two or three passages. 293T cells were cultured in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum and 100 µg/ml penicillin-streptomycin and transiently transfected with the indicated plasmids using the calcium phosphate precipitation method.
RESULTS

Proteasomal Degradation of the Ryk ICD Prevents Nuclear Localization-We evaluated the stability of the cleaved Ryk ICD using overexpression assays in HEK293T cells. To examine levels of the cleaved C-terminal fragment in cells, a full-length Ryk or Ryk ICD construct with a C-terminal Myc tag was transiently transfected into HEK293T cells, and cells were treated with or without the proteasomal inhibitors MG132 or lactacystin for 6 h. Treatment with MG132 increased levels of the ~42-kDa form of the Ryk ICD protein (Fig. 1A). Western blot analysis of the cytosolic and nuclear extracts also showed increased levels of the Ryk ICD in cells treated with lactacystin compared with dimethyl sulfoxide-treated control cells (Fig. 1B), suggesting that the Ryk ICD undergoes proteasomal degradation. When proteasomal degradation is inhibited, the Ryk ICD is detected in the nucleus. To determine whether the Ryk ICD is ubiquitinated prior to degradation, Ryk ICD-Myc immunoprecipitates from cytosolic extracts were subjected to Western blotting with the anti-ubiquitin antibody. A high molecular mass form of the ubiquitinated Ryk ICD was detected in lactacystin-treated 293T cells expressing wild-type Ryk-Myc (Fig. 1C), indicating that cytoplasmic levels of the Ryk ICD may be down-regulated through ubiquitin-dependent proteasome-mediated degradation. To confirm that the smearing high molecular mass bands represented the ubiquitinated Ryk ICD, we transfected 293T cells with constructs encoding HA-ubiquitin and the Ryk ICD-Myc. These cells were treated with or without the proteasomal inhibitor MG132 for 6 h. As expected, smearing high molecular mass protein bands containing HA-ubiquitin were detected in the Ryk ICD-Myc immunoprecipitates (Fig. 1D). Treatment with MG132 increased the amount of high molecular mass smearing. In the anti-HA immunoprecipitate, Western blotting using the anti-Myc antibody suggests that the smearing high molecular mass proteins are the Myc-tagged Ryk ICD. Together, these results support the idea that the Ryk ICD is polyubiquitinated.
Ryk Associates with Cdc37-Hsp90-Next, we undertook a yeast two-hybrid screen to identify potential interacting proteins with the Ryk ICD and identified the co-chaperone protein Cdc37 (data not shown). This interaction was confirmed in 293T cells transiently transfected with FLAG-Cdc37 and full-length Ryk-Myc (Fig. 2A). Both the full-length and cleaved Ryk ICD were detected in FLAG-Cdc37 immunoprecipitates. Furthermore, immunoprecipitation analysis following subcellular fractionation showed the presence of full-length Ryk in the membrane and the ICD in the cytoplasm. Both of these proteins bind to Cdc37 (Fig. 2B). Detection of the Ryk ICD in FLAG-Cdc37 immunoprecipitates suggests that the Ryk ICD mediates the association with Cdc37. This was confirmed when Cdc37 was detected in the anti-Myc immunoprecipitate from 293T cells transfected with plasmids encoding the C-terminal Myc-tagged ICD of Ryk and FLAG-Cdc37 (Fig. 2C).

FIGURE 1. A, MG132 treatment increases Ryk ICD protein levels. 293T cells were transfected with a plasmid encoding C-terminal Myc-tagged wild-type Ryk or an empty control vector. MG132 was added 6 h prior to lysis, and lysates were analyzed by Western blotting with an anti-Myc antibody. A similar experiment using 293T cells transfected with a Myc-tagged Ryk ICD construct or a control vector was also performed (right panel). B, nuclear localization of the Ryk ICD increases in cells treated with the proteasome inhibitor lactacystin. Nuclear, cytosolic, and membrane extracts of cells transfected with Ryk constructs or control plasmids were subjected to Western blotting using an anti-Myc antibody for full-length (FL) Ryk and Ryk ICD proteins. Lamin A/C, actin, and E-cadherin were used as protein markers for nuclear, cytosolic, and membrane fractions, respectively. C, the Ryk ICD is ubiquitinated. Cells transfected with Ryk-Myc and GFP were treated with the proteasome inhibitor lactacystin. Cytosolic extracts were immunoprecipitated (IP) using an anti-Myc antibody and analyzed by Western blotting using antibodies against ubiquitin (Ub) and Myc. Antibodies against GFP and actin served as controls. D, 293T cells were transfected with HA-ubiquitin and Ryk ICD-Myc or empty vector and then treated with MG132 for 6 h prior to lysis. Cell lysates were subjected to immunoprecipitation using anti-HA or anti-Myc antibodies, followed by Western blotting (IB) using anti-Myc or anti-HA antibodies.
Cdc37 interacts directly with the kinase domain of several protein kinases, including Raf-1, B-Raf, Akt1, and Cdk4, via its N terminus to induce their activation and stability, as well as with the Hsp90 chaperone via its C terminus (21,25,32,33). We used a deletion mutant series to identify the Cdc37-binding site in the Ryk ICD kinase domain. Immunoprecipitation using anti-FLAG and anti-Myc antibodies showed that the ICDΔ367-438 (lacking subdomains III-V), Δ439-493 (lacking subdomains VI and VII), and Δ494-579 (lacking subdomains VII-XI) mutants did not bind to FLAG-Cdc37, whereas the wild-type Ryk ICD-Myc and ICDΔ319-366-Myc (lacking subdomains I and II) did (Fig. 2D), demonstrating that Ryk ICD subdomains III-XI (amino acids 367-579) in the kinase domain, but not those in the N terminus, are required for Cdc37 binding. This observation was verified by showing that glutathione S-transferase fusions of HA-tagged subdomains III-XI expressed in E. coli (Fig. 2E, top left panel) are sufficient for Cdc37 binding (Fig. 2E). A FLAG-Cdc37 protein was purified from 293T cells transiently transfected with FLAG-Cdc37 using anti-FLAG immunoprecipitation followed by elution with FLAG peptide (Fig. 2E, top right panel). To map the Cdc37 domain binding to Ryk, a plasmid encoding FLAG-tagged Cdc37 deleted at the C terminus (FLAG-Cdc37ΔCT) was transfected into 293T cells expressing wild-type Ryk-Myc. Ryk was detectable in immunoprecipitates derived from cells transfected with full-length FLAG-Cdc37 but not with FLAG-Cdc37ΔCT (Fig. 2F, IP: FLAG), and the findings were corroborated by an inverse immunoprecipitation using Ryk-Myc (IP: Myc). These results indicate that the C terminus of Cdc37 is required for the Cdc37/Ryk interaction.
Both Stability and Nuclear Localization of Ryk ICD Require Interaction with Cdc37-The observation that the Ryk ICD binds to Cdc37 in the cytoplasm suggests that Ryk ICD stability is regulated by Cdc37, a hypothesis supported by evidence that proteasomal degradation of Cdk4 is prevented by binding to Cdc37-Hsp90 (24). To determine whether Cdc37 binding alters Ryk ICD stability, we cotransfected 293T cells with plasmids encoding wild-type Ryk-Myc or the mutant RykΔ494-579-Myc protein, which cannot bind Cdc37, together with FLAG-Cdc37 or control vector (Fig. 3A). The level of the wild-type Ryk ICD increased in Cdc37-expressing cells compared with mock-transfected cells. In contrast, expression levels of Ryk Δ494-579 were not altered by Cdc37 overexpression. Treatment of cells with MG132 increased the level of both the wild-type and mutant Ryk ICD but had no effect on the increased levels of the wild-type Ryk ICD seen in the presence of overexpressed Cdc37. These data strongly suggest that Cdc37 protects the Ryk ICD from proteasomal degradation. Furthermore, Western analysis of nuclear extracts showed increased nuclear localization of the Ryk ICD in FLAG-Cdc37-transfected cells (Fig. 3B). Consistent with this result, GFP-tagged ICDΔ367-438, ICDΔ439-493, and ICDΔ494-579 mutant proteins were localized to the cytoplasm, whereas GFP-wild-type ICD and GFP-ICDΔ319-366 proteins, which are capable of binding Cdc37, were detected in the nucleus and cytoplasm (Fig. 3C). To determine whether Hsp90 regulates Ryk ICD stability, 17-AGG, which inhibits Hsp90 chaperone function and disrupts formation of the Hsp90-Cdc37 complex (14), was added to 293T cells transiently transfected with wild-type or mutant (Δ494-579) Ryk-Myc plus FLAG-Cdc37, or to mock-transfected cells. 17-AGG treatment significantly blocked, in a dose-dependent manner, the increase in levels of the wild-type Ryk ICD protein seen following overexpression of FLAG-Cdc37 (Fig. 3D, top panels). In contrast, no significant change in the level of the Ryk ICDΔ494-579 protein was detected following 17-AGG treatment. These observations suggest that Hsp90 is directly related to the stability of the Ryk ICD. To determine whether 17-AGG-mediated reduction of Ryk ICD levels is caused by ubiquitin-mediated proteasomal degradation, 293T cells cotransfected with HA-ubiquitin and Ryk ICD-Myc were treated with 17-AGG in the presence of MG132. Ubiquitination of the Ryk ICD was determined by immunoprecipitation using anti-Myc antibody and Western blot analysis using anti-HA antibody. Ubiquitination of the Ryk ICD increased in a 17-AGG dose-dependent manner. These data indicate that inhibition of Hsp90 induces ubiquitination of the Ryk ICD as well as a decrease in Ryk ICD levels.

FIGURE 3. Cdc37 increases protein levels and nuclear localization of the Ryk ICD, and an Hsp90 inhibitor inhibits the activity of Cdc37 on the Ryk ICD. A, expression of Cdc37-FLAG increases Ryk ICD protein levels, whereas it has no effect on Ryk ICDΔ494-579, which does not bind to Cdc37. Addition of MG132 (MG) 6 h prior to cell lysis prevents degradation of both wild-type Ryk and Ryk ICDΔ494-579. 293T cells were transfected with FLAG-Cdc37, wild-type Ryk, or Ryk ICDΔ494-579-Myc plus GFP. DMSO, dimethyl sulfoxide. B, Cdc37 enhances nuclear localization of the Ryk ICD. Subcellular fractions from 293T cells transiently expressing Ryk ICD-Myc alone or together with Cdc37-FLAG were analyzed by Western blotting using anti-Myc, anti-FLAG, anti-Hsp90, and anti-lamin A/C antibodies. Detection of lamin A/C confirmed the purity of nuclear fractionation. C, the Ryk ICD and ICDΔ319-366 are localized in the nucleus and cytoplasm, whereas Ryk ICD mutants that do not bind to Cdc37 (see Fig. 2D) remain cytoplasmic. The Ryk ICD and its mutants were fused with GFP. Nuclei were visualized using mCherry-tagged histone H2B. EGFP, enhanced green fluorescent protein. D, treatment with the Hsp90 inhibitor 17-AGG decreases the levels of the wild-type Ryk ICD but not Ryk ICDΔ494-579. Cells were transiently transfected with wild-type Ryk-Myc alone or together with Cdc37-FLAG or with Ryk ICDΔ494-579-Myc plus Cdc37-FLAG. Cells were then treated with the indicated concentration of 17-AGG for 24 h prior to lysis. ICD levels were analyzed by Western blotting of samples from whole cell lysates (top panels) and subcellular fractions (bottom panels). 17-AGG treatment had no effect on expression levels of Cdc37, Hsp90, and GFP. E, treatment with the Hsp90 inhibitor 17-AGG induces ubiquitination of the Ryk ICD. 293T cells transfected with HA-ubiquitin (Ub) and/or Ryk ICD-Myc were treated for 6 h with MG132 and different doses of 17-AGG. Immunoprecipitation (IP) was performed using anti-Myc antibody followed by Western blotting (IB) using anti-HA antibody.
Disruption of Ryk Binding to Cdc37 Reduces Ryk ICD Levels during Neural Differentiation of Embryonic Stem Cells-We demonstrated recently that Ryk ICD levels are higher in differentiating neurons than in neural progenitor cells isolated from mouse embryo cortices (17). Although generation of the Ryk ICD protein likely requires intramembrane cleavage, the observation that the Ryk ICD binds to Cdc37-Hsp90 raises the possibility that the cleaved Ryk ICD may be unstable and rapidly degraded in neural progenitors. To examine this possibility, we used mouse ESCs, which can be differentiated into neural progenitors. To evaluate a functional link between Ryk ICD stability and Cdc37-Hsp90 during neural differentiation, we initially analyzed levels of the Ryk ICD in Ryk+/+ ESCs and in Ryk-Myc-expressing Ryk-/- ESCs, which were generated by transduction of lentivirus expressing Ryk-Myc into Ryk-/- ESC lines derived from Ryk-/- blastocysts. We found decreased levels of the Ryk ICD when cells were cultured under monolayer neural differentiation conditions for 6 days, compared with those cultured as undifferentiated ESCs in LIF-containing medium (Fig. 4A, Un). Under these differentiation conditions, the neural progenitor markers Sox1 and nestin were detectable, but the immature neuron marker TUJ1 and the mature neuron marker MAP2 (data not shown; supplemental figure) were not, indicating differentiation of ESCs into neural progenitor cells. To address the stability of the intracellular Ryk ICD, we employed doxycycline-inducible expression of GFP-fused forms of the ICD and monitored GFP, GFP-ICD, and GFP-ICDΔ494-579 signals induced by doxycycline treatment in Ryk-/- ESCs during neural differentiation using fluorescence microscopy. Fluorescent signals were noticeably decreased in both Ryk-/- GFP-ICD and GFP-ICDΔ494-579 ESCs undergoing differentiation compared with undifferentiated cells (Fig. 4B, top panels). In contrast, Ryk-/- GFP ESCs showed a similar GFP signal intensity when cultured under stem cell maintenance or differentiation conditions. We next evaluated ICD stability during differentiation using Western blot analysis. Specifically, GFP-ICD or GFP-ICDΔ494-579 was induced with doxycycline for 2 h, and then, following doxycycline withdrawal, GFP-ICD levels were measured at time intervals in cells cultured under either stem cell maintenance or differentiation conditions (Fig. 4C). Peak levels of GFP-ICD, first detected at 60 h, were reduced 18-fold at 96 h in ESCs cultured under differentiation conditions compared with ESCs cultured in the presence of LIF. Furthermore, GFP-ICDΔ494-579 levels were not detectable at 72 or 96 h. These results verify the hypothesis that the Ryk ICD is rapidly degraded during differentiation into neural progenitor cells.
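To make the fold-change figures above concrete, band intensities from a Western blot time course are typically quantified by densitometry and normalized to a loading control before ratios are taken. The short sketch below is illustrative only: the intensity values and the use of actin as the loading control are assumptions for the example, not measurements from this study.

```python
# Minimal sketch: estimate a fold change in GFP-ICD levels from densitometry,
# normalizing each band to its loading control (assumed here to be actin).
# All numbers are hypothetical placeholders, not measurements from this study.

def normalized_signal(band_intensity: float, loading_intensity: float) -> float:
    """Return band intensity normalized to the loading-control intensity."""
    return band_intensity / loading_intensity

# Hypothetical 96-h densitometry values (arbitrary units).
lif_gfp_icd, lif_actin = 18500.0, 10200.0    # stem-cell maintenance (+LIF)
diff_gfp_icd, diff_actin = 1050.0, 10300.0   # neural differentiation (-LIF)

lif_level = normalized_signal(lif_gfp_icd, lif_actin)
diff_level = normalized_signal(diff_gfp_icd, diff_actin)

fold_reduction = lif_level / diff_level
print(f"GFP-ICD is reduced ~{fold_reduction:.1f}-fold under differentiation conditions")
```

With these placeholder numbers the script prints a reduction of roughly 18-fold, mirroring the comparison reported for GFP-ICD at 96 h.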
We next examined the Ryk/Cdc37 interaction during neural differentiation of ESCs (Fig. 5A). Cdc37 levels remained constant during the course of differentiation. However, endogenous Cdc37 was detected in GFP-ICD immunoprecipitates only in undifferentiated cultures. Immunoprecipitates of GFP-ICDΔ494-579 revealed no binding to Cdc37 in lysates from cells cultured under either stem cell maintenance or differentiation conditions, consistent with the requirement of amino acids 494-579 of the Ryk ICD for Cdc37 binding. To further analyze the dynamics of the Ryk ICD/Cdc37 interaction during differentiation, Ryk GFP-ICD expression was induced by doxycycline for 12 h at different stages of differentiation, and the Cdc37/GFP-ICD association was determined by immunoprecipitation. The GFP-ICD/Cdc37 interaction was reduced during ESC differentiation, as evidenced by reduced Cdc37 levels in immunoprecipitated samples at 84-96 h compared with those at 0-12 and 36-48 h (Fig. 5B, compare lane 3 with lanes 1 and 2). To confirm that the reduction in ICD levels is dependent upon Hsp90 during ESC differentiation, 17-AGG was added to cultures after induction of GFP-ICD levels by doxycycline at different times during differentiation. The GFP-ICD protein was undetectable after a 24-h exposure to 17-AGG (Fig. 5C).
To determine whether Cdc37 regulates Ryk ICD stability in ESCs, we generated a Cdc37 knockdown in ESCs using tamoxifen-induced recombination. In this modified inducible RNA interference system, a DNA fragment encoding fusions of Zeocin, retroviral 2A protein, and mCherry, flanked by loxP sites, is inserted between the U6 promoter and the shRNA targeting Cdc37, and a Cre-estrogen receptor fusion protein is expressed from a separate vector. Both lentiviral vectors are then transduced into ESCs to make a Zeocin-resistant cell line. After tamoxifen-induced recombination, Zeocin-2A-mCherry was floxed out, and expression of Cdc37 shRNA was driven by the U6 promoter (Fig. 5D, top panel). Loss of the mCherry fluorescence signal indicated that a complete recombination event had occurred (Fig. 5D). Levels of Cdc37 were noticeably reduced in Ryk+/+, Ryk-/- GFP, and Ryk-/- GFP-ICD ESC lines (Fig. 5D). When protein expression was induced by doxycycline in Ryk-/- ESC lines, GFP-ICD levels were significantly reduced in the tamoxifen-induced Cdc37 knockdown compared with control ESCs, whereas the level of GFP alone was not affected by Cdc37 knockdown (Fig. 5E). Taken together, these results strongly suggest that Ryk ICD stabilization is regulated through Cdc37-Hsp90 chaperone activity during neural differentiation of ESCs.
DISCUSSION
Intramembrane proteolysis of a single-pass transmembrane receptor can modulate signaling in one of two ways: by regulating the respective signaling pathway via the cleaved ICD or by degradation of cleavage products (18,34). Our previous data have shown that the receptor Ryk undergoes intramembrane proteolysis, resulting in the generation of a cleavage product, the Ryk ICD (17). The cleaved Ryk ICD exhibits nuclear localization in differentiating premature neurons both in vitro and in vivo. The Ryk-deficient mouse exhibits defects in neuronal differentiation during neurogenesis. These results can be recapitulated in differentiation of ESCs. When Ryk+/+ and Ryk-/- ESCs underwent neural differentiation, no significant difference in the number of nestin-positive cells was observed between the Ryk+/+ and Ryk-/- cell cultures, whereas the number of TUJ1-positive neurons was reduced in the Ryk-/- culture (supplemental figure). We have demonstrated that the cleavage of Ryk and nuclear signaling of the Ryk ICD are critical for neurogenesis (17). Therefore, regulation of Ryk ICD stability in Ryk signaling, such as increased stability in differentiating premature neurons and decreased stability in neural stem cells, may be important for neurogenesis.
We showed that the cleaved Ryk ICD undergoes ubiquitination and proteasomal degradation in 293T cells and that the Ryk ICD is rapidly degraded during differentiation of ESCs into neural progenitor cells. This implies that the stability of the cleaved ICD is down-regulated to prevent activation of Ryk signaling in neural progenitor cells. Conversely, increased stability of the Ryk ICD may activate Ryk signaling as cells differentiate from neural progenitor cells into neurons. This is supported by the observation that overexpression of the Ryk ICD increases the number of neurons differentiating from neural progenitor cells (17).
We identified Cdc37 as a Ryk ICD-interacting protein. Cdc37, an Hsp90 co-chaperone, is implicated in stabilization and/or activation of Hsp90 client kinases, including Cdk4 and Raf, which function in cell cycle control, development, and transcriptional regulation (23,35,36). Given that Cdc37 knockdown or treatment with 17-AGG, which disrupts Hsp90 association with client kinases, eliminates measurable levels of the Ryk ICD protein and that Ryk is a catalytically inactive protein kinase (14), we conclude that Cdc37-Hsp90 is required for Ryk ICD stabilization rather than for activation of a kinase.
Cdc37 interacts with kinase catalytic domains through a glycine-rich loop (GXGXXG) in the N terminus (37). This loop is highly conserved among protein kinases or RTKs. However, the QXGXXG sequence in Ryk subdomain I contains a glutamine substitution of the first glycine (14). Our Ryk deletion mutants of subdomains I and II and a point mutant with alanine substituted for the remaining glycines (QXAXXA; data not shown) interacted with Cdc37, but C-terminal deletion mutants did not. Thus, the N-terminal lobe of Ryk is not necessary for Cdc37 binding. The requirement for the C-terminal lobe of kinases to bind the Cdc37-Hsp90 complex has been reported previously. V600E B-Raf, which is mutated in the activation segment of the C-terminal portion, shows increased association with Cdc37 compared with wild-type B-Raf. Treatment of cells with 17-AGG also decreased the interaction between V600E B-Raf and Cdc37 (25). Additionally, a recent study showed that the Cdc37-Hsp90 complex binds to the kinase domain of Lck via structures present in both the N-and C-terminal lobes (38).
We observed that the association of the Ryk ICD with Cdc37 decreased during neural differentiation of ESCs, consistent with decreasing levels of the Ryk ICD. Treatment with 17-AGG significantly decreased Ryk ICD protein levels during early differentiation, indicating that the Ryk ICD is bound to Cdc37 at that stage. These results show a positive correlation between Ryk ICD stability and Cdc37-Hsp90 association, although it is not known how the binding affinity of Cdc37-Hsp90 for Ryk is regulated during neural differentiation of ESCs.
In conclusion, we identified Cdc37 as a Ryk ICD-interacting protein and demonstrated that stability of the cytoplasmic Ryk ICD is regulated by Cdc37-Hsp90 activity, which then regulates translocation to the nucleus. Our findings suggest that Cdc37-Hsp90-mediated stabilization of the Ryk ICD is a crucial event in the process of Ryk-mediated signal transduction. These results also contribute to understanding a molecular mechanism whereby the receptor Ryk transduces a signal directly from the cell surface to the nucleus via a cleaved ICD during neurogenesis.
"year": 2009,
"sha1": "2e33b501360a9054821900dc685b752f9ccb546c",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/284/19/12940.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "4d0389b7b3caf2f4ae3d9f837475cc14e6feebb0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Epidemiologic analysis of central vein catheter infection in burn patients.
Background and Objectives
Currently, there are no well-defined guidelines or criteria for catheter-site care in burn patients, and there is little information about the epidemiology of central vein catheter (CVC) infection in such patients. This study aimed at addressing the epidemiological aspect of CVC infection in a sample of Iranian burn patients admitted to the largest referral burn center in Iran, Motahari Burn Center.
Materials and Methods
A total of 191 burn patients were eligible for the study. Catheter-related bloodstream infection (CRBSI) was diagnosed on the basis of suspected line infection, sepsis, or a blood culture growing bacteria that could not be attributed to another site.
Results
Of the 191 patients in this study, 45 males (23.68%) and 19 females (10%) had positive blood cultures, confirming CV line infection. Patients who were burned by gas, gasoline ignition or burning kerosene had the highest incidence of CV line infection. In contrast, patients burned by alcohol, pitch or thinner had the lowest rate of CV line infection. The incidence of CV line infection was higher in patients with a delay in presentation to the burn center (55.2%) than in those who presented without delay (22.8%). Pseudomonas aeruginosa was the most frequent colonizer in the wound cultures (52.4%), the dominant strain in the first catheter tip cultures (35%) and the dominant strain in the same-day blood samples (53.8%). The mortality rate in patients diagnosed with CRBSI was 21.9%.
Conclusion
One of the important factors related to CV line infection is delay in presentation to the burn center. The rate of CV line infection was 20.64 per 1000 catheter days.
INTRODUCTION
There have been several reports addressing CVC infection leading to high mortality, morbidity and prolonged hospitalization (1,2). Controlling and managing these infections has imposed a heavy financial burden on the government (3). In an effort to reduce the burden of these infections and improve the standard of care for patients, healthcare providers, insurers, regulators and patient advocates have attempted to reduce the incidence of CRBSI (4). These attempts have spanned multiple disciplines in the healthcare field and include such efforts as appropriate placement and removal of CVCs by healthcare professionals, monitoring of infection incidence and progress by infection control personnel, organization of infection control committees by healthcare managers, and even patients, who assist physicians in caring for their inserted catheters (5,6). The main goal of an effective preventive program is to eliminate CVC infection from all patient care areas. By providing an epidemiological analysis of CVC infection and educating health care personnel on the findings, we can move one step forward towards creating guidelines that help standardize the approach for providing care to at-risk patients and ensure the existence of a minimum standard of care. The applications of this study are of particular importance in burn treatment centers due to the high risk of mortality and morbidity associated with CVC infection in these settings (7,8). In other words, there is increased risk for multi-localized and extensive infections not only at inserted venous catheters, but also in other organs (9). Additionally, because venous catheters in burn cases are frequently inserted either at or in proximity to the wound site, CVC infections are more common in burn patients (10).
Despite the prevalence of CVC infections and their high mortality, there are no well-defined guidelines or criteria for catheter-site care or for the frequency at which catheters should be changed in burn patients. Also, little information is available on the epidemiology of CRBSI in burn patients. Thus, the present study aimed at addressing the epidemiological aspects of CVC infection in a sample of burn patients treated at Motahari Burn Center in Iran.
MATERIALS AND METHODS
This was a retrospective review of a prospectively collected database of adult and pediatric burn patients admitted to the Motahari Burn Hospital in Tehran who had a central vein catheter over a one-year period. Of the 191 patients, 23 were younger than 12 years. The data collected included demographic information, total body surface area (TBSA), percentage of the burn wound, location of catheter insertion, blood and wound cultures on the day of catheter removal, and the delay between when patients were admitted to a hospital and when they were referred to the burn center. Data also included the time that the catheter was in place. If CV line infection was suspected, the catheter was removed and replaced with another one. The reason for changing the catheter was also included in the collected data.
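A minimal sketch of how one record of the collected variables listed above could be represented is given below; the field names and types are illustrative assumptions, not the actual variables or coding used in the study database.

```python
# Illustrative sketch of one record in the kind of prospectively collected
# database described above. Field names and types are assumptions chosen for
# clarity; they are not the actual variable names used at the burn center.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CatheterRecord:
    age_years: float
    sex: str                      # "male" / "female"
    tbsa_percent: float           # total body surface area burned
    burn_cause: str               # e.g. "explosion", "fire", "hot water"
    insertion_site: str           # "femoral", "internal jugular", "subclavian"
    inserted_through_burn: bool   # catheter placed in/near burned skin
    days_in_place: int
    presentation_delayed: bool    # delay before referral to the burn center
    blood_culture_positive: Optional[bool] = None
    tip_culture_organism: Optional[str] = None
    reason_for_change: Optional[str] = None
```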
Wound cultures were performed using swab cultures. CRBSI was considered positive in patients whose condition raised suspicion for catheter infection or sepsis, or if the blood culture result was positive with no other source of infection (11).
For statistical analysis, quantitative variables were presented as mean ± standard deviation (SD) and categorical variables were summarized by frequency. Continuous variables were compared using a t test, or the Mann-Whitney U test whenever the data did not appear to have a normal distribution or the assumption of equal variances was violated across study groups. Categorical variables were compared using the chi-square test. Statistical analysis was performed using SPSS Version 16.0 (SPSS Inc., Chicago, IL). A p-value less than or equal to 0.05 was considered statistically significant.
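As a hedged illustration of these comparisons, the equivalent tests can be run with scipy instead of SPSS; the arrays and counts below are placeholder values, not data from this study.

```python
# Sketch of the statistical comparisons described above, using scipy instead of
# SPSS. The data arrays and the 2x2 counts are hypothetical placeholders.
import numpy as np
from scipy import stats

# Continuous variable (e.g. age) in infected vs. non-infected groups.
age_infected = np.array([28.0, 35.5, 41.0, 22.5, 30.0, 27.5])
age_not_infected = np.array([33.0, 38.5, 45.0, 29.0, 36.5, 31.0, 40.0])

t_stat, p_t = stats.ttest_ind(age_infected, age_not_infected, equal_var=False)
u_stat, p_u = stats.mannwhitneyu(age_infected, age_not_infected, alternative="two-sided")

# Categorical variable: delayed presentation vs. positive blood culture (2x2 table).
table = np.array([[16, 13],    # delayed: positive, negative (hypothetical counts)
                  [37, 125]])  # not delayed: positive, negative
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"t test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}, chi-square p = {p_chi2:.3f}")
```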
RESULTS
The most common type of burn was caused by an explosion (54 patients), followed by burns caused by fire (52 patients), gasoline (23 patients) and hot water (16 patients), as demonstrated in Fig. 1. A significant difference was observed in the incidence of positive culture across the different burn types. The incidence of CVC infection was higher in patients whose burns were caused by gas, gasoline ignition and burning kerosene, while the incidence was lowest in patients with burns caused by alcohol, pitch, or thinner (p = 0.008).
Delay in presentation of the patient to the burn unit resulted in a higher rate of positive culture, with positive cultures observed in 55.2% of the patients with delay and 22.8% of patients without delay (p < 0.001). No significant difference was observed in the mean age of patients with or without CVC infection (30.56 ± 16.17 years vs. 34.44 ± 15.13 years, p = 0.104). Of the 191 patients included in the study, 25 passed away (13.1%). Moreover, of the patients who had CV line infection, 21.9% passed away. The rate of CVC infection in femoral catheters was 32.93% (Table 1), and the rate of CVC infection in internal jugular and subclavian sites combined was slightly lower at 29.13% (p = 0.048).
The dominant organisms in the first catheter samples included Acinetobacter baumannii (35.9%), P. aeruginosa (35.9%) and Staphylococcus aureus. A significant association was found between a positive CVC culture result and insertion of the CVC in the burned area (p < 0.0001).
The first wound culture was done at the site nearest to the CV line insertion site, and the results showed P. aeruginosa (52.4%), A. baumannii (38.1%) and S. aureus (4.8%). The second wound culture was done with the same criteria, and the results showed P. aeruginosa (70%) and A. baumannii (30%).
There was a significant difference between TBSA and positive culture results (p = 0.001). The mean TBSA with positive culture results was 43.8%, while the mean TBSA with negative culture results was 38.7% (p = 0.001). In patients with a positive tip culture, the mean and median times from insertion to a positive culture were 17.47 days and 15.5 days, respectively. For the first CVC culture, the positive culture rate was 20.64 per 1000 catheter days; for the second CVC culture, the rate was 46 per 1000 catheter days; and for the third CVC culture, the rate was 80 per 1000 catheter days.
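The incidence densities quoted above follow from a simple ratio of positive cultures to catheter-days. The sketch below reproduces the calculation; the catheter-day denominator is an assumed value chosen only to illustrate how a rate such as 20.64 per 1000 catheter days arises, since the denominators are not reported here.

```python
# Minimal sketch of the incidence-density calculation used above:
# rate = (number of positive cultures / total catheter-days) * 1000.
# The denominator below is assumed for illustration; only the resulting rate
# (approximately 20.64 per 1000 catheter-days) is reported in the text.

def rate_per_1000_catheter_days(positive_cultures: int, catheter_days: int) -> float:
    return 1000.0 * positive_cultures / catheter_days

first_culture_rate = rate_per_1000_catheter_days(positive_cultures=64, catheter_days=3100)
print(f"First CVC culture: {first_culture_rate:.2f} per 1000 catheter-days")
```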
No significant difference was found in the mean age of the patients with positive and negative culture results from the first catheter tip (30.22 ± 16.05 years vs. 29.20 ± 15.32 years, p = 0.788). Additionally, no difference was found in the extensiveness of the burn, indicated by TBSA, between patients with and without a positive culture from the first catheter tip (43.85 ± 13.70 vs. 46.04 ± 19.54, p = 0.556). Also, no difference was found in the mean age (22.00 ± 15.07 years vs. 18.00 ± 13.08 years, p = 0.688) or mean TBSA (43.33 ± 11.90 vs. 37.67 ± 10.79, p = 0.484) with regard to a positive culture from the catheter tip at the second stage. In total, no association was found between positive vascular catheter culture and the baseline variables of gender, presence of inhalation injury, and cause of burn in both the first and second stages.
DISCUSSION
Positive CRBSI was diagnosed whenever there was a suspicion of catheter infection, sepsis, or positive blood culture results with no other source of infection. The incidence of CRBSI per catheter day in the burn intensive care unit (BICU) is much higher than its incidence in the general ICU. This indicates a need for a separate set of guidelines for changing CV lines in burn patients. This study found that the rate of CRBSI was 20.6 per 1000 catheter days, compared to 15.4 per 1000 catheter days in the study by O'Mara et al. (11).
One option for changing a catheter is to rewire it. In rewiring a catheter, the line is left in place and only the CV catheter is replaced. There are conflicting findings on whether rewiring is superior to changing the whole catheter. In the Sheridan et al. study, it was suggested that changing the CV catheter by rewiring or changing the catheter position was preferable for pediatric burn patients (12,13). However, in a different study, it was found that catheter infection rates were 25.2 per 1000 catheter days with rewiring techniques, compared to 16.6 per 1000 catheter days with changing the whole catheter (11). In contrast, Eyer et al. indicated that there is no need for routine exchange or rewiring of central line catheters and that it does not reduce patients' risk of infection (14). Rewiring techniques are not utilized frequently in our burn center; therefore, we could not collect data regarding rewiring of catheters.
In a multicenter study by Austin et al. in the United States, it was found that peripherally inserted central catheters were discontinued in 4.3% of burn patients due to central line associated bloodstream infection (15). In another study by Friedman et al., it was found that in patients with a TBSA > 60%, the incidence of CVC infection was 11.2 per 1000 catheter days, which was significantly higher than patients with a TBSA ≤ 60% (16). This study also revealed that the central venous catheters placed through burned skin instead of intact skin were 4 times more likely to be associated with CVC infection and that the most common infectious organism was Acinetobacter. This was similar to the finding of our study, as we found a significant difference between positive CVC culture results and insertion of CVC in burned areas of the skin. One possible explanation for the higher incidence of CVC infection in our burn patients, compared to general ICU patients, could be that the catheters were placed in burned areas of the skin or near it.
In a study by Tymonová et al., only 3.5% of patients had endogenous catheter colonization with positive peripheral blood culture and bacteremia, and the most frequent infecting pathogens in catheter tips were coagulase-negative staphylococci (17). King et al. found a significant relationship between the timing of central venous catheter exchange and the frequency of bacteremia in burn patients (18). They found that the rate of catheter infection was 11% on the third day and 28% on the fourth day post insertion. They also found that CRBSI occurred in 4% of the patients on the third day and 12% of the patients on the fourth day post insertion.
Comparing our results to those of previous surveys highlights 2 major points. Firstly, the rate of CVC infection appears notably higher in Motahari Burn Center than in developed countries. This indicates that the current approach is not successful in controlling CVC infections, presenting a need for a standardized set of guidelines for the care of catheter lines in burn patients. Secondly, the dominant pathogens that cause CVC infection appear to be globally similar, and the most common pathogens include Acinetobacter and S. aureus. This can be attributed to the high global resistance of these pathogens to common antibiotics.
In our study, we found significant differences in the rate of infection based on the location of the catheter at the femoral, subclavian and jugular sites, as demonstrated in Table 2, which is similar to the findings of the Greenhalgh, Deshpande and Goets articles (11,19,20). Previous studies also suggest that the risk of infection is lower with internal jugular catheters (21,22). In the Greenhalgh study, it was found that the infection rate in adults is higher than the rate in children, possibly because of catheter insertion near the burned areas of the skin (23,24) and because the adult patients had a higher TBSA and access via femoral catheters (25). In view of the positive tips on the second and third catheter cultures, changing the protocol for central vein catheters in the BICU should be seriously considered.
In this study, it was found that P. aeruginosa was the most common pathogen in the CVC cultures taken the first time, followed by A. baumannii and S. aureus. In the second CVC culture, when the site of the CVC was changed, the most common pathogen was P. aeruginosa, followed by A. baumannii and Enterococcus. In the third CVC culture, P. aeruginosa and Klebsiella were the most common pathogens. The wound culture taken from the wound nearest to the CVC insertion site revealed that the most common microorganisms were P. aeruginosa, A. baumannii, and S. aureus, which is identical to the first CVC cultures. This indicates that better wound infection control in burn patients can probably reduce the rate of positive CVC cultures. Additionally, the catheter insertion site should be as far from the burn wound as possible.
There are no current guidelines to determine when a patient's catheter should be changed, and currently the catheters are changed only if there is suspicion of an infection. However, on average, infection is seen by day 17 after catheter insertion. We propose that, to minimize the risk of infection, guidelines for catheter infection control must include a timeline specifying when catheters should be replaced.
There were some limitations in this study, and these include the following: this retrospective study examined wound cultures and the first, second and third CVC cultures in a single burn center, and a multicenter study would better account for all the relevant factors. In Motahari Burn Center, rewiring is not performed often; therefore, not enough data could be gathered about cultures from rewired CV lines, and this should be included in further studies. Considering the higher rate of CRBSI in our center, changing the protocol for insertion, maintenance, and exchange is essential, and the lines should be removed sooner.
"year": 2017,
"sha1": "a93572029ba294145e69b2abc39a14afe593ac95",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ad0f5e98d128744df62933cf8e8222791f4130ac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Increased Fitness of Rice Plants to Abiotic Stress Via Habitat Adapted Symbiosis: A Strategy for Mitigating Impacts of Climate Change
Climate change and catastrophic events have contributed to rice shortages in several regions due to decreased water availability and soil salinization. Although not adapted to salt or drought stress, two commercial rice varieties achieved tolerance to these stresses after colonization with Class 2 fungal endophytes isolated from plants growing across moisture and salinity gradients. Plant growth and development, water usage, ROS sensitivity and osmolytes were measured with and without stress under controlled conditions. The endophytes conferred salt, drought and cold tolerance to growth chamber and greenhouse grown plants. Endophytes reduced water consumption by 20–30% and increased the growth rate, reproductive yield, and biomass of greenhouse grown plants. In the absence of stress, there was no apparent cost of the endophytes to plants; however, endophyte colonization decreased from 100% at planting to 65%, whereas greenhouse plants grown under continual stress maintained 100% colonization. These findings indicate that rice plants can exhibit enhanced stress tolerance via symbiosis with Class 2 endophytes, and suggest that symbiotic technology may be useful in mitigating impacts of climate change on other crops and expanding agricultural production onto marginal lands.
Introduction
The geographic distribution pattern of plants is thought to be based on climatic and edaphic heterogeneity that occurs across complex habitats [1,2,3]. All plants express some level of phenotypic plasticity [4] enabling them to grow in diverse habitats and across environmental gradients [5,6,7,8,9]. Phenotypic plasticity is defined as the production of multiple phenotypes from a single genotype, depending on environmental conditions and is considered adaptive if it is maintained by natural selection [4,9,10].
Plant adaptation to high stress habitats likely involves a combination of phenotypic plasticity and genetic adaptation, and is thought to involve processes exclusive to the plant genome [11,12,13,14]. However, the mechanisms responsible for adaptation to high stress habitats are poorly defined. For example, all plants are known to perceive, transmit signals and respond to abiotic stresses such as drought, heat, and salinity [15,16]. Yet, few species are able to colonize high stress habitats, which typically have decreased levels of plant abundance compared to adjacent low stress habitats [17,18]. Although there are numerous reports on the genetic, molecular and physiological bases of how plants respond to stress, the nature of plant adaptation to high stress habitats remains unresolved [19,20,21]. However, most ecological studies fail to consider the fact that all plants in natural ecosystems are thought to be symbiotic with fungal endophytes, and these endophytes can have profound effects on plant stress tolerance and fitness [22,23]. For example, fungal endophytes can confer fitness benefits to plants including increased root and shoot biomass, increased yield, tolerance to abiotic stresses such as heat, salt, and drought, and tolerance to biotic stresses such as pathogens and herbivores [24,25,26,27,28,29,30,31]. One group of fungal endophytes (Class 2; [32]) confers habitat-specific stress tolerance to plants through a process defined as Habitat Adapted Symbiosis [33]. Remarkably, Class 2 endophytes are capable of colonizing and conferring habitat-specific stress tolerance to monocot and eudicot plants, which may suggest that the symbiotic communication responsible for stress tolerance predates the divergence of these lineages (est. 145-230 mya) [34,35,36].
During the last several decades, there have been major climatic events that decreased agricultural productivity of rice (one of the four major food crops) at locations around the world. For example, in 2004, an earthquake-generated tidal wave flooded Indonesia [37], and in 2008, a cyclone resulted in a tidal surge that flooded southern Burma. Both of these events resulted in inundation of productive agricultural lands with salt water that decreased or eliminated production of rice for one or more years. During the last 40 years of climate change, increased minimum air temperatures during growing seasons have resulted in a substantial decrease in rice yields in China and the Philippines and are predicted to continue [38,39]. Collectively, these events, along with an increasing world population, have contributed to shortages and increased prices of rice, exacerbating hunger and famine issues globally. Climate change has also begun to alter the phenology of currently used commercial rice varieties, making predictions about rice availability and abundance less reliable. Although it may be possible to compensate for some impacts of phenology shifts by incorporating earlier season varieties into agricultural practices [40], the adaptive capabilities of rice will ultimately determine the severity of climate change impacts on annual crop yields. However, the adaptive potential of most plants, including rice, is not well characterized. Here we report that the ability of rice to rapidly exhibit stress tolerance is dependent on associations with Class 2 fungal endophytes. The influence of three endophytes on the ability of rice to tolerate low temperatures, high salinity and desiccation was tested. In addition, the influence of the endophytes on growth, development and water usage was assessed under greenhouse and laboratory conditions. The potential for using symbiotic technologies to mitigate the impacts of climate change is discussed.
Growth Chamber/Greenhouse Studies
Fitness benefits (growth, development and yield). Both Fusarium culmorum isolate FcRed1 (SaltSym) and Curvularia protuberata isolate Cp4666D (TempSym1) (Table 1) significantly increased the growth and development of seedlings in the absence of stress compared to nonsymbiotic plants (Fig. 1 & 2). Growth responses were evident less than 24 hr post inoculation, and symbiotic seedlings averaged between 20 to 68% greater biomass than nonsymbiotic plants after 3 days of growth, depending on the symbiotic association.
Symbiotically induced increases in seedling developmental rates (Fig. 3) and young plant growth were dramatic in both roots and shoots of five-week-old plants (Fig. 4). Differences in growth and root development of S and NS plants were observed using time-lapse photography under 14 hr light cycles (Fig. 2). Rice seedlings were exposed for 48 hr to water (NS) or fungal spores (S), at which point germination occurred (t = 0, Fig. 2). Germinated seedlings were then embedded in the silica sand layer on top of the time-lapse apparatus (see Materials and Methods). NS plant shoot growth began before root growth initiated, while S plants increased root mass prior to substantial shoot growth. By the time S plants began developing root hairs (day 6), NS plants had not yet begun significant root growth.
Mycelia of endophytes grown in liquid culture (SaltSym & TempSym1) were assayed for the production of IAA and IAA-like plant growth stimulating compounds. The colorimetric assay detects indole compounds, some of which are important in promoting plant growth [41]. This assay was specific enough to detect indoles and not tryptophan. Endophytes were grown in liquid media for 21 days (Table 2). Endophytes grown in Mathur's Media with the addition of Trp produced 200-500 ppm IAA within 5 days, and levels were maintained until the last time point taken after 21 days. In the absence of Trp, or with growth in media other than Mathur's Media (such as PDA), no IAA was detected. These results suggest that the growth response in symbiotic plants may be due to endophyte production of IAA or IAA-like plant growth stimulating compounds, which can be induced in the presence of Trp, and possibly suppressed by media components (e.g. sucrose or other amino acids).
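For a colorimetric indole assay of this kind, concentrations in ppm are usually read off a linear standard curve of absorbance against known IAA standards. The sketch below assumes a Salkowski-type readout and a linear calibration; all numbers are hypothetical and are not the calibration values used in this study.

```python
# Sketch: estimate IAA concentration (ppm) from a colorimetric assay by linear
# regression against known standards. Assumes a Salkowski-type readout and a
# linear absorbance-concentration relationship; all numbers are hypothetical.
import numpy as np

# Hypothetical standard curve: known IAA standards (ppm) and their absorbances.
standards_ppm = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
standards_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

slope, intercept = np.polyfit(standards_ppm, standards_abs, 1)

def absorbance_to_ppm(absorbance: float) -> float:
    """Invert the linear standard curve to estimate IAA concentration."""
    return (absorbance - intercept) / slope

sample_abs = 0.55  # hypothetical culture-filtrate reading
print(f"Estimated IAA: {absorbance_to_ppm(sample_abs):.0f} ppm")
```

With these placeholder calibration values the estimate falls within the 200-500 ppm range reported above, which is the only point the example is meant to illustrate.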
Additional analysis revealed that IAA was not detected in five-day-old S and NS rice seedlings irrespective of the significant growth response in S plants (Table 2). The absence of detectable levels of IAA in symbiotic rice seedlings may reflect levels of IAA below the detectable range of the assay, the physiological status of the plants at the time of analysis, or a lack of sufficient Trp for fungal biosynthesis. Regardless, these results suggest that the potential role of endophytes and IAA production in planta needs to be addressed in greater detail.
Growth and biomass differences under greenhouse conditions translated into yield differences, with NS plants producing less than 10% of the seed yield of SaltSym plants in the presence of salt stress (Fig. 4B). A seedling development assay indicated that germinated seeds of S and NS plants had equivalent levels of viability and development at temperatures >15°C (≥95%; Figure 3B).
Stress Tolerance (cold, salt and drought)
The ability of habitat-adapted endophytes to confer significant levels of stress tolerance to young rice plants was assessed in double-decker Magenta boxes [33] under laboratory conditions. As anticipated, SaltSym conferred significant levels of salt tolerance to rice, allowing the plants to grow at 300 mM NaCl (Fig. 3). Rice plants were grown for five weeks without stress before continual exposure to NaCl for three weeks.
To determine the impact of salt stress on mature plants (plants taken to seed set), SaltSym and NS plants were grown under greenhouse conditions without salt stress for two months, and plants were then gradually exposed to increasing levels of salt from 100 mM to 300 mM NaCl prior to seed production. Seed production was measured after 5 months of growth, of which approximately 6 weeks involved exposure to the highest salt concentration of 300 mM NaCl. During the course of these studies, rice plants continued to grow even when exposed to high levels of salt, resulting in little difference in root biomass within a treatment, but significant differences in shoot biomass between plants exposed and not exposed to salt stress (Fig. 4). Analysis of roots and shoots showed an increase in the root biomass of S plants in the presence and absence of salt stress, and in the shoot biomass of S plants in the presence and absence of stress, compared to NS plants. These results suggest that, through symbiosis, endophytes may play a dual role in growth enhancement and salt stress tolerance. When yields were assessed, a significant difference in seed production was observed in S plants in the presence and absence of stress when compared to NS plants (Fig. 4). Analysis of % biomass change in NS+ samples showed a decrease in root (30.84%) and shoot (26.13%) tissues, respectively, when exposed to salt stress compared to the same plant treatment (root and shoot) in the absence (NS-) of stress. Symbiotic (SaltSym+) plants, however, showed a statistically lower % biomass change in root (15.04%) and shoot (19.54%) tissues, respectively, in salt-stressed plants compared to the same plant (root and shoot) treatment in the absence (SaltSym-) of stress. Although seed production by S plants was decreased by salt stress, the levels were not significantly different from those of NS plants grown without stress.

Figure 1. Effects of symbiosis on growth response of young seedlings in the absence of stress. The number of plants/treatment is indicated (N = XX) below. Statistical analysis was performed using Duncan's multiple-range test. Values with the same letters are not significantly different. A) Representative photo of three-day-old rice seedlings (N = 30 total) that were nonsymbiotic (NS) or symbiotic (S) with SaltSym. B) Growth of rice seedlings was measured by assessing the dry biomass (mg) of (N = 10/rep) three-day-old rice seedlings that were NS or S with SaltSym or TempSym1. Each assay was repeated three times. Statistical analysis indicated that S plants were statistically larger (SD ≤ 0.06; P ≤ 0.0002) than NS plants. doi:10.1371/journal.pone.0014823.g001

Figure 2. Seedlings were photographed every 20 min over a ten-day period. Sterilized, imbibed rice seeds were inoculated with fungal spore solutions (S) or mock-inoculated (NS) for 48 hours on water agar plates until seed germination occurred. Three S and NS seedlings were randomly chosen and embedded into the silica sand layer on top of the vertical plant growth apparatus, and the time-lapse photography initiated (identified as time point t = 0). Zero, three, six and ten day time points are represented by t = 0, t = d3, t = d6, and t = d10, respectively. doi:10.1371/journal.pone.0014823.g002
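The percent biomass changes reported above reduce to a simple relative difference between stressed and unstressed means. The sketch below shows the calculation; the gram values are hypothetical placeholders, and only the resulting percentages resemble those reported in the text.

```python
# Sketch of the percent-biomass-change comparison described above:
# change = 100 * (biomass_no_stress - biomass_stress) / biomass_no_stress.
# The gram values are hypothetical; only the percentages are reported in the text.

def percent_biomass_change(no_stress: float, stressed: float) -> float:
    return 100.0 * (no_stress - stressed) / no_stress

# Hypothetical mean root dry biomass (g) per treatment.
ns_root_no_stress, ns_root_salt = 1.20, 0.83      # nonsymbiotic
sym_root_no_stress, sym_root_salt = 1.50, 1.27    # SaltSym-colonized

print(f"NS root change:      {percent_biomass_change(ns_root_no_stress, ns_root_salt):.2f}%")
print(f"SaltSym root change: {percent_biomass_change(sym_root_no_stress, sym_root_salt):.2f}%")
```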
TempSym1 was isolated from plants growing in geothermal soils that achieve temperatures of up to 60uC during dry summers and around 20uC in the winter [29], and TempSym2 isolated from the same plant species in cooler geothermal soils (,30uC in summer & 5uC in winter). Although TempSym1 is adapted to high soil temperatures, the canopy of the plants in which both endophytes were originally isolated, experience below freezing temperatures in the winter and continue to produce vegetative tissues. Therefore, TempSym1 & 2 (both C. protuberata isolates) were assessed for the ability to confer cold tolerance to rice seedling development (Fig. 3). This was analyzed by assessing the seedling development of inoculated and non-inoculated treatments incubated at temperatures below 20uC. To ideally make observations concerning the impacts of cold stress on seedling shoot and root development, rice seeds were surface sterilized and imbibed (see Materials and Methods: plant colonization section), seeds carefully selected that showed a small white tissue protuberance indicating a high potential success rate of germination (unpublished observations made by authors Redman & Kim). Samples were then inoculated or non-inoculated and placed at 5-20uC. The % seedling development (appearance of roots and shoots) was assessed after ten days. Symbiotic seedlings had development frequencies of greater than 90% at all of the temperatures tested, while uninoculated seeds achieved greater than 90% only at 20uC, and a substantial decrease of 70% and 45%, at 10u and 5uC, respectively.
All three endophytes (TempSym1 and 2, and SaltSym) were tested for the degree of drought tolerance they could confer in the absence of any other stress. This was done by determining time-to-wilt after the cessation of watering (Fig. 3). NS plants wilted after 3 days while TempSym1-inoculated plants did not wilt for 9 days. The other two endophytes in this study (TempSym2 and SaltSym) also conferred drought tolerance (7 and 9 day delay in wilt, respectively; not shown) compared to NS plants. Previous studies indicated that these endophytes decrease plant water consumption, which may explain symbiotically conferred drought tolerance [33]. A comparison of water consumption in S and NS rice plants revealed significant decreases (20-30%) by S plants, with TempSym1-inoculated plants using the least amount of water (Fig. 3).
Figure 3. Plant health was based on comparison to NS controls and rated from 1 to 5 (1 = dead, 2 = severely wilted and chlorotic, 3 = wilted ± chlorosis, 4 = slightly wilted, 5 = healthy w/o lesions or wilting). All assays are described from left to right and images are representative of all plants/treatment. A) Rice plants (N = 60): no-stress controls (labeled "C") representative of both S and NS plants (100%, 5), S with SaltSym (100%, 5), or NS (0%, 1) exposed to 300 mM NaCl for 21 days. While all plants bent over with age, unstressed controls and salt-exposed S plants remained hydrated while the NS plants wilted. B) The % rice seedling development at 5–20°C of NS (black bars) and TempSym1-colonized (grey bars) treatments was assessed. After the initial sterilization and imbibing process (see Materials and Methods), seeds (N = 20) were placed on agar water media plates, ± inoculated with the fungal symbiont, and % seedling development assessed after ten days. Statistical analysis (Duncan's multiple-range test; SE ≤ 4.48; P ≤ 0.001) indicated the % seedling development was significantly higher at 5°C and 10°C in S treatments but not at 15°C and 20°C. C) Representative photo of five-week-old rice plants (N = 140-210) that are NS or S with TempSym1 (TS1). From left to right: control plants (C) were kept hydrated and healthy (100%, 5) for both treatments. The remaining plants were not watered for 15, 12, 9, 7, 5, and 3 days. Post drought stress, plants were re-hydrated for 2 days and viability assessed. After 3 days, NS plants began to show the effects of drought (70%, 2; 25%, 1) and after >5 days, NS plants succumbed to the stress (>90%, 1). In contrast, S plants did not show the effects of drought until after 12 days (70%, 1; 30%, 2). Moreover, S plants remained green and robust in the absence of water for 9 days (≥90%, 5) as visualized by a generally thicker, green canopy. D) Fluid usage of 5-week-old NS or S (SaltSym, TempSym1, and TempSym2) rice plants (N = 60) as ml of fluid used over a ten-day period. Statistical analysis (Duncan's multiple-range test) indicated significant differences in fluid usage (SD ≤ 7.51; P < 0.05) with all three symbionts using less fluid compared to NS plants. doi:10.1371/journal.pone.0014823.g003
Physiological responses (osmolytes, ROS)
Two common plant responses to abiotic stresses such as heat, salt and drought involve regulating the levels of osmolytes and reactive oxygen species (ROS) [12]. Therefore, we analyzed plant osmolyte concentrations and sensitivity to ROS before and after exposure to salt and drought stress (Fig. 5). In the absence of salt stress, S plants had higher levels of osmolytes in shoots compared to NS plants but equivalent levels in the roots. In the presence of salt, all of the plants increased osmolyte levels in both shoots and roots, regardless of the treatment, with SaltSym expressing the highest osmolyte levels.
Excised leaf discs from plants grown in the absence of stress or exposed to salt and drought stress were analyzed for ROS sensitivity. One way to mimic endogenous production and assess sensitivity to ROS is to expose photosynthetic tissue to the herbicide paraquat, which is reduced by electron transfer from plant photosystem I and oxidized by molecular oxygen, resulting in the generation of superoxide ions and subsequent photobleaching [42]. Leaf discs from plants that were not exposed to stress remained green, indicating that ROS was not produced by exposure to paraquat (Fig. 5). When exposed to salt or drought stress, in contrast, leaf discs from NS plants photobleached, whereas most discs from S plants remained green (Fig. 5).
Figure 4. NS and S (SaltSym) plants were ± exposed to a gradual increase in NaCl concentrations (0–300 mM) over an additional three-month period. Non-stressed NS and S (SaltSym) plants were hydrated with 1x Hoagland's solution supplemented with 5 mM CaCl2 throughout the length of the study (5 months total). Stress treatments began after 2 months and plants were exposed to 100 mM NaCl for 3 weeks, then to 200 mM NaCl for an additional 3 weeks, and then increased to and maintained at 300 mM NaCl until the completion of the study (5 months total).
Discussion
Rice plants were adapted to cold, salt and drought stress simply by colonization with Class 2 fungal endophytes. Salt and temperature stress tolerance are habitat-adapted traits of the endophytes evaluated in this study [33]. SaltSym, derived from coastal plants (Leymus mollis) exposed to high salt stress, conferred salt tolerance but not temperature tolerance. TempSym1 & 2, isolated from Dichanthelium lanuginosum thriving in geothermal soils, conferred temperature tolerance but not salt tolerance. Since TempSym1 & 2 originated in geothermal soils differing in summer maximum and winter minimum temperatures, we anticipated that TempSym2, and not TempSym1, could confer cold tolerance. The fact that both endophytes conferred cold tolerance may reflect the cold winter temperatures plants experience above ground rather than in the soil.
Initial cell signaling and biochemical pathways involved in both hot and cold temperature stress responses begin with the same root physiological processes and later branch off into unique pathways [12]. It is tempting to speculate that TempSym1 & 2 regulate early events (as our ROS studies indicate) in the plant temperature response such that downstream responses do not occur resulting in tolerance to heat and cold.
To make observations concerning the impacts of cold stress on seedling shoot and root development, rice seeds showing a small white tissue protuberance (indicating a high potential success rate of germination) were used for the cold stress assays. Cold stress tolerance was conferred to germinated seeds under laboratory conditions by TempSym1 less than 24 hr post inoculation of seeds, resulting in greater than 90% seedling development at temperatures between 5–20°C (Fig. 3). It is possible that TempSym1 either allows plants to increase metabolic rates at low temperatures or increases metabolic efficiency to overcome the effects of low temperature. Symbiotically induced metabolic efficiency was also observed in laboratory studies showing decreased water consumption and increased biomass in S plants.
Salt and drought stress were tested under greenhouse and growth chamber conditions. SaltSym conferred salt tolerance, allowing plants to grow when continually exposed to a solution of 300 mM NaCl (Fig. 3). More importantly, a gradual increase in salt exposure of mature plants effectively eliminated seed production in NS plants. Although mature S plants had reduced seed production under salt stress compared to non-stressed plants, salt-stressed S plants produced similar amounts of seed as NS plants grown in the absence of salt stress (Fig. 4). The number of stems of S plants was statistically higher (data not shown; P ≤ 0.05) than NS plants, which would prove to be beneficial to overall plant health and yields under field conditions. The levels of salt used in these studies are similar to those occurring in agricultural lands after tsunamis or tidal surges [37]. Therefore, we anticipate that using SaltSym may allow growers to mitigate the impacts of salt inundation.
A common physiological response to salt stress is an increase in the production of osmolytes [12]. In the absence of salt stress, osmolyte levels were similar in the roots of S and NS plants but significantly higher in the shoots of S plants compared to NS plants (Fig. 5). Upon exposure to salt stress both S and NS plants significantly increased osmolyte levels, but their responses differed, with NS plants increasing by approximately 50% and symbiotic plants increasing by approximately 30% compared to non-stressed plants. This was unexpected, as previous results, albeit with a different stress, indicated that in the presence of heat stress, osmolytes increased significantly in nonsymbiotic plants but either increased slightly or not at all in S plants [33]. These results suggest that osmolyte production in symbiotic plants varies with the type of stress, endophyte genotype, and/or plant genotype. Regardless, since all plant treatments in this study responded in a similar manner (an overall increase in the presence of stress regardless of the treatment), it appears that osmolyte production alone is not responsible for symbiotic adaptation of rice plants to salt stress.
Figure 5. Effect of symbiosis on plant osmolyte concentrations and paraquat-induced photobleaching (ROS) under laboratory conditions. A) Five-week-old rice plants (N = 30) that were NS or colonized with SaltSym or TempSym1 were exposed for ten days in the absence (-) and presence (+) of salt stress (300 mM NaCl), at which point the effects of stress began to show in NS plant treatments (≥70% wilted ± chlorosis). SaltSym imparts salt tolerance and TempSym1 does not. Osmolyte concentrations (milliosmoles per kg wet weight) of roots and shoots were assessed and statistical analysis (Duncan's multiple-range test; SE ≤ 9.98 and ≤ 23.73 for root and shoot, respectively; P < 0.0001 for root and shoot) indicated significantly higher levels in the shoots of S plants compared to NS plants in the absence of salt stress, and no statistical differences between treatments in the presence of salt stress. No significant differences were observed in roots in the absence of salt stress. In the presence of stress, TempSym1+ showed significantly lower levels of osmolytes than SaltSym+ and NS+ treatments. Values with the same letters are not significantly different. B) NS and S (SaltSym and TempSym1 & 2) plants exposed to salt (300 mM NaCl, 10 days) and drought stress (3 days) were tested for paraquat-induced photobleaching (ROS activity). Time points were chosen when symptoms began to appear (wilting and chlorosis) in NS stressed plants. Leaf disks (N = 9) from 9 independent plants were used for ROS assays. Leaf disks were sampled from leaf tissues of similar size, developmental age, and location for optimal side-by-side comparisons. Values indicate the number of leaf disks out of a total of nine that bleached white after exposure to paraquat, indicating ROS generation. Statistical analysis (Duncan's multiple-range test) indicated that in the absence of stress, little to no (0-11%) photobleaching occurred in all of the treatments. In contrast, significant differences occurred with 100% of the NS plant disks for both salt and drought stress bleaching white compared to only 11-22% of the S plant disks (P < 0.0001). ND = not determined. doi:10.1371/journal.pone.0014823.g005
All three endophytes conferred drought tolerance to rice plants, delaying wilt 2-3 times beyond that of NS plants (Fig. 3). Although the mechanisms of endophyte-conferred drought tolerance are not known, delayed wilt time correlated with a reduction of water usage (20-33%). Production of ROS is correlated with the early events in the plant's stress response system. Our studies indicated that drought and salt tolerance in S plants correlated with decreased ROS activity (Fig. 5). All of the NS leaf tissues photobleached in the presence of paraquat while only 0-22% of the S plants showed any photobleaching. Increases in ROS are common to all stresses as a result of stress-induced metabolic imbalances [42,43]. The data suggest that in the presence of stress, rice plants either remain metabolically balanced or over-express ROS-scavenging antioxidation systems. Nevertheless, decreased ROS activity in S plants correlates strongly with stress tolerance and may play a critical role in the process.
The endophytes increased the potential fitness of rice plants by enhancing growth, development, biomass and yield in the presence and absence of stress, as observed under laboratory and greenhouse conditions. The influence of the endophytes on plant growth and development was significant. Assessing percent biomass changes in the treatments revealed that there was a 6.59% decrease in shoot biomass in NS compared to SaltSym plants exposed to stress. In roots, the effects were more dramatic, with a 15.8% root biomass decrease in NS plants compared to SaltSym plants exposed to stress. These results suggest that SaltSym plants are better able to deal with the negative effects of salt stress than their NS counterparts. Although the basis of endophyte-induced growth promotion is not known, the fungal endophytes are able to produce significant amounts (≤500 ppm) of IAA, a plant growth hormone, when grown in culture (Table 2). Remarkably, enhanced growth was observed within 24 hr of colonization, and observations indicated endophytes influenced the allocation of resources into roots and shoots (Fig. 2). Time-lapse imaging revealed that nonsymbiotic plants preferentially allocated resources into shoots prior to substantial root growth while S plants increased root mass prior to shoot growth. This is in contrast to plants grown under continual light: NS plants equally distributed resources into roots and shoots while the shoots of S plants had limited growth until root hairs were developed [44]. The difference in resource allocation between plants grown under continual light and a 14 hr light cycle indicates that under light conditions (12-14 hr) occurring during crop production, the impacts of symbiosis are much greater than under continual light. It is tempting to speculate that since symbiotic plants use less water, produce greater biomass, and have higher yields, the endophytes may allow rice plants to achieve greater metabolic efficiency.
Taxonomically, L. mollis, D. lanuginosum and rice are in the same family (Poaceae) but are in different subfamilies (Pooideae, Panicoideae and Ehrhartoideae, respectively). Previous studies indicated that SaltSym and TempSym1 could also colonize and confer habitat-specific stress tolerances to the eudicot tomato [27,33], suggesting that the symbiotic communication responsible for stress tolerance is conserved among plant lineages. The ability of endophytes to colonize and confer stress tolerance, increased yields and biomass, and disease resistance [28] to genetically unrelated plant species suggests that they may be useful in adapting plants to drought, salt and temperature stresses that are predicted to worsen in future years due to climate change.
Assessing endophyte colonization from plant tissues
At the beginning and end of each experiment, the efficiency of endophyte colonization of inoculated plants and the absence of endophytes in mock-inoculated controls was assessed as follows: a subset of at least 10% of laboratory and greenhouse plants was assessed for colonization. Plants were washed until soil debris was removed, placed into plastic zip-loc baggies and surface sterilized as previously described [28,45]. Using aseptic technique, plants were cut into sections representing the roots and stem sections, imprinted [46], plated on fungal growth media (see below), and incubated 5-7 days at 22°C with a 12 hr light cycle (cool fluorescent lights) to allow for the emergence of fungi. Upon emergence, fungal endophytes were identified using microbiological and molecular techniques as previously described [33]. The effectiveness of surface sterilization was verified using the imprint technique [46].
Fungal cultures. Fusarium culmorum and Curvularia protuberata species (Table 1) were cultured on 0.1X potato dextrose agar (PDA) medium supplemented with 50-100 µg/ml of ampicillin, tetracycline, and streptomycin, and fungal cultures were grown at 22°C with a 12 hr light regime. After 5-14 days of growth, conidia were harvested from plates by adding 10 ml of sterile water and gently scraping off spores with a sterile glass slide. The final volume of spores was adjusted to 100 ml with sterile water, filtered through four layers of sterile cotton cheesecloth gauze, and the spore concentration adjusted to 10³-10⁴ spores/ml. Rice Varieties. Oryza sativa, var. M-206 (subspecies Japonica) and Dongjin (subspecies Indica) were collectively used in greenhouse and laboratory studies. M-206 is a variety predominantly grown in Northern California and Dongjin in South Korea.
Plant colonization. For laboratory and greenhouse studies, seeds were surface sterilized in 2.5% (v/v) sodium hypochlorite for 24-48 hr, rinsed with 5-10 volumes of sterile distilled water, and imbibed in 1-2 volumes of water for 8-12 hr. Seeds were germinated on 1% agar water medium, maintained at 26-30°C and exposed to a 12 hr fluorescent light regime. To ensure that our studies began only with nonsymbiotic plants, seedlings that showed no outgrowth of fungi into the surrounding media were chosen and transplanted. Any seedlings showing outgrowth of fungi were discarded. Endophyte-free plants (up to 20 plants/magenta box depending upon the study) were planted into sterile double-decker magenta boxes (modified magenta boxes that hold soil or sand in an upper chamber and fluid in a lower chamber that is wicked up through a cotton rope [33]) containing equivalent amounts (380±5 g) of sterile sand, or Sunshine Mix #4 [Steuber Distributing Co., WA, USA (40±0.5 g)]. The lower chamber was filled with 200 ml of sterile water or 1x Hoagland's solution supplemented with 5 mM CaCl2. After 1 week, plants were either mock-inoculated with water (nonsymbiotic) or inoculated with fungal endophytes by pipetting 10-100 µl of spores (10³-10⁴/ml) at the base of the crowns or stems. Plants were grown under a 12 hr light regime at 26-30°C for 3-5 weeks for laboratory studies, and 2 months for greenhouse studies, prior to imposing stress.
Growth response and development. Symbiotically induced growth response of root and shoot development was visualized in seedlings through time-lapse photography. Nonsymbiotic (NS) and symbiotic (S) seeds were generated by placing surface-sterilized seeds on agar plates (containing 1x Hoagland's solution supplemented with 5 mM CaCl2) and flooding the plates with water (mock-inoculated) or fungal spores (10²-10³/ml) for 48 hr. Three germinated seedlings were chosen at random from a total of N = 100 seeds. The representative seedlings were placed onto a vertical plant growth apparatus for photographic monitoring. The apparatus was comprised of two 3 mm thick glass plates (30 cm × 30 cm), with a divider (high-pressure tubing) placed down the center, and the whole apparatus sealed around the edges using Tygon tubing and clamps to generate two separated leak-free compartments. One hundred fifty ml of 1x Hoagland's media (supplemented with 5 mM CaCl2 and 1% agarose) was poured into each compartment and allowed to solidify. A thin layer (2-3 mm) of silica sand was placed on top of the solid matrix. All components and media used were either surface sterilized in 70% EtOH or autoclaved prior to assembly. Three germinated seedlings were then embedded in the sand on each side of the compartment. Seedlings were maintained at 26°C in a 14 hour light/dark regime using daylight-balanced fluorescent studio lights. Photos were taken every 20 min using a Canon PowerShot G5 camera and a Pclix infrared controller (Toronto, Canada).
Salt - plants were exposed to 100-300 mM NaCl in 1x Hoagland's solution supplemented with 5 mM CaCl2 (referred to as 100 mM, 200 mM and 300 mM NaCl solution) for 10-21 days for laboratory studies by filling the lower chamber of the double-decker magenta boxes with 200 ml of the salt solutions. For greenhouse studies, mature plants were exposed to 100 mM-300 mM NaCl for up to 3 months (see below). After plants were showing symptoms (i.e., nonsymbiotic plants dead or severely wilted), they were re-hydrated in sterile water devoid of NaCl for 48 hr, and plant health was assessed and photographed. All laboratory assays were repeated three times.
Drought - watering was terminated for 3-15 days by decanting off the fluid in the lower chamber of the double-decker magenta box and letting the plant soils dry out over time. A hydrometer (Stevens Vitel Inc.) was used to ensure that soil moisture levels were equivalent between treatments when watering was terminated. After plants showed symptoms (i.e., NS plants dead or severely wilted), they were re-hydrated in sterile water for 48 hr, and plant health was assessed and photographed. All assays were repeated three times.
Plant water usage and biomass. Water consumption was measured in double-decker magenta boxes. Initially, 200 ml of 1x Hoagland's solution supplemented with 5 mM CaCl 2 were placed in the lower chamber. Fluid remaining in the lower chamber after 10 days of plant growth was measured, and water usage calculated as ml consumed/10 days. All assays were repeated three times.
Cold Stress. The effect of cold stress was assessed using surface-sterilized and imbibed seeds (see plant colonization above for details) that were either mock (water) or fungal spore (10²-10³/ml) inoculated by immersion in solutions for 48 hr with gentle agitation. Twenty NS and S seedlings (seeds exhibiting a small white tissue radicle, indicative of germination) were placed on solid water agar (1%) plates and incubated at 5°C, 10°C, 15°C, and 20°C, and the % seedling development assessed after 10 days. All assays were repeated three times.
Salt stress versus plant biomass and yields. The effect of symbiosis on the growth and yield of mature rice plants exposed to prolonged salt stress (3 months) was tested with NS and S (the endophyte conferring salt tolerance, referred to as SaltSym) plants under greenhouse conditions. Plants were generated using standard protocols (as described above). Two-month-old plants were either maintained as non-stressed control plants and watered with 1x Hoagland's solution supplemented with 5 mM CaCl2, or salt stressed by exposure to 100 mM-300 mM NaCl in 1x Hoagland's solution supplemented with 5 mM CaCl2. Stressed plants were initially exposed to 100 mM NaCl solution for 3 weeks, exposed to 200 mM NaCl solution for an additional 3 weeks, and then exposed to 300 mM NaCl solution until the termination of the studies (approximately 6 weeks, for a total of 5 months for the study). Upon termination of the studies, the spikes were removed and weights determined (g). Prior to plant biomass assessment (wet weight), roots and shoots were gently washed to remove dirt and debris. Roots and stems were then blotted dry, cut into root and shoot sections, and weights determined (g). Representative photos were taken of all treatments.
Colony Forming Units (CFU). NS and S plants were surface sterilized (described above) and 5 plants (a total of 0.5 g) pooled to obtain equal amounts of roots and lower stems. Plant tissues were homogenized (Tekmar tissue homogenizer) in 10 ml of STC osmotic buffer (1 M sorbitol, 10 mM TRIS-HCl, 50 mM CaCl2, pH 7.5) on ice, and 100 µl was plated onto 0.1X PDA fungal growth medium (see above). After 5-7 days at 25°C, CFU were assessed. All assays were repeated three times.
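As a rough illustration of the arithmetic implied by this protocol (0.5 g of pooled tissue homogenized in 10 ml of buffer, with 100 µl plated), the short R sketch below converts a plate count into CFU per gram of tissue; the colony count is an invented example, not a value reported in this study.
```r
# Hypothetical conversion of a plate count to CFU per gram of tissue,
# using the volumes and tissue mass described in the CFU protocol above.
colonies      <- 42    # colonies counted on one plate (invented number)
plated_ml     <- 0.1   # 100 ul plated
homogenate_ml <- 10    # total homogenate volume (ml)
tissue_g      <- 0.5   # pooled tissue mass (g)

cfu_per_g <- colonies / plated_ml * homogenate_ml / tissue_g
cfu_per_g  # 42 colonies corresponds to 8,400 CFU per g in this example
```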
Plant osmolyte concentrations. NS and S plants exposed to ± salt stress were analyzed for osmolyte concentrations. Equivalent amounts of root and lower stem tissues (100 mg total) from 3-5 plants/condition were ground in 500 µl water with 3 mg sterile sand, boiled for 30 min, cooled to 25°C, centrifuged for 5 min at 6,000 rpm, and osmolytes measured with a Micro Osmometer 3300 (Advanced Instruments). All assays were repeated three times.
Reactive Oxygen Species (ROS). NS and S plants were exposed to ± salt (300 mM NaCl solution) and drought stress for 3-10 days, and leaf tissue samples were taken just prior to or when slight to moderate stress-induced symptoms were observed. Using a cork borer, leaf discs (2 mm) were obtained from each of 3 replicate plants from different magenta boxes and placed on a solution of 1 µM of the herbicide paraquat (N,N′-dimethyl-4,4′-bipyridinium dichloride, Syngenta, Greensboro, NC) and incubated at 22°C under fluorescent lights. After 48 hr exposure to paraquat, leaf discs were photographed to document chlorophyll oxidation visualized by tissue bleaching. All assays were repeated three times.
IAA Assays. Fusarium sp. and Curvularia sp. were cultured on 0.02-0.1X potato dextrose agar (PDA) medium and 0.2-1X modified Mathur's media [MS [47]]. Both media were supplemented with 50-100 µg/ml of ampicillin, tetracycline, and streptomycin. Indole-3-acetic acid (IAA) production by mycelia grown in liquid media (0.1X PDA or 1X MS) was assessed as follows: 3-5 plugs of fungal mycelia were inoculated into liquid media and grown for 2 days at 22°C with agitation (220 rpm). After 2 days, mycelia were blended (Tekmar) and grown an additional 1-2 days until a dense suspension of mycelia was achieved. Mycelia were filtered through 4 layers of sterile gauze and washed with 10 volumes of sterile water. Ten percent mycelia (g/100 ml) was inoculated into 0.2X MS liquid media ± 0.1% tryptophan (Trp) and grown with agitation (220 rpm) for up to 3 weeks. One ml of supernatant was collected at various time points (24 hr-21 days), the supernatant passed through spin columns (Millipore centrifugal filter units) to remove mycelia and pigment, and IAA assessed using the Fe-HClO4 colorimetric assay (OD = 530 nm) protocol [41], with levels compared to an IAA (Sigma) standard curve of 0-1000 ppm. IAA levels in NS and S [F. culmorum isolate FcRed1 (SaltSym) and C. protuberata isolate Cp4666D (TempSym1)] 5-day-old rice seedlings were assessed by grinding 10 plants (seeds removed) in liquid N2, re-suspending the plant debris in an equal volume of sterile water, vortexing for 1 min, incubating for 30 min at 22°C, and passing the supernatant through spin columns to assess IAA as described above. IAA standards were passed through spin columns with no hindrance. All assays were conducted three times.
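To illustrate how a colorimetric reading can be converted to an IAA concentration against the 0-1000 ppm standard curve described above, the sketch below fits a linear curve and interpolates one sample reading in R; all OD530 values are invented for demonstration and are not data from this study.
```r
# Hypothetical standard-curve interpolation for the Fe-HClO4 (OD530) IAA assay.
std_ppm <- c(0, 125, 250, 500, 1000)        # IAA standards (ppm)
std_od  <- c(0.00, 0.11, 0.22, 0.45, 0.90)  # invented OD530 readings for the standards

curve     <- lm(std_od ~ std_ppm)           # linear standard curve
sample_od <- 0.37                           # invented reading for a culture supernatant
iaa_ppm   <- (sample_od - coef(curve)[1]) / coef(curve)[2]
iaa_ppm                                     # estimated IAA concentration (ppm)
```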
Statistical analysis. P values were determined by Duncan's multiple-range test and data analyzed using SAS [48]. | 2016-05-12T22:15:10.714Z | 2011-07-05T00:00:00.000 | {
"year": 2011,
"sha1": "919d317ecdece6118adc1210990280682c1a2d8f",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0014823&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "919d317ecdece6118adc1210990280682c1a2d8f",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
264577054 | pes2o/s2orc | v3-fos-license | How Subtalar Kinematics Affects Knee Laxity in Soccer Players After Anterior Cruciate Ligament Injury?
Purpose The goal of the current study was to ascertain whether there is an association between foot pronation and anterior cruciate ligament (ACL) injury in a group of elite professional soccer players. Methods Two groups of soccer players were studied, all of whom played in the Greek Super League. The ACL group included players who had suffered an ACL injury in the last 2 years. The non-ACL group was composed of players who had never suffered an ACL injury. We used a 3D baropodometric laser scanner to measure pronation or overpronation (navicular drop phenomenon) of the subtalar joint and how this affects the subtalar joint while standing. We assessed ACL laxity using the Genourob Rotab. Results ACL-injured patients, regardless of the mechanism of injury, exhibited greater navicular drop values than a randomly selected group of subjects with no history of ACL injury. Conclusion Greater knee joint laxity and subtalar pronation may be associated with an increased risk of ACL injury. Pronation of the foot appears to be a risk factor for ACL injury. These findings should be integrated into future studies to better define how neuromuscular control related to lower extremity biomechanics is associated with ACL injury.
Introduction
Non-contact anterior cruciate ligament (ACL) injuries are common [1], with the highest incidence occurring in individuals aged 15 to 25 who participate in pivoting sports [2]. With an estimated cost of nearly a billion dollars per year [3], identifying risk factors and developing risk reduction strategies is both a scientific and public health priority [4].
The National Collegiate Athletic Association Injury Surveillance System recorded in 2004 that the highest ACL injury rate was found in American football for men [5,6]. Similar data were observed in gymnastics for women, with 33 injuries per athlete exposure. Current epidemiology indicates that ACL injuries constitute a larger proportion of total injuries for women than for men (3.1% vs. 1.9%), reaching as high as 4.9% of injuries in specific sports such as women's basketball and gymnastics [7]. The risk factors for non-contact ACL injuries fall into four distinct categories: anatomic, biomechanical, environmental, and hormonal [8]. This investigation aims to determine whether overpronation of the foot at the subtalar joint can affect ACL laxity and whether overpronation is associated with ACL injury.
University of Athens (IRB-N: 17654B).
All 36 soccer players who gave informed consent to participate in the present investigation were elite professional athletes aged 18 to 29 playing in the Greek Super League (Figure 1). These soccer players were divided into two groups of 18 male soccer players each, according to whether they had suffered an ACL injury in the last two years (ACL group) or had never suffered an ACL injury (non-ACL group). We used a 3D baropodometer laser scanner [9,10] to measure pronation of the foot (pronation index) (Foot Levelers Inc. 3D Body View Scanner, Virginia). We measured ACL laxity (ACL laximetry) with the Genourob Rotab (Genourob LLC, Paris, France) device. During their visit to the biomechanics laboratory, each subject underwent measurements on both lower limbs. This included static assessment of both feet on a 3D body view scanner, with the eyes open and weight bearing on both feet. It was measured and ensured that the body mass was equally distributed between the two feet. The 3D body view scanner has an accuracy of 300 microns. In each athlete, both feet were examined. The athlete stood still on the 3D scanner and the laser beam scanned the feet one at a time. Additionally, comparative knee laxity was measured with the knee flexed to 30 degrees. Starting with one knee on a random basis, each knee underwent automated dynamic laximetry for anterior tibial translation and tibial rotation with 134 N of force [11].
Inclusion criteria
The ACL group consisted of 18 male soccer players who had suffered a magnetic resonance imaging (MRI) confirmed ACL tear in the last two years before the study, had fully recovered following ACL reconstruction, and were playing soccer. A patellar tendon bone graft had been used in 10 of them, and a hamstring graft had been used in the remaining eight players. The non-ACL group consisted of 18 male soccer players who had never suffered an ACL or other knee injury.
Exclusion criteria
Players who had suffered an ACL injury less than two years previously (for the ACL group), or any injury to the knee and/or ankle over the last two years (for the non-ACL group), were excluded.
Data analysis
Continuous variables were presented using summary statistics (number of patients with available observations (n_pt), mean, standard deviation, median, 25th and 75th percentiles, minimum and maximum values). The distribution of continuous variables was also presented graphically using boxplots. Differences in the values of continuous variables between the ACL group and non-ACL group were analyzed using the Mann-Whitney U-test.
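A minimal R sketch of this analysis is given below; the vectors and values are hypothetical placeholders rather than the study data, and simply illustrate the summary statistics, boxplots, and Mann-Whitney U test described above.
```r
# Hypothetical pronation-index values for the two groups (not the study data).
acl_group <- c(8.1, 9.3, 7.8, 8.9, 9.6, 8.4)
non_acl   <- c(6.2, 5.9, 7.1, 6.5, 6.8, 6.0)

summary(acl_group); summary(non_acl)   # medians, quartiles, minima and maxima
sd(acl_group); sd(non_acl)             # standard deviations
boxplot(list(ACL = acl_group, `non-ACL` = non_acl), ylab = "Pronation index")
wilcox.test(acl_group, non_acl)        # Mann-Whitney U test (two-sided)
```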
Discussion
The main finding of the present study is that elite professional soccer players who had suffered an ACL injury, had been operated on, and had successfully returned to sport exhibit significantly greater pronation of their feet when compared to a control group of elite professional soccer players who had never suffered such an injury. Therefore, overpronation of the foot and internal rotation of the tibia may produce stresses on the ACL and predispose it to injury when rotational forces are imposed on the knee [5].
These results are in agreement with a prior report that ACL-injured patients, regardless of the mechanism of injury, exhibit greater navicular drop measures than subjects with no history of ACL injury [2]. Increased foot pronation is associated with increased medial tibial rotation [1,5]. Internal rotation of the tibia on the femur increases the strain on the ACL [12], increasing the risk of ACL injury [13], and is in line with observations of an increased incidence of non-contact ACL injury in athletes with pes planus [5,14-17]. In a study of navicular drop in a group of ACL-injured and uninjured subjects, there were significantly higher navicular drop test scores in the ACL-injured group [15]. Hyper-pronation and the occurrence of ACL injuries may therefore be related [18].
In a controlled study, Woodford-Rogers et al. showed that navicular drop, as well as anterior tibial translation, were higher in ACL-injured athletes compared to an uninjured group matched by sport, competitive level, and amount of time played [17].
However, it is uncertain whether hyper-pronation, as measured by navicular drop alone, is an adequate predictor of non-contact ACL injuries.
As the foot excessively pronates, eversion of the subtalar joint leads to obligatory internal rotation of the tibia. The femur naturally begins external rotation at the midstance phase of gait, when the tibia of the pronated foot continues to internally rotate. The opposing rotatory torques of femoral external rotation and tibial internal rotation may stress the knee joint and stretch the structures that limit knee rotation [19].
Increased pronation is associated with greater internal rotation in the transverse plane at the knee [1], which places additional strain on the ACL during deceleration activities and increases the risk of rupture. The authors are unaware of any intervention studies that have examined the role of foot orthotics as a prophylactic means of preventing ACL ruptures in athletes who hyper-pronate.
Although biomechanical information is still unclear on this matter, higher joint laxity may be correlated with an increased ACL injury risk, as it increases mid-foot loading and thus affects lower limb kinematics. Increased muscular strength and neuromuscular control may compensate for the negative effects of joint laxity and be part of a wider panel of ACL dynamic loading [14,15]. This biomechanical relationship is found in prevention programs aiming to lower peak ground reaction forces and knee valgus in landing and cutting movements by increasing knee flexion and muscle loading capacities [20].
Athletes with increased generalized joint laxity demonstrate increased midfoot loading that may affect lower extremity biomechanics and potentially increase ACL injury risk. Although laxity may increase the risk of ACL injury, particularly in females, it is unclear whether joint laxity can be modified [14,15,20]. However, it may be possible to indirectly overcome the adverse effects of joint laxity with increased muscle strength and heightened neuromuscular control. The biomechanical relationships of ACL loading with lower extremity kinematics and kinetics suggest that training programs should focus on increasing the knee flexion angle and reducing peak impact ground reaction forces and knee valgus moments during landing [18].
One of the main issues in ACL injury management remains its clinical assessment and diagnosis. Although clinical examination is still the primary tool used by surgeons, with the Lachman and pivot shift tests being the most sensitive and specific manual assessments, such clinical examinations come with limitations, such as the surgeon's subjective training and experience [11].
It is therefore crucial to cross-check these clinical examinations with medical imagery and instrumented laxity measurements, along with the patient's full medical history. This combination allows for a better diagnosis and medical decision on the correct treatments to present. Clinical examination remains one of the most common and important diagnostic procedures for the ACL-deficient knee. The Lachman and pivot shift tests are considered the most sensitive and specific manual clinical tests, respectively. However, clinical examination has some limitations: it is dependent on a surgeon's training, experience, and technique [11]. Therefore, medical history, instrumented laxity testing, imaging, and arthroscopy also play an important role in the diagnosis of ACL ruptures. In addition, clinical examination, instrumented laxity testing, and MRI are required in combination to obtain broader diagnostic certainty.
Objective stress radiography may also assist clinicians in both diagnosing and managing an ACL injury by providing objective measures of sagittal and rotatory laxity of the knee. These methods may be used to distinguish complete ACL ruptures from partial tears, in which case the absence of laxity-related functional impairments might lead patients to be candidates for nonoperative treatment [20].
The present study offers an original contribution to the understanding of laxity and instability at the human knee joint. Measurements were taken in a non-invasive fashion, confirming that it is important to measure the anteroposterior and rotational laxity of the uninjured contralateral knee in assessing the laxity of the injured knee.
The present study evidenced a statistically significant association between overpronation, anterior knee joint laxity, and ACL injury. There are limitations to the present study. For example, given the retrospective nature of the investigation, one cannot infer a cause-effect relationship. Also, the number of subjects involved is relatively small. However, only elite professional athletes were recruited for the study. In such a setting, it would be difficult to recruit more subjects and to perform randomized studies to ascertain whether a given intervention would benefit in terms of the effects on the rate of occurrence of ACL injuries.
Despite these limitations, the descriptive data and discriminant analyses indicate clinically relevant differences in inherent knee joint laxity and foot biomechanics between ACL-injured and non-ACL-injured athletes.
Measures of navicular drop (overpronation) and anterior knee joint laxity can be easily obtained. Pronation can be limited by selecting appropriate footwear or with orthotics. If joint laxity is of concern, a training program can be prescribed to strengthen the muscles surrounding the knee and increase neuromuscular control. Thus, these risk factors can be identified during routine physical examination and potentially mitigated through orthotics management and exercise. Also, appropriate neuromuscular control can be introduced into the training routine to reduce the relative risk of ACL injury.
Conclusions
Reduction of the risk of non-contact ACL injuries should include the use of modern biomedical technology that includes state-of-the-art laxity and pronation measures. Future studies are needed to understand how neuromuscular control is related to laxity and pronation measures and the lower extremity biomechanics associated with ACL injury.
FIGURE 2 :
FIGURE 2: Differences in pronation index (Α) and laxity (Β) values between the ACL and non-ACL group.
TABLE 1: Differences in pronation index and laxity values between the ACL group and non-ACL group
a Wilcoxon rank sum test. ACL: anterior cruciate ligament; SD: standard deviation; IQR: interquartile range; n_pt: number of patients. | 2023-10-30T15:11:01.969Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "fd91f1b4b3d0079700001d3f81e37942f515013f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "34898a0c765176c9a596bf2b7cf91cdda5ac2efb",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": []
} |
227153019 | pes2o/s2orc | v3-fos-license | Somehow I always end up alone: COVID-19, social isolation and crime in Queensland, Australia
The COVID-19 pandemic has dramatically affected social life. In efforts to reduce the spread of the virus, countries around the world implemented social restrictions, including social distancing, working from home, and the shuttering of numerous businesses. These social restrictions have also affected crime rates. In this study, we investigate the impact of the COVID-19 pandemic on the frequency of offending (crimes include property, violent, mischief, and miscellaneous) in Queensland, Australia. In particular, we examine this impact across numerous settings, including rural, regional and urban. We measure these shifts across the restriction period, as well as the staged relaxation of these restrictions. In order to measure the impact of this period, we use structural break tests. In general, we find that criminal offences have significantly decreased during the initial lockdown, but as expected, increased once social restrictions were relaxed. These findings were consistent across Queensland’s districts, save for two areas. We discuss how these findings are important for criminal justice and social service practitioners when operating within an extraordinary event.
Introduction
With its origins in Wuhan, China in late 2019, the COVID-19 pandemic has spread around the world (Readfern 2020). By the end of the first quarter of 2020, most nations had implemented social restrictions (social distancing, closing non-essential business, restricting local movement, etc.) in efforts to minimise the spread of the virus. Social interactions moved online, as did the economy, with the ways in which we interact changing radically, in many cases literally overnight. Aside from essential workers (health care providers, front-line officers, food services, etc.) and limited trips for groceries, medical concerns, and exercise, governments instructed residents to stay home.
Exceptional events, such as the COVID-19 global pandemic, though catastrophic in a number of dimensions, provide opportunities for natural experiments. These natural experiments can then be used to test our theories of human behaviour that can then be used to (hopefully) improve societal responses to future exceptional events, planned or otherwise. COVID-19 is considered an exceptional event because it impacts social structures and collective behaviour (Barton 1969). The introduction of social restrictions related to COVID-19 radically impacted human movement patterns (Google 2020) with significant reductions in movement away from the home.
Alongside these shifts in movement, many criminologists would expect changes in the opportunities for criminal activity. With more people spending the majority of their time at home, home guardianship would arguably improve and opportunities for crimes such as residential burglary would decrease. However, prolonged time spent at home by all residents, compounded by the financial stress of a global pandemic, could create additional opportunities for other crime types, such as domestic violence, as victims' exposure to their offender increases (United Nations 2020).
Of additional interest is how social life and opportunities for crime may be impacted differently in different contexts. While much of the preliminary research on COVID-19 and crime has supported opportunity theories (Stickle and Felson 2020), few studies have examined the impact of COVID-19 in non-urban settings. For example, in rural settings, opportunities for crime types such as commercial burglary would be much lower, while opportunities for crime types such as agricultural equipment or stock theft might be much higher (Harkness 2017). Alternatively, in tourist destinations, there may be less guardianship against burglary as these properties may sit empty if their owners or renters are unable to travel (Mawby 2014). It is important then to examine the shifts in offending across different geographical contexts as a result of the COVID-19 pandemic to determine if certain areas may be at a higher risk of specific types of victimization.
In this paper, we investigate the impact COVID-19 on the opportunity structures for crime in Queensland, Australia. We identify changes that occur at the time of lockdown (social restrictions) as well as when the staged relaxation of those social restrictions occurred. We contribute to the research on COVID-19 and crime through the inclusion of this staged relaxation of social restrictions period as well as through an analysis of state-wide data in Queensland, Australia for 2 years. By analysing offence data across the entire state of Queensland, Australia separated into the 15 police districts, we are able to identify differential effects of social restrictions between urban, rural, and remote areas in Queensland. The implications for this research are to better understand the impacts of changing opportunity structures from imposed social restrictions that may improve planning for public safety with regard to future outbreaks of COVID-19 (currently underway in a number of countries) and future exceptional events.
Related research
The impact of exceptional events on crime are understood through three theoretical frameworks: social cohesion/altruism, social disorganisation, and opportunity theories. Social cohesion/altruism approaches claim that during an exceptional event crime rates remain the same or decline, because people come together to help each other during a crisis (Barton 1969;Quarantelli 2007;Zahran et al. 2009). Empirical research has supported this in the context of both violent and property crime (Lemieux 2014;Siegel et al. 1999;Sweet 1998). It is important to note, however, that exceptional events often exacerbate social inequalities (Craemer 2010;Fothergill and Peek 2004). As such, others have argued that during an exceptional event social cohesion and collective efficacy are weakened by the social disorganization of crisis, impacting the ability of residents to control antisocial behaviour (Harper and Frailing 2012;Prelog 2016). Alternatively, to both of these proposals, opportunity-based explanations, such as the routine activity approach state that changes in crime are dependent on changes in the opportunity structure for crime (Hodgkinson and Andresen 2020).
Theoretical expectations vary from offence type to offence type and place to place: crime may increase because of decreased guardianship over businesses during a lockdown (commercial burglary), or crime may decrease because of the loss of opportunities under the same conditions (shoplifting) (Hodgkinson and Andresen 2020). In the case of COVID-19, the nature of the exceptional event is quite different. Unlike a typical natural disaster, that can create opportunities for disorganization and crime, as well as opportunities for altruism and social cohesion (Lemieux 2014) the COVID-19 pandemic has seen government systems actively discourage direct social interaction, but supporting other forms of social interaction in order to remain connected. This has created a particularly unique exceptional event type, which requires further investigation. Stickle and Felson (2020) and Eisner and Nivette (2020) have outlined a prospective research agenda that criminologists may undertake to understand the effects of COVID-19 and its social restrictions on criminal activity. In addition to this call for criminological research on a global-level natural experiment, reports and research are emerging discussing the impact of COVID-19 on crimesee Stickle and Felson (2020) for a discussion of the various research briefs that have emerged recently. Though preliminary in all cases due to the recency of changes in opportunity structures and that the effects of COVID-19 are far from over, this research has shown a number of interesting patterns that are consistent with expectations derived from changing opportunity structures (Stickle and Felson 2020).
COVID-19 and crime
Considering residential burglary in Detroit, Michigan, Felson et al. (2020) found that crime decreased significantly during the early stages of social restrictions. Perhaps most interesting, with implications for crime prevention, is that these decreases were greatest in areas with land use that is dominantly (> 90 percent) residential. In fact, areas with more mixed land use already saw increases in residential burglary by the end of March 2020. Mohler et al. (2020), in a study of a number of crime types in Los Angeles, California and Indianapolis, Indiana, found that the changes in the volume of crime are not large in many cases. They did, however, find notable changes in robbery and traffic stops, with increases in domestic violence. Though instructive, these articles only considered data during the year 2020. As such, any changes (or lack thereof) may be due to expected seasonal changes in their respective cities. Borrion et al. (2020) investigated the impact of COVID-19 on retail theft in a city in China. Using a resilience framework, they found that retail theft decreased by over 60 percent, rebounding to a level higher than expected after social restrictions were relaxed. In an analysis of a variety of crime types across 16 cities in the United States, Ashby (2020) found no changes in serious assaults, decreases in residential burglary in some cities, decreases in theft from vehicles, inconsistent changes across cities for theft of vehicle, and little change in nonresidential burglary. Of particular note is that Ashby (2020) found that results differed across the 16 cities under analysis: crime did not decrease in all cities and even increased in some cities. Hodgkinson and Andresen (2020) investigated a number of property, violence, and social disorder crimes in Vancouver, Canada. They found that most crime types decreased (or showed no increases based on expected seasonal patterns). Perhaps most interesting was a sudden drop in other theft (e.g. shoplifting) and a sharp increase and subsequent decrease in commercial burglary. Halford et al. (2020) used Google Covid-19 Community Mobility Reports to estimate the elasticity of crime to measure responsiveness to social restrictions imposed in a United Kingdom police service. They found that elasticities varied by crime type. In an analysis of domestic violence in Dallas, TX, Piquero et al. (2020a) found an initial spike, and subsequent decline, in the early stages of lockdown; see Reingle Gonzalez et al. (2020) and Piquero et al. (2020b) for discussions of these results. de la Miyar et al. (2020) investigated both conventional crime (domestic violence, burglary, and vehicle theft) as well as organized crime in Mexico City, finding that conventional crimes decreased but organized crime remained stable. And most closely related to the current research, Payne et al. (2020) analyzed violent crime for the state of Queensland, Australia, finding that violence dropped in the early stages of lockdown.
Though instructive, there are still many avenues of research to be undertaken. First, most of the existing research is based in North America-Stickle and Felson (2020) and Payne et al. (2020) do cite some research briefs that are outside of North America. And second, although there are a number of different cities, the areas studied are all urban areas or an entire state. However, if an opportunity approach is best suited to understanding crime trends in a pandemic, different opportunity structures need to be considered. In the analyses below, we consider all of the police districts in Queensland, Australia that include urban, regional and rural areas.
Data and methods
We analyse social disorder, property, violent, and other offences across the entire state of Queensland, Australia. Queensland is approximately 1.85 million km² (about 14 times larger than England) with a population of approximately 5 million people (less than one tenth the population of England), resulting in a fairly low population density outside of urban areas like Brisbane and the Gold Coast. The Queensland Police Service (QPS) organises the state into 15 districts of varying sizes, see Fig. 1. The majority of the Queensland population is in three of those districts: Gold Coast, North Brisbane, and South Brisbane, the predominantly urban areas. The remaining areas of Queensland, though containing small urban centres, are mostly rural, regional and remote areas (Far North, Mount Isa, South West). Some areas on the eastern coast also contain popular vacation areas (Sunshine Coast) and tourist destinations such as Cairns in the Far North district.
As shown in Fig. 2, Queensland has been successful, with early social restrictions, in reducing COVID-19; the spike in June related to one person who worked on a large farm (ABC News 2020). Restrictions began with the declaration of a public health emergency the day after the first COVID-19 infection was identified in Queensland, 28 January 2020. On 19 March 2020 Australia banned arrivals of non-citizens and non-residents, with all non-essential services being shuttered 23 March 2020, and state borders being closed to all non-essential travel 25 March 2020. Stage 1 of re-opening the economy began 02 May 2020, including the re-opening of restaurants, pubs and bars (up to 10 people) 2 weeks later. Stage 2 of the re-opening (up to 20 people in businesses and homes and full travel within Queensland) began 01 June 2020. And Stage 3 (largely normal business operations with COVID-19 safe planning) began 03 July 2020. Aside from the spike in early June 2020, the social restrictions imposed were very successful in reducing new COVID-19 infections: most days after 01 May 2020 had zero new infections.
We expect that offences will decrease during the lockdown period, and begin to increase again during the staged relaxation of social restrictions. These reductions and following increases are based on changes in the opportunities for crime. This is evident in Fig. 3, that shows time spent in Queensland by location (Google 2020). This figure clearly shows that all activities outside of "residential" decreased before or immediately after lockdown. And during the relaxation of social restrictions, many of these activities began to return to their baseline level, with grocery-pharmacy returning to its baseline at this time. However, there may be some offences that may not change in this pattern. For example, because of the Queensland border restrictions, the QPS districts that border other Australian states may experience increases in traffic-related offences during both lockdown and staged relaxation of social restrictions because the borders remained closed at this time. This may be particularly the case because the Queensland Police Service modified its service delivery with up to 10 percent of its officers being deployed specifically to COVID-19 related duties (Crockford and Lynch 2020).
Data
We use open source data provided by QPS: https://www.police.qld.gov.au/maps-and-statistics. Our data are measured weekly from 03 May 2018 through to 02 July 2020, 113 observations; 03 May 2018 is the first available data for QPS districts at the time of data collection and 02 July 2020 is the latest available data at the time of data collection, but also when social restrictions entered another stage (becoming less restrictive). Of the 19 available offence types, we investigate any changes in the following: good order (social disorder), mischief (property damage), assault, robbery, other violence, total violence (the sum of assault, robbery, and other violence), burglary, theft, theft of vehicle (TOV), drugs, fraud, and traffic. We exclude arson, handling stolen goods, homicide, liquor-related (does not include public drunkenness), prostitution, trespassing, and weapons-related offences as the counts for most of these offence types were consistently quite low or not always clearly identified as criminal.
Methods
We analyse slightly more than 2 years of data (113 weeks), controlling for the longer-term and seasonal trends. We use weekly counts to maximise the number of observations during the short time horizon for this global pandemic, while minimising volatility. As an additional method to address the volatility in weekly offence data we use a data smoothing method, the Hodrick and Prescott (1997) filter, to obtain the trend in the data for analysis. The HP (1997) filter was developed in the macroeconomics literature to identify business cycles and separates the trend, cyclical, and error components of a time series:
y_t = \tau_t + c_t + \epsilon_t , (1)
where y_t is the time series of interest, \tau_t is the trend component, c_t is the cyclical component (weekly pattern, for example), and \epsilon_t is the error component. The trend component, \tau_t, is identified by minimising the following function:
\min_{\tau} \sum_{t=1}^{T} (y_t - \tau_t)^2 + \lambda \sum_{t=2}^{T-1} \big[ (\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1}) \big]^2 . (2)
Our choice to use the HP (1997) filter stems from the fact that it identifies the trend in the data without the loss of observations that occurs when using more traditional methods such as moving average calculations. It is important to note that the HP (1997) filter has its critics. In particular, its use for the identification and analysis of (business) cycles in time series has been argued to be potentially problematic (Hamilton 2018). However, we are using the HP (1997) filter to smooth a volatile (weekly) data set, not to analyse the (timed) cyclical component of the data.
The primary statistical methodology is a structural break test with robust (heteroskedasticity- and autocorrelation-consistent) standard errors. These analyses are becoming increasingly common in the criminological literature (Piehl et al. 2003; Reid and Andresen 2014; Hodgkinson et al. 2018), including an analysis of COVID-19 and crime (Hodgkinson and Andresen 2020). We use a version of the Chow (1960) test to exogenously identify (we impose the break points) a change in the trends of the offence types:

Offences_t = β_0 + β_1 Week_t + β_2 Week_t² + β_3 Trend_t + β_4 Lockdown_t + β_5 LockdownTrend_t + β_6 Stage_t + β_7 StageTrend_t + ϵ_t.  (3)

We account for the known seasonal component in offence data (Breetzke and Cohn 2012; Cohn and Rotton 2000; Farrell and Pease 1994; Linning et al. 2017; McDowall et al. 2012) by including both week and week-squared variables. Week is measured as sequential values (1, 2, 3, …, 52) over the course of a year, whereas Week² is the squared value of Week; these two variables account for the known seasonal effect in the data, not the week-to-week volatility filtered out using the HP (1997) filter. Overall Trend, measured as sequential values for the entire time series (1, 2, 3, …, 113), captures any underlying trend in the data for the entire time period.
We include two break points in the data and test for their statistical significance. The first breakpoint captures the lockdown, both its immediate effect (if any) and any change in trend; this break point is 25 March 2020. The second breakpoint is the staged relaxation of social restrictions (beginning with the opening of restaurants, pubs, and bars), both its immediate effect (if any) and any change in trend; this break point is 16 May 2020. The Queensland government implemented a number of relaxations to social restrictions, but we chose this date because it represented the opening of restaurants, pubs, and bars, which necessarily increases movements outside of the home and work environments. Each break-based variable has the value of zero before its representative break time and unity (Lockdown and Stage) or sequential values (LockdownTrend and StageTrend) thereafter. All estimation for the sequential Chow tests is undertaken using R: A language and environment for statistical computing, version 3.5.3 (R Core Team 2019).
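As an illustration of how such a break model with heteroskedasticity- and autocorrelation-consistent standard errors can be estimated in R, a minimal sketch is given below. The data frame and variable names (district_data, week, lockdown, and so on) are assumptions made for the example, not the authors' code; the sandwich and lmtest packages supply the Newey-West covariance estimate and the robust coefficient test.

library(sandwich)  # HAC (Newey-West) covariance estimators
library(lmtest)    # coeftest() for inference with a supplied covariance matrix

# district_data: one row per week (113 rows), with assumed columns:
#   trend_component             - HP-filtered weekly offence trend (outcome)
#   week, week_sq               - seasonal terms (1..52 within each year, and its square)
#   trend                       - overall trend (1..113)
#   lockdown, stage             - 0/1 dummies switching on 25 March 2020 and 16 May 2020
#   lockdown_trend, stage_trend - 0 before the break, then 1, 2, 3, ... thereafter

fit <- lm(trend_component ~ week + week_sq + trend +
            lockdown + lockdown_trend + stage + stage_trend,
          data = district_data)

# Break coefficients with HAC robust standard errors
coeftest(fit, vcov = NeweyWest(fit))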
Results
The results of the structural break tests are presented in Tables 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12. In addition to the structural break results, we include the pre- and post-lockdown average weekly counts by offence type and QPS district, for context regarding the magnitude of the various parameters. For the structural breaks we present the overall trend variable, the lockdown dummy variable (immediate impact of lockdown), the lockdown trend variable (change in trend after lockdown), the stages dummy variable (immediate impact of staged reduction in social restrictions), and the stages trend variable (change in trend after staged reduction in social restrictions).
The results for the social disorder offences, good order and mischief, are reported in Tables 1 and 2. The effects of the staged relaxation of social restrictions varied across districts (e.g. Mackay, and North Brisbane), though some districts did continue to have immediate drops in social disorder during the staged relaxation of social restrictions (e.g. Ipswich and Mount Isa). These differences in trends during the staged relaxation of social restrictions cannot be known based on the data but may be due to differences in policing responses across QPS districts.

The impact of lockdown and the staged relaxation of social restrictions is far more varied for violent crime (assault, robbery, other violence, and total violence), shown in Tables 3, 4, 5 and 6 and Figs. 6, 7, 8 and 9. With regard to lockdown, assault most often exhibited the expected pattern of immediate decreases and continued decreases in trend; this is also the case for robbery. Other violence, however, is far more varied, particularly with the trend during lockdown, most often increasing. Overall, aside from Mackay, when significant there are immediate drops in total violence (Table 6) with changing trends varying from district to district; and similar to the social disorder results, the stages dummy and trend variables are not as consistent.

Turning to the property offences reported in Tables 7, 8 and 9 and Figs. 10, 11 and 12 (burglary, theft, and theft of vehicle), theft and theft of vehicle have the expected results for the lockdown immediate and change in trend effects: both negative. In fact, these are the most consistent results found with regard to these expectations, and theft has the largest number of statistically significant results. Burglary, however, exhibits an interesting pattern, discussed further below.

Lastly, miscellaneous offences, Tables 10, 11 and 12 and Figs. 13, 14 and 15 (drugs, fraud, and traffic), have varied results. The immediate effect of lockdown on drugs in most QPS districts is positive and statistically significant; all negative parameters are statistically insignificant. Aside from Capricornia and South Brisbane, the trend during lockdown is negative, when statistically significant. Though some districts continued to have positive immediate and trend effects during the staged relaxation of social restrictions (Wide Bay Burnett and South Brisbane, respectively), all other districts exhibited negative parameters for both the immediate effect and subsequent trend. Fraud and traffic, Tables 11 and 12, generally exhibited the expected negative and subsequent positive effects for the lockdown and staged relaxation of social restrictions, respectively, for both immediate changes and changes in trend.
Discussion
The results of the study offer some interesting areas for further exploration. First, while the findings are relatively consistent with those of other researchers on COVID-19 and crime (crime in general is down during the pandemic), we offer an analysis that considers the post-lockdown period. In addition, we use previous years of data for seasonal trend comparisons. As expected, most crime types demonstrate an increase after the easing of restrictions. This is consistent with opportunity theories and predicted by previous research on COVID-19 and crime (Hodgkinson and Andresen 2020). Though it is possible that these results (largely decreases in crime) may be explained by social cohesion/altruism, we do not have any data on changes in social cohesion/altruism to make any inferences here. And given the patterns of change, there is no support for social disorganisation as an explanation for these changes at this time. However, social disorganisation theory tends to be more instructive for longer term explanations of crime patterns (Andresen 2012, 2013; Land 1985, 1991).
Second, we conducted this analysis across a large and diverse Australian state. Much of the research on COVID-19 and crime has focused entirely on cities, particularly cities in North America, with more recent international research, as cited above. The analysis of numerous districts that include regional, rural and remote areas allows us to explore these patterns of crime in different geographical settings and amongst alternative opportunity structures. Indeed, we found different trend patterns in certain contexts.
In the case of the district of Mackay, a large mining community in Northeast Queensland, violence, robbery and assault all increased in concert with the COVID-19 lockdown. These increases are inconsistent with the declines in other districts. Mackay is unique in that it acts as the gateway to the Bowen Basin, which is 60,000 km² of coal reserves, the largest mining area in Australia. Mackay is also home to a large number of 'donga' communities, which are pop-up residences for mining workers and largely house men only. During the lockdown, Mackay suffered a huge loss of mining jobs due to the drop in the cost of coal internationally and the COVID-19 related restrictions put in place on mining workers (Szabo 2020; Whiting 2020). This situation differs greatly from other Queensland districts. While we can only offer conjecture at this point, we believe that the loss in jobs may have left a predominantly male workforce out of work and in the presence of a lot of other men who are also out of work. This could increase opportunities for assault or alcohol/drug-related violence (Carrington et al. 2012). Indeed, Mackay did experience one of the most significant increases in drug-related offences during the lockdown. In addition, this could create additional opportunities for domestic violence for men who are out of work and at home for extended periods of time with their victims (United Nations 2020); however, this depends on where the families of these workers live given the fly-in/fly-out nature of the mining communities. Further research on the impact of COVID-19 in this area, and in other non-urban or natural-resource-dependent communities, is necessary.
Third, we found that depending on the nature of ownership and use of space in cities, some cities may be at an increased risk for certain types of crime. The Sunshine Coast and Gold Coast, for example, experienced sharp initial increases in burglary at the beginning of the lockdown, followed by the expected decreases. A possible explanation for the different lockdown effects for burglary in the Gold Coast and Sunshine Coast may be that these two areas are close to the primary urban area of Queensland (Brisbane) and contain a lot of vacation homes and vacation rentals. Because owners were unable to travel to these locations during the lockdown (initial lockdown restrictions limited travel to under 50 km), these homes may have been suitable targets in that they had no guardianship. However, once these targets had been burglarized, they were no longer suitable, as owners and renters were unable to travel to these homes to replace what had been stolen. Because any response by owners, security, and police to these increased opportunities for burglary would be delayed by the lack of presence and ownership in these tourist spaces (Mawby 2015), this increase in burglaries is not surprising. In fact, this is precisely what occurred in Vancouver, Canada with regard to commercial burglary until security was increased and police increased activities in these areas (Hodgkinson and Andresen 2020); this is generally consistent with changing opportunity structures and crime (Hodgkinson et al. 2016; Hodgkinson and Andresen 2019). Again, we speculate this is the case, and further research into the impact of crime on tourist locations may be necessary.
Fourth, we did not find the shift in border-related traffic offences that we would have expected during and after the social restrictions. The districts that bordered other states (Mount Isa, South West, Darling Downs, Ipswich, Logan, and Gold Coast) did not experience an increase in traffic-related offences because of inter-state travel restrictions. This may simply be a result of the great distances between states in Australia and the increased effort necessary to break these laws, considering the presence of police along these borders and that all states in Australia imposed strict domestic travel restrictions during this time. It may also be the case that changes in police operational practices during COVID (Crockford and Lynch 2020; Queensland Police Service 2020) have led to decreased reported incidents. It may be interesting to see whether other, smaller countries that put border controls in place witnessed any shift in traffic-related offences.
Fifth and finally, the study demonstrates an increase in drug-related offences post-lockdown in 13 of the 15 districts. Nine of these increases were significant. Mt Isa, Logan and Mackay experienced the largest increases in these offences in comparison to the baseline in the early stages of lockdown. While Logan and Mt Isa decreased in drug-related offences after the relaxation of restrictions, Mackay remained steady. As mentioned above, this may be a result of the loss of mining jobs in the area due to the drop in coal prices internationally. The overall increase in drug use at the point of lockdown across most of the districts could indicate an increase in self-medicating behaviours in response to the insecurity of the pandemic. While the social restrictions have had the expected impact on the accessibility of drugs on the global market, as many routes for drug trafficking have been interrupted, this has not prevented people from stockpiling and using less pure forms or lesser alternatives such as cannabis (which is still illegal in Australia) to self-medicate (UNODC 2020). The increase in drug use across Queensland during the pandemic is probably the finding most contradictory to opportunity theories, as it demonstrates that increases in effort or decreases in opportunities do not dissuade those who are dealing with self-medicating behaviour or addiction.
As with all research, ours is not without limitations. First and foremost, our analyses rely on offences reported to the police. Though the lack of reporting to the police is long-standing and well-known (Bulwer 1836; Perreault 2015), reporting of offences to the police is higher in Australia than in many western countries: 54% (assault), 39% (sexual assault), 90% (theft of vehicle), 75% (residential burglary) (Australian Bureau of Statistics 2018). However, there is no way to know if reporting rates have remained constant during the pandemic; for example, police and other social services have modified practices because of social distancing requirements, and some victims may be less likely to report criminal occurrences to avoid contact, more generally. There is little that can be done regarding this limitation, but given the higher level of reporting in Australia it is of a lesser concern than in other countries. Second, though it would not be feasible across an area as large as Queensland, Australia, we do not consider local area variations for the impact of COVID-19 on crime. As with previous research, cited above, there may be variations within districts that may prove to be important. However, we believe this large-scale analysis sets the stage for further investigation into some of the identified areas such as communities reliant on natural resources, and tourist destinations. Third, though domestic violence has been reported to be on the rise in Queensland, Australia (Bavas 2020), and important in the context of pandemics (Mohler et al. 2020; Parkinson 2019; United Nations 2020), we are unable to disentangle the violence data in this study as domestic violence incidents are rarely presented separately in open data sources. Fourth, though we have been able to show regional variations, our regions (police districts) are still quite large. Smaller areas of analysis should be undertaken to see if the effects of COVID-19 vary by neighbourhood.
In addition to addressing these limitations, future research should move beyond investigations of the impact of COVID-19 on crime in urban centres. Significant populations, upwards of 20 to 30 percent, continue to live in rural areas, even in developed countries such as Australia, Canada, the United States, France, Germany, and Japan (Statistics Canada, 2012). In addition, in countries like Australia and Canada, crime rates are often higher in rural and regional areas (Hogg and Carrington 2006;Ruddell 2016). In the context of COVID-19, these areas suffer greater harms because of decreased access to social/medical services. Moreover, opportunity approaches can only go so far in understanding the (changing) patterns of crime, particularly with regard to exceptional events. Longer term effects need to be tested in order to properly assess the opposing predictions of social cohesion/altruism and social disorganisation theory.
Conclusion
We explored the rates of different crime types across the state of Queensland, Australia before, during and after the COVID-19-related lockdown. We find, as predicted by opportunity theories, that most crime types, except for drug-related offences, decreased during the lockdown and subsequently increased after the restrictions were lifted. However, in particular contexts, certain crime types increased as a result of the lockdown. These shifts appear to be affected by loss of jobs and self-medicating behaviours, as well as the number of suitable targets in tourist destinations. We suggest that further research examine non-urban or unique contexts to explore these findings in greater detail. However, we believe these findings are useful in understanding the impact of a global pandemic on crime trends. | 2020-11-25T05:05:41.876Z | 2020-11-24T00:00:00.000 | {
"year": 2020,
"sha1": "3831d6461817ebdf6337570a6a5da6c704225d40",
"oa_license": "CCBY",
"oa_url": "https://crimesciencejournal.biomedcentral.com/track/pdf/10.1186/s40163-020-00135-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc90f7c53108df689376490a9756e9c4f0954352",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
264434038 | pes2o/s2orc | v3-fos-license | Did Clostridioides difficile testing and infection rates change during the COVID-19 pandemic?
Testing for and incidence of Clostridioides difficile infection (CDI) were examined at a single center before and during the first surge of the COVID-19 pandemic. Incidence of CDI remained stable, but testing decreased significantly during the first surge despite an increase in antibiotic use. There were no new CDI-focused antimicrobial stewardship interventions introduced during this time.
While routine infection prevention practices are a priority for all healthcare systems, the COVID-19 pandemic has underscored the importance of ongoing infection prevention efforts [1]. Despite this, there has been little research on the impact of the COVID-19 pandemic on healthcare-associated infections (HAI) in the United States [2,3].
Current literature suggests that increased adherence to infection prevention recommendations, increased antibiotic use, improved hand hygiene, and correct donning and doffing of personal protective equipment, may have influenced HAIs in the US during the pandemic [4,5]. The aim of this study was to investigate Clostridioides difficile (CDI) testing and incidence during the initial surge of the pandemic. We hypothesized that strict adherence to contact precautions may have resulted in a decreased incidence of CDI in hospitalized patients during the first peak of the COVID-19 pandemic, and that CDI testing may have increased even in the absence of directed diagnostic stewardship efforts.
We conducted a single-center, retrospective, observational study at the Veterans Affairs (VA) Hospital in Ann Arbor, Michigan between January 2019 and June 2020. The VA Ann Arbor is classified as level 1a, providing the most complex level of patient care. We compared data on CDI tests from January 2019 through February 2020 to data from March 2020 (the admission of the first patient with COVID-19 at our institution) through June 2020. Pre-peak and peak periods were defined by confirmed cases in Washtenaw County [6]. No novel diagnostic or CDI-focused stewardship interventions were introduced by the antimicrobial stewardship program during the study period. Guidance on optimizing antibiotic use for secondary bacterial pneumonia was added to the institutional COVID treatment guidelines on March 30, 2020. Antimicrobials considered high risk for CDI were not included in this guidance.
CDI testing at our institution is performed with enzyme immunoassay (EIA). A positive CDI test was defined by positivity for both glutamate dehydrogenase (GDH) and toxin. Interrupted time series analysis was performed using Stata v.16.1 software (StataCorp LLC, College Station, TX). This project received Institutional Review Board approval (IRB-2020-1234) from the Ann Arbor VA Human Studies Committee.
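The authors performed the interrupted time series analysis in Stata; purely as an illustration of the approach, a comparable segmented regression can be sketched in R as follows. The data frame and column names are hypothetical, and the model form (a level and slope change at the first surge) is an assumption consistent with the results reported below, not a description of the authors' exact specification.

# cdi_monthly: assumed data frame with one row per month, January 2019 - June 2020 (18 rows)
#   eia_tests  - number of EIA tests obtained that month
#   time       - 1, 2, ..., 18 (overall trend)
#   surge      - 0 before March 2020, 1 from March 2020 onward (level change)
#   time_after - 0 before March 2020, then 1, 2, ... thereafter (slope change)

its_fit <- lm(eia_tests ~ time + surge + time_after, data = cdi_monthly)
summary(its_fit)

# The 'time_after' coefficient corresponds to the change in monthly test counts after
# March 1, 2020 (reported in the study as a decrease of about 10 tests per month).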
There were 6525 total admissions and 34,533 bed days between January 1, 2019 and June 30, 2020. The number of COVID-19 admissions ranged from 6 to 27 patients each month. The percentage of COVID-19 patients relative to total patients admitted each month was 5%, 9%, 2%, and 2% in the months March–June 2020. There were 900 total EIA tests obtained and 104 total positive cases of CDI between January 2019 and June 2020. Percent positivity of CDI tests ranged from 6% to 22% and was not significantly different in the pre- vs. peak-pandemic periods (p = 0.4). Only one of those positive tests was from a patient with COVID-19. Average monthly bed days in the pre-pandemic and peak-pandemic period were not statistically significantly different (pre: 1935 days/month, peak: 1859 days/month, p = 0.2). There was a significant decrease in EIA tests after March 1, 2020 (the COVID peak in our region), compared to January 1, 2019–March 1, 2020 (Fig. 1). After March 1, 2020 the number of EIA tests obtained decreased by 10.2 tests each month (95% Confidence Interval [CI] −18.7 to −1.7; p = 0.02). There was no statistically significant change in the incidence of CDI/10,000 patient days (p = 0.5). Use of antibiotics that were defined as high risk for CDI (clindamycin, cefotaxime, ceftriaxone, ceftazidime, cefepime, cefdinir, cefpodoxime, cefixime, ciprofloxacin, gemifloxacin, levofloxacin and moxifloxacin) increased in the months of April–June 2020 (odds ratio = 1.4) (Fig. 2).
This retrospective, observational study examined CDI testing and diagnosis incidence before and during the first peak of the COVID-19 pandemic in Washtenaw County, Michigan. We found that while the number of admitted patients remained relatively stable, testing of CDI decreased during the first peak of the pandemic as compared to months before the initial peak. Incidence remained stable. During the first peak in our institution, elective surgeries were delayed, and some medical care transitioned from inpatient to outpatient management [7]. In addition, patients were transitioned from double occupancy to single-bed rooms and the correct donning and doffing of PPE was reinforced among hospital employees through ongoing infection prevention training. Some of these changes may have contributed to decreased testing of CDI. Incidence of CDI may have remained stable despite decreased testing due to potential over-testing or inappropriate testing before the pandemic or more thoughtful testing during the first surge. It will be interesting to monitor these trends in the future.
Our findings differ from the limited existing literature that describes increases in nosocomial infections, particularly central line-associated bloodstream infection and ventilator-associated pneumonia, during the peak of the COVID-19 pandemic [8]. Other recent work has shown fewer cases of CDI despite increased antibiotic use during their initial surge of COVID-19 patients, though large-scale studies on HAI are pending [2]. National trends in CDI have been declining in the US in recent years. However, reporting during the COVID-19 pandemic may have been affected by unexpected US Health and Human Services policies and exemptions to public reporting for HAIs afforded by the Centers for Medicare and Medicaid Services [9]. Therefore, studies at the institutional level, such as ours, may serve as snapshots into larger national CDI trends during this time.
Prior studies have found that in the treatment of COVID-19 patients, antibiotics were frequently overused in the early part of the pandemic [5,10]. At our institution, we found an increase in the use of antimicrobials overall and in high-risk CDI antibiotics between April and June 2020. While the decrease in CDI during this time occurred despite concomitant overuse of antibiotics, we were unable to determine whether there was a difference in CDI testing and antibiotic use among patients with COVID-19 compared to patients without COVID-19 due to the limited sample size.
Our study has several limitations. First, as an observational study we are unable to establish causation. Second, institution-specific biases may have inadvertently affected the results in this single-center study, limiting its generalizability. In order to preserve PPE at our institution, on March 20, 2020, patients identified to have vancomycin-resistant Enterococci (VRE) or methicillin-resistant Staphylococcus aureus (MRSA) infection or colonization were no longer required to use contact precautions. Although we attempted to identify concurrent changes made to CDI testing over the time period of the study that could have affected the results, it is possible that we were unable to account for all potential developments. Finally, our VA hospital has comparatively low rates of CDI at baseline, with more than 30 years of fluoroquinolone restriction in place and a longstanding antimicrobial stewardship program.
In this single center study, we observed a stable incidence of CDI and decreased testing during the first peak of the COVID-19 pandemic. Understanding local HAI reporting is critical, as changes in HAI reporting structures and exemptions during this time may have affected national reporting. Further research should be undertaken to investigate the effect of COVID-19 on other HAI reporting within the US health care system.
Declaration of competing interest
The authors do not report any conflicts of interest. | 2021-05-24T13:15:26.574Z | 2021-05-22T00:00:00.000 | {
"year": 2021,
"sha1": "f2c6f12868344f52d66287658c6fd16b722e8bc8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.anaerobe.2021.102384",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "725fea6dd280f724479cab1173b9beca7a8d3103",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258051823 | pes2o/s2orc | v3-fos-license | A cross-sectional study of retinal vessel changes based on optical coherence tomography angiography in Alzheimer’s disease and mild cognitive impairment
Background The involvement of the retina and its vasculature has recently been described in Alzheimer's disease (AD). Optical coherence tomography angiography (OCTA) can be used noninvasively to assess retinal blood flow. Objective This study aimed to compare vessel density (VD) and blood perfusion density (PD) of the macula in AD patients, mild cognitive impairment (MCI) patients and healthy controls by OCTA, which may provide new ideas for diagnosis of AD or MCI. Methods AD patients, MCI patients and healthy controls underwent comprehensive ophthalmic and neurological evaluations, including cognitive function assessments as well as visual acuity, intraocular pressure (IOP), slit lamp examinations, and OCTA. General demographic data, cognitive function, retinal VD and PD were compared among the three groups. The correlations among retinal VD, PD and cognitive function, amyloid-beta (Aβ) protein and phosphorylated Tau (p-Tau) protein were further evaluated. The correlations between the retinal superficial capillary plexus and cognitive function, Aβ protein and p-Tau protein were also explored. Results A total of 139 participants were recruited into this study, including 43 AD patients, 62 MCI patients, and 34 healthy controls. After adjusting for sex, age, history of smoking, history of alcohol intake, hypertension, hyperlipidemia, best corrected visual acuity, and IOP, VD and PD in the nasal and inferior regions of the inner ring, and superior and inferior regions of the outer ring, in the AD group were significantly lower than in the control group (p < 0.05). PD in the nasal region of the outer ring also significantly decreased in the AD group. VD and PD in the superior and inferior regions of the inner ring, and superior and temporal regions of the outer ring, in the MCI group were markedly lower than in the control group (p < 0.05). After adjusting for sex and age, VD and PD were correlated with Montreal Cognitive Assessment Basic score, Mini-mental State Examination score, visuospatial function and executive function (p < 0.05), while Aβ protein and p-Tau protein had no relationship with VD and PD. Conclusion Our findings suggest that superficial retinal VD and PD in the macula may be potential non-invasive biomarkers for AD and MCI, and these vascular parameters correlate with cognitive function.
Introduction
Alzheimer's disease (AD) is a common progressive degenerative disease of the central nervous system, and has been the most common cause of dementia, accounting for 50-75% of cases (Lane et al., 2018). It has a high prevalence in the elderly, and is mainly characterized by progressive cognitive dysfunction and behavioral impairment (Lane et al., 2018). Mild cognitive impairment (MCI) is the transitional stage between normal aging and dementia (Gauthier et al., 2006). It is characterized by cognitive decline, but the ability to live a normal life is not affected (Kim et al., 2017). Of note, people with MCI have a high risk of developing dementia. It is estimated that 32% of MCI patients will develop AD within the next 5 years (Ward et al., 2013). AD patients bring great economic and social burdens to the family and even the whole of society. It is estimated that the annual total cost of caring for AD patients in China will exceed US $507.49 billion by 2030 (Jia et al., 2018). Prevention and treatment of AD remain a worldwide problem. This may be ascribed to the difficulty of early diagnosis: the onset of AD is insidious, with pathological changes in the brain occurring 20 years or more before clinical symptoms (Villemagne et al., 2013; Gordon et al., 2018). The identification of these pathological changes in the brain requires expensive positron emission tomography/computed tomography (PET-CT) and invasive cerebrospinal fluid (CSF) tests, which are not widely available in clinical practice (Jack et al., 2018). Therefore, it is imperative to develop economical and noninvasive tests for the early recognition of AD and MCI.
The retina and the brain share some features in embryology, anatomy and physiology (Lee et al., 2020). First, the retina develops from the neuroectoderm, having the same embryonic origin as the brain, and is a sensory extension of the brain (Hart et al., 2016). Second, the retina is an extension of the diencephalon and has a blood-retinal barrier similar to the blood-brain barrier (Baker et al., 2008). Retinal small blood vessels and small cerebral blood vessels also have similar physiological properties (Patton et al., 2005). The microcirculation systems of both are high oxygen-extraction systems, and their blood flow depends on regional neuronal activity (Patton et al., 2005). Autoregulatory mechanisms maintain relatively constant blood flow even when perfusion pressure changes (Yan et al., 2021). Moreover, autopsy studies have indicated that amyloid-beta (Aβ) protein deposits in the retinal vessels of AD patients (Shi et al., 2020). This suggests that retinal vascular disease can objectively reflect vascular disease in the brain, and is a window to study cerebral vasculopathy (Newman, 2013).
Studies have revealed that changes in brain perfusion exist long before the clinical symptoms of AD, and may even predate Aβ protein accumulation or brain shrinkage (Hays et al., 2016). However, the changes of blood flow in the brain cannot be directly observed. Based on the similarity between the retina and the brain, it is possible to detect blood flow in the retina to reflect changes of blood flow in the brain. Optical coherence tomography angiography (OCTA) is a non-invasive, rapid and high-resolution fundus angiography technique, which can observe the structure and morphology of blood vessels at different levels of the retina in layers, and quantify the blood flow index and diseased blood flow area within a certain range (Boeckaert et al., 2012). OCTA can be used to collect information on blood vessel density (VD) and blood vessel morphology of the retina in the macular area. Studies have indicated that, compared with normal controls, the blood VD in the superficial and deep retina of the macular area in AD and MCI patients was significantly reduced, and the foveal avascular zone (FAZ) area was significantly enlarged, which is a sign of macular ischemia (Bulut et al., 2018; Jiang et al., 2018; Lahme et al., 2018; Zabel et al., 2019). In addition, the fractal dimension (FD) of the superficial vascular network was also significantly reduced in AD patients (Chua et al., 2020); FD reflects the complexity of retinal vascular branches and the density of the entire retinal vascular system. However, the changes in FD in MCI patients remain controversial. One study shows that FD in the superficial vascular network significantly reduces in MCI patients as compared to normal controls (Chua et al., 2020). But another case-control study shows a significant increase in the retinal FD in patients with MCI due to AD (Biscetti et al., 2021). There are also some changes in the choroid in AD and MCI patients. Compared with normal controls, the choroid was significantly thinner in AD patients (Trebbastoni et al., 2017; Salobrar-Garcia et al., 2020). However, the choroid thickness of MCI patients tends to become thinner, although there is no statistical significance (López-de-Eguileta et al., 2020).
Therefore, this study was to compare VD and blood perfusion density (PD) of macular retinal superficial capillary plexus (SCP) in AD patients, MCI patients and healthy controls by OCTA. The relationships among the retinal microvascular network and cognitive function, Aβ protein and phosphorylated Tau (p-Tau) protein were also investigated.
Diagnosis
The study participants were all from the Department of Neurology or Ophthalmology of Tongji Hospital in Shanghai. Data were collected from July 2020 to August 2022. The diagnoses of AD and MCI were based on the 2011 guidelines of the National Institute of Aging-Alzheimer's Association workgroups (NIA/AA) (McKhann et al., 2011) and the quantitative criteria proposed by Jak/Bondi in 2014 (Bondi et al., 2014).
Alzheimer's disease: (1) Insidious onset and slow progression of symptoms; (2) A clear history of cognitive deterioration; (3) Impaired ability to function in daily life; (4) Cognitive impairment was classified into the following categories when the medical history and neuropsychological assessment were reviewed: (a) Amnestic presentation; (b) Nonamnestic presentations: language disorders, visuospatial disorders, and executive dysfunction; (5) Exclusion of other causes of dementia, such as metabolic disorder and encephalopathy.
Mild cognitive impairment: (1) Cognitive concern reflecting a change in cognition reported by the patient or relatives or clinicians; (2) Mini-Mental State Examination (MMSE) scores: illiterate ≤17, elementary school ≤20, middle school and above ≤24; or Montreal Cognitive Assessment Basic (MoCA-B) scores: elementary school and below ≤19, secondary school ≤22, college ≤24; (3) Clinical Dementia Rating Scale (CDR) = 0.5, not enough to diagnose dementia; (4) Meeting any one of the following three criteria: (a) impairment of 2 metrics in the same cognitive domain (scores 1 standard deviation (SD) below the mean for age- and education-matched peers); (b) impairment of 1 test score in 2 or more of the four cognitive domains (scores 1 SD below the mean for age- and education-matched peers); (c) instrumental activities of daily living (IADL) score: more than one item with a score of 1 or more.
Neurological and ophthalmic examinations
All participants underwent neurological tests. A full set of cognitive scales were used to assess their cognitive status, including: MoCA-B, MMSE, IADL, Hamilton anxiety scale (HAMA), Hamilton depression scale (HAMD), Hopkins Verbal Learning Test (HVLT), Wechsler memory scale (WMS), Boston naming test (BNT), Verbal fluency test (VFT), Shape trails test (STT), and Rey-Osterrieth complex figure test (ROCFT). In addition, medical history, and results from laboratory and neuroimaging examinations were collected to aid the diagnosis. Furthermore, with the consent of some participants, CSF was collected and tested for Aβ and p-Tau proteins. CSF samples were collected from 23 patients, including 16 AD patients and 7 MCI patients.
A complete ophthalmic examination was administered, including the measurement of best-corrected visual acuity (BCVA), IOP, slit lamp examination and conventional OCTA of the macula. An international standard logarithmic visual acuity chart was used to measure the BCVA. IOP was measured three times with a hand-held tonometer, and the average value was taken. OCTA images and slit lamp examination were used to rule out other eye diseases.
Procedures for OCTA
The ZEISS Angioplex™ OCTA (Carl Zeiss Meditec, Dublin, CA) was used to scan the macula of all the participants. It has a scan rate of 68,000 A-scans per second, a central wavelength of 840 nm, and motion tracking to reduce motion artifacts. Images of 6 × 6 mm centered on the fovea were acquired. This study focuses on the retinal SCP, which was defined as the area from the internal limiting membrane (ILM) to the inner plexiform layer (IPL). We subdivided the macula into a 1 × 1 mm foveal subregion, a 3 × 3 mm inner ring and a 6 × 6 mm outer ring. Meanwhile, the inner ring and outer ring were divided into superior, inferior, nasal and temporal subregions (Figure 1). VD was defined as the ratio of the total length of blood vessels in the region to the area of the region, whereas PD was defined as the ratio of the area covered by blood vessels in the region to the area of the region. The built-in software automatically calculates the area of the macular foveal avascular zone (FAZ), VD and PD.
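To make the two density definitions concrete, the short R sketch below computes PD and VD for a square scan region from a binary vessel mask and its one-pixel-wide skeleton. The inputs (vessel_mask, skeleton) and the pixel size are illustrative assumptions; in the study these values are produced automatically by the instrument's built-in software.

# vessel_mask : logical matrix, TRUE where a pixel is classified as vessel (assumed input)
# skeleton    : logical matrix, one-pixel-wide centreline of the vessel mask (assumed input)

pixel_mm <- 6 / nrow(vessel_mask)                                   # e.g. a 6 x 6 mm scan
region_area_mm2 <- nrow(vessel_mask) * ncol(vessel_mask) * pixel_mm^2

# PD: area covered by vessels divided by the area of the region (unitless)
pd <- sum(vessel_mask) * pixel_mm^2 / region_area_mm2

# VD: total vessel length divided by the area of the region (1/mm)
vd <- sum(skeleton) * pixel_mm / region_area_mm2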
All the examinations are performed by the same skilled clinicians. The clinician input the patient's name, gender and date of birth, and informed the patient of the precautions for examination to relieve the patient's nervousness and clear images were captured. Participants were seated with their mandible placed on the mandibular support and their forehead pressed against the front support. The position was adjusted so that the patient's lateral canthus was at the same height as the horizontal line. Good fixation is required during the scanning. After each scan, the operator determines whether rescanning is needed depending on the image quality. The high-quality images were captured and saved in the computer.
Statistical analysis
Categorical data are presented as the number of cases, and the Chi-square test was used for comparisons between groups. Continuous data are expressed as mean ± standard deviation, and the Shapiro-Wilk test was used to test the normality of these data. If the data were normally distributed, one-way analysis of variance (ANOVA) was used for comparisons among groups. If the data were not normally distributed, the Kruskal-Wallis H test was used for comparisons between groups. Multiple logistic regression models for each of the OCTA parameters, with adjustments for confounding factors, were used to compare AD, MCI and HC subjects. Partial correlation analysis was used to evaluate the correlations of OCTA parameters with neuropsychological assessment scores and with Aβ and p-Tau protein. A value of p < 0.05 was considered statistically significant. Statistical analysis was performed using the Statistical Package for Social Sciences (version 20.0, SPSS Inc., Chicago, IL, United States).
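For readers who prefer code to prose, the analyses described above can be sketched in R as follows (the study itself used SPSS 20.0). The data frame octa and its column names are hypothetical, and complete cases are assumed for the partial correlation step.

# octa: assumed data frame with one row per participant
#   group                     - factor with levels "HC", "MCI", "AD"
#   vd_inner, pd_inner, ...   - OCTA parameters
#   age, sex, smoking, alcohol, htn, lipid, bcva, iop - covariates
#   moca                      - MoCA-B score

# Group comparison of one parameter: one-way ANOVA, or Kruskal-Wallis if normality fails
summary(aov(vd_inner ~ group, data = octa))
kruskal.test(vd_inner ~ group, data = octa)

# Adjusted AD vs. HC comparison for one OCTA parameter (binary logistic regression)
ad_hc <- droplevels(subset(octa, group %in% c("AD", "HC")))
fit <- glm(I(group == "AD") ~ vd_inner + age + sex + smoking + alcohol +
             htn + lipid + bcva + iop,
           family = binomial, data = ad_hc)
exp(cbind(OR = coef(fit), confint.default(fit)))   # odds ratios with Wald-type 95% CIs

# Partial correlation between an OCTA parameter and MoCA-B, adjusting for sex and age,
# computed from the residuals of linear models on the covariates (complete cases assumed)
r1 <- resid(lm(vd_inner ~ age + sex, data = octa))
r2 <- resid(lm(moca ~ age + sex, data = octa))
cor.test(r1, r2)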
Patient characteristics
A total of 139 subjects were included in this study, including 43 AD patients, 62 MCI patients and 34 HC. The eye with high-quality image was selected from each participant. Of AD patients, there were 21 (48.8%) males, and 58.1% were older than 70 years. Of MCI patients, there were 25 (40.3%) males, and 53.2% were older than 70 years. Of healthy controls, there were 18 (52.9%) males, and 50% were older than 70 years. The demographic characteristics of AD patients, MCI patients, and HC are shown in Table 1. There were no significant differences in the age, sex, history of smoking, history of alcohol intake, hypertension, hyperlipidemia, years of education, IOP and FAZ among AD, MCI and HC groups (p > 0.05). The BCVA was 0.63 ± 0.26 in the AD group, 0.74 ± 0.22 in the MCI group, and 0.76 ± 0.23 in the HC group (p = 0.037).
The scores of neuropsychological assessments in each group are shown in Table 2. There was no significant difference in the HAMA score among three groups (p = 0.089). There were significant differences in the MMSE score (p < 0.001), MoCA-B score (p < 0.001), HAMD score (p = 0.009), HVLT score (p < 0.001), WMS score
Perfusion density
Perfusion density showed the same pattern of results as VD. Compared with the HC group (0.338 ± 0.077), the PD (0.29 ± 0.07) in the whole circle region was significantly decreased in the AD group (p < 0.05). The whole circle was divided into three regions. Compared with the HC group, the PD in the inner ring (AD: 0.24 ± 0.08, HC: 0.302 ± 0.093, p < 0.05) and outer ring (AD: 0.31 ± 0.07, HC: 0.358 ± 0.076, p < 0.05) regions was significantly decreased in the AD group. Then, the inner and outer rings were further explored. In the AD group, the PD was also significantly reduced in the nasal (AD: 0.25 ± 0.09, HC: 0.309 ± 0.102, p = 0.007) and inferior (AD: 0.22 ± 0.1, HC: 0.305 ± 0.097, p = 0.001) regions of the inner ring, and in the nasal (AD: 0.37 ± 0.08, HC: 0.421 ± 0.078, p = 0.006), superior (AD: 0.31 ± 0.08, HC: 0.364 ± 0.080, p = 0.007) and inferior (AD: 0.29 ± 0.1, HC: 0.351 ± 0.087, p = 0.017) regions of the outer ring as compared to the HC group (Table 4 and Figure 5). There were no significant differences in other areas. There was no significant difference between the MCI group and the HC group, or between the AD group and the MCI group (p > 0.05) (Table 4 and Figure 5). After adjusting for confounding factors such as sex, age, history of smoking, history of alcohol intake, hypertension, hyperlipidemia, BCVA, and IOP, multiple logistic regression analysis showed that significant differences remained in the AD group. Compared with the HC group, VD decreased significantly in the nasal (OR = 0.001, 95% CI: 3.62 × 10⁻⁶, 0.406) and inferior (OR = 1.89 × 10⁻⁴, 95% CI: 6.48 × 10⁻⁷, 0.055) regions of the inner ring and in the nasal (OR = 4.41 × 10⁻⁴, 95% CI: 3.50 × 10⁻⁷, 0.554), superior (OR = 0.001, 95% CI: 8.15 × 10⁻⁷, 0.423) and inferior (OR = 0.001, 95% CI: 2.76 × 10⁻⁶, 0.450) regions of the outer ring in the AD group (Figures 6, 7). Compared with the HC group, VD also decreased significantly in the superior and inferior regions of the inner ring and in the temporal and superior regions of the outer ring in the MCI group.
FIGURE 4
Forest plot of vessel density comparison among three groups. Multiple logistic regression was used to assess the association between retinal vessel density and clinical diagnosis, adjusted for confounders of sex, age, history of smoking, history of alcohol intake, hypertension, hyperlipidemia, BCVA, and IOP. (A) Compared with HC group, vessel density decreased significantly in the nasal and inferior regions of the inner ring and in the superior and inferior regions of the outer ring in the AD group. (B) Compared with HC group, vessel density decreased significantly in the superior and inferior regions of the inner ring and in the temporal and superior regions of the outer ring in the MCI group. VD, vessel density; AD, Alzheimer's disease; MCI, mild cognitive impairment; HC, healthy controls; CI, confidence interval.
FIGURE 2
Comparison of vessel density in the macula among the AD, MCI, and HC groups. Blue regions indicate that the vessel density of the former decreases significantly in these areas compared to the latter. (A) In the AD group, the vessel density significantly reduces in the nasal and inferior regions of inner ring, and in the nasal, superior and inferior regions of outer ring as compared to HC group. (B,C) There is no significant difference between MCI group and HC group, and between AD group and MCI group. *Significant at p < 0.05. AD, Alzheimer's disease; MCI, mild cognitive impairment; HC, healthy controls.
FIGURE 3
Multiple logistic regression was used to assess the association between retinal vessel density and clinical diagnosis, adjusted for confounders of sex, age, history of smoking, history of alcohol intake, hypertension, hyperlipidemia, BCVA, and IOP. Blue regions indicate significant decreases compared with the HC group. (A) Compared with HC group, vessel density decreases significantly in the nasal and inferior regions of the inner ring and in the superior and inferior regions of the outer ring in the AD group. (B) Compared with HC group, vessel density decreases significantly in the superior and inferior regions of the inner ring and in the temporal and superior regions of the outer ring in the MCI group. *Significant at p < 0.05. AD, Alzheimer's disease; MCI, mild cognitive impairment; HC, healthy controls.
Correlation analysis
The correlations between OCTA parameters and cognitive function were further evaluated using partial correlation analysis. After adjusting for sex and age, VD and PD of the inner ring region were correlated with MoCA-B score, MMSE score, STT-A score and ROCFT score. VD of the outer ring region was correlated with MoCA-B score and STT-A score. PD of the outer ring region was correlated with MoCA-B score, STT-A score and ROCFT-recall. VD and PD of the fovea region were correlated with MoCA-B score and ROCFT-recall (Table 5).
The CSF was collected for the detection of Aβ and p-Tau protein. The correlation between OCTA parameters and Aβ or p-Tau protein was further evaluated. After adjusting for sex and age, the results showed no correlation between them (Table 6).
Discussion
In this cross-sectional study, changes in blood vessels and blood flow of the SCP were investigated in patients with AD and MCI. VD and PD of the SCP were significantly reduced in patients with AD and MCI compared with healthy controls. This suggests that the retinal microvascular system is damaged in patients with AD and MCI. These blood flow indicators also correlated with cognitive
FIGURE 5
Comparison of perfusion density in the macula among the AD, MCI, and HC groups. Blue regions indicate that the perfusion density of the former decreases significantly in these areas compared to the latter. (A) In the AD group, the perfusion density significantly reduces in the nasal and inferior regions of inner ring, and in the nasal, superior and inferior regions of outer ring as compared to HC group. (B,C) There is no significant difference between MCI group and HC group, and between AD group and MCI group. *Significant at p < 0.05. AD, Alzheimer's disease; MCI, mild cognitive impairment; HC, healthy controls.
FIGURE 7
Forest plot of perfusion density comparison among three groups. Multiple logistic regression was used to assess the association between retinal perfusion density and clinical diagnosis, adjusted for confounders of sex, age, history of smoking, history of alcohol intake, hypertension, hyperlipidemia, BCVA, and IOP. (A) Compared with HC group, perfusion density decreased significantly in the nasal and inferior regions of the inner ring and in the nasal, superior and inferior regions of the outer ring in the AD group. (B) Compared with HC group, perfusion density decreased significantly in the superior and inferior regions of the inner ring and in the temporal and superior regions of the outer ring in the MCI group. PD, perfusion density; AD, Alzheimer's disease; MCI, mild cognitive impairment; HC, healthy controls; CI, confidence interval.
function. However, no correlation was revealed between these blood flow indicators and Aβ or p-Tau protein.
The mechanism underlying the decreased retinal blood flow in AD patients remains unclear. It has been proposed that Aβ protein may be deposited in the retina, causing damage to blood vessels. Autopsy results and AD animal experiments have shown that AD is accompanied by the deposition of retinal Aβ protein (Grimaldi et al., 2018;Chiquita et al., 2019;Shi et al., 2020), and the retina may have the deposition of Aβ protein before the Aβ accumulation in the brain (Koronyo-Hamaoui et al., 2011;Mirzaei et al., 2019). Aβ protein deposits in the vascular walls, resulting in decreased blood flow, hypoxia and nutrient deficiency. The hypoxic retina promotes angiogenesis by producing vascular endothelial growth factors (VEGF) to ensure essential oxygen and nutrient supplies.
However, this process is stopped by the Aβ protein. VEGF is mechanically blocked by the diffuse accumulation of Aβ plaques, and Aβ protein competitively binds to VEGF receptor 2. Therefore, VEGF cannot bind to its corresponding endothelial receptors to restore retinal blood supply to normal levels (Bulut et al., 2018; Yoon et al., 2019). A recent study (Shi et al., 2020) has also suggested that Aβ protein accumulation in retinal blood vessels leads to decreased expression of platelet-derived growth factor receptor-β and loss of pericytes (vascular cells that regulate blood flow in capillaries), coupled with decreased expression of LDL receptor-related protein-1 (LRP-1), which leads to an impaired blood-retinal barrier. The ability to clear Aβ protein is reduced, causing vascular damage. This may be the reason why retinal VD and PD are reduced in patients with AD.
FIGURE 6
Multiple logistic regression was used to assess the association between retinal perfusion density and clinical diagnosis, adjusted for confounders of sex, age, history of alcohol intake, hypertension, hyperlipidemia, BCVA, and IOP. Blue regions indicate significant decreases compared with the HC group.
(A) Compared with HC group, perfusion density decreases significantly in the nasal and inferior regions of the inner ring and in the nasal, superior and inferior regions of the outer ring in the AD group. (B) Compared with HC group, perfusion density decreases significantly in the superior and inferior regions of the inner ring and in the temporal and superior regions of the outer ring in the MCI group. *Significant at p < 0.05. AD, Alzheimer's disease; MCI, mild cognitive impairment; HC, healthy controls.
Studies have indicated that the distribution of Aβ protein is not uniform in the retina, but analyzing the changes in the whole retina may ignore the changes in local areas (Lad et al., 2018; Chan et al., 2019). In the present study, the densities of different retinal regions were calculated. After adjusting for confounding factors, the VD and PD in the inner ring (especially in the nasal and inferior regions) and outer ring (especially in the superior and inferior regions) significantly decreased in the AD group compared with the HC group. VD and PD in the inner ring (especially in the superior and inferior regions) and outer ring (especially in the superior and temporal regions) significantly decreased in the MCI group compared with the HC group. Wu et al. (2020) also conducted a similar study. They also divided the macula into many areas, but their results showed no significant difference in the superficial retinal vascular plexus between the AD and MCI groups and the normal control group, while the trend of blood flow decline was more obvious in the deep retinal capillary plexus (Wu et al., 2020). But that study did not adjust for confounding factors, and the two studies used different OCTA cameras. This may account for the differences between the two studies. Lahme et al. (2018) also divided the blood vessels of the macular retina into superficial and deep layers for analysis, and their results were consistent with our results regarding AD. The blood vessel density in the superficial layer of the macular retina in AD patients was lower than that in the control group. Another study also supports our conclusion. Changes in retinal small vessels may reflect changes in brain small vessels in Alzheimer's disease. These parameters may be used as alternative non-invasive biomarkers for AD diagnosis. We speculate that the localized changes may be caused by the thinning of the ganglion cell layer in AD patients, which changes the retinal blood flow in the corresponding area. The SCP provides nutrients and oxygen to the nerve fiber and ganglion cell layers of the retina (Jiang et al., 2018). Yoon et al. (2019) investigated the ganglion cell and inner plexiform layer (GC-IPL) thickness in AD patients, and results showed that GC-IPL thickness was significantly reduced in AD patients, and the decreased areas were concentrated in the superonasal, inferior and inferonasal regions around the macula. This is roughly consistent with the decreased areas of retinal VD and PD in AD patients in this study. Our study also showed that retinal VD and PD decreased in patients with MCI. This indicates that MCI patients have developed vascular lesions before the onset of clinical symptoms of AD. Therefore, the retinal microvascular network may reflect the early signs of microvascular injury in MCI and AD patients. There was no significant difference in the FAZ area between the groups in this study, which was inconsistent with previous findings. Bulut et al. (2018) found that, as compared to healthy controls, the FAZ area in AD patients increased, and a significant negative correlation was noted between FAZ area and MMSE score, suggesting that the lower the MMSE score, the larger the FAZ area is. Similar results were reported by O'Bryhim et al. (2018).
In his study, the cognitively healthy subjects were divided into two groups based on the biomarkers, and results showed a significant difference in the FAZ area between the two groups, with patients in the biomarker-positive group having a larger FAZ area (O'Bryhim et al., 2018), but there was no difference in the average annual change of FAZ area between the two groups during the 3-year follow-up period (O'Bryhim et al., 2021). Another study showed that the FAZ area remained unchanged (van de Kreeke et al., 2020). But FAZ size varies greatly in healthy people and can be affected by a number of factors (Laatikainen and Larinkari, 1977; Wagner-Schuman et al., 2011; Sampson et al., 2017). Therefore, whether FAZ can be used as a noninvasive retinal marker for AD remains controversial. More studies with larger sample sizes are needed to confirm the association between FAZ area and AD pathology.
In addition, we examined the correlations between retinal blood flow parameters and cognitive function. MoCA-B and MMSE are usually employed to measure overall cognitive function. HVLT and WMS are used to test the memory function of patients. HVLT-immediate is used to reflect immediate memory, while HVLT-delay is used to reflect delayed memory. BNT and VFT reflect language function. STT tests executive function. ROCFT reflects visuospatial function and memory function. After adjusting for age and sex, overall cognitive function, executive function and visuospatial function were correlated with VD and PD of the retinal SCP. Another study also investigated the correlation between macular retinal blood flow and cognitive function, but no correlation was observed (Yan et al., 2021). The discrepancy between the two studies may be ascribed to the differences in diagnostic criteria, statistical methods and OCTA machines. The frontal lobe, temporal lobe and parietal lobe constitute the attention, memory and executive network of the brain (Bero et al., 2011). The regional decrease of cerebral blood flow in AD patients is also mainly manifested in the frontal lobe, temporal lobe, parietal lobe and medial temporal lobe, suggesting that retinal blood flow may reflect changes in cerebral blood flow. However, this study had a small sample size, and prospective cohort studies with large sample sizes are still needed to further clarify the correlation between cognitive function and macular blood flow density.
In this study, the CSF was collected from 23 patients and the Aβ protein and p-Tau protein were detected. The correlations of retinal OCTA parameters with CSF Aβ protein and p-Tau protein were further explored. Our results showed no correlation of retinal VD and PD with Aβ protein and p-Tau protein. In a study of Lahme et al. (2018), results also showed no correlation between retinal SCP blood flow density and Aβ protein, p-Tau protein. Whether this implies that AD vascular lesions are primary rather than secondary to Aβ protein deposition and tau protein phosphorylation is still unclear. Therefore, prospective cohort studies with large sample size are needed to further clarify the relationship between retinal blood flow density and pathological proteins.
There were several limitations in the present study.
(1) The sample size was small, which may explain the absence of significant differences in some parameters; in future studies, we will recruit more patients to expand the sample size. (2) OCTA requires a long acquisition time, and some patients, especially those with severe AD, cannot maintain fixation for that long, so patients who were unable to cooperate were excluded, which is another reason for the small number of patients. (3) The patients were not followed up; this was a cross-sectional study, and the dynamic changes of retinal VD and PD were not investigated. In a follow-up study, we will track the participants to monitor dynamic changes in the retinal vessels.
In conclusion, retinal SCP microvascular network density is reduced in patients with AD and MCI compared to healthy controls, suggesting retinal microvascular dysfunction in these patients. Moreover, retinal VD and PD are correlated with several cognitive domains and may therefore be potential non-invasive biomarkers for AD and MCI. Changes in retinal microvascular network density may offer valuable insight into the brain in AD.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Clinical Research Ethics Committee of Tongji Hospital, Shanghai. The patients/participants provided their written informed consent to participate in this study.
Author contributions
XM and ZX were responsible for collecting patients' general information, cognitive information and ophthalmic examination information and writing this article. ZT and HW were responsible for statistical analysis of the data. LZ, YL, and YB were responsible for revising the article. All authors contributed to the article and approved the submitted version. | 2023-04-11T13:11:56.140Z | 2023-04-11T00:00:00.000 | {
"year": 2023,
"sha1": "01756baf23d5ba92c8ce7bc7f1e5a2887bbb218b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "01756baf23d5ba92c8ce7bc7f1e5a2887bbb218b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
16504305 | pes2o/s2orc | v3-fos-license | Factors affecting receipt of chemotherapy in women with breast cancer.
Aims: To review literature describing factors associated with receipt of chemotherapy for breast cancer, to better understand what factors are most relevant to women’s health and whether health disparities are apparent, and to assess how these factors might affect observational studies and outcomes research. Patterns of care for metastatic breast cancer, for which no standard-of-care exists, were of particular interest. Methods: Relevant studies written in English, Italian, French, or Spanish, published in 2000 or later, were identified through MEDLINE and reviewed. Review articles and clinical trials were excluded; all observational studies and surveys were considered. Articles were reviewed for any discussion of patient characteristics, hospital/physician/insurance characteristics, psychosocial characteristics, and clinical characteristics affecting receipt of chemotherapy by breast cancer patients. Results: In general, factors associated with increased likelihood of receiving chemotherapy included younger age, being Caucasian, having good general health and few co-morbidities, having more severe clinical disease, having responded well to previous treatment, and having breast cancer that is estrogen- or progesterone-receptor-negative. Many of the clinical factors found to increase the likelihood of receiving chemotherapy were consistent with current oncology guidelines. Of the relevant 19 studies identified, only six (32%) reported data specific to metastatic cancer; most studies aggregated women with stage I–IV for purposes of analysis. Conclusion: Studies of patterns of care in breast cancer treatment can help identify challenges in health care provided to particular subgroups of women and can aid researchers in designing studies that account for such factors in clinical and outcomes research. Although scarce, studies evaluating only women with metastatic breast cancer indicate that factors affecting decisions related to receipt of chemotherapy are similar across stage for this disease.
Introduction
Because breast cancer is the most common cancer affecting American women, significant research effort and resources have been dedicated to its prevention, control, and treatment. Since 2003, breast cancer research has accounted for the highest proportion of appropriated National Cancer Institute (NCI) funds for an individual cancer. 1 Despite these efforts, however, breast cancer remains a poorly understood disease and a significant cause of morbidity and mortality among American women. For the estimated 192,000 American women who will develop breast cancer in 2009, 2 factors affecting their prognosis and survival are of utmost importance. Therefore, the need continues for breast cancer research and for public health education and outreach programs.
Observational studies play an important role in research on treatment effectiveness and survival, and can help optimize the use of therapeutics. Compared to clinical trials, they are able to quantify rarer events over longer periods of time and can provide a "real-world" picture of drug efficacy and safety in actual usage outside the trial setting. However, observational studies, like other research formats, are susceptible to potential biases from misclassification, differential selection, confounding, and incomplete adherence. Studies examining patterns of care in cancer treatment therefore provide valuable information for researchers that can affect the outcome and interpretation of results.
For breast cancer, late-stage and metastatic breast cancer treatment patterns are particularly difficult to study. Although only 6% of patients have metastatic/stage IV disease at diagnosis, 3 10% to 40% of women with early-stage breast cancer will eventually develop distant disease and metastases. 4,5 For these women, there is no standard-of-care guideline, and complex treatment decisions are based on individual patient and tumor characteristics. The goal is generally to manage symptoms and prolong life, rather than to cure the disease. Recommended treatment for incident or recurrent stage IV breast cancer may include surgery, hormone therapy, aromatase inhibitors, ovarian ablation or suppression, therapeutic or palliative chemotherapy, or supportive care, depending on the patient's prior treatments (radiation, surgery, antiestrogens, previous chemotherapy), tumor hormone receptor status, physical health, and menopausal status. 6 For many of these, decisions are made primarily on the basis of clinical status, as is called for in typical oncology guidelines. Patients' preferences for treatment have become increasingly important in clinical decision making, and factors such as trade-offs between quantity and quality of life, and patient hopes, expectations, values, and priorities, are weighed. 7 With each progressive step of planning a patient's care, numerous selection factors are introduced that can affect receipt of treatment.
In addition to patient preferences, some factors beyond a patient's control can dictate whether she is treated with chemotherapy for late-stage disease. Health disparities in cancer outcomes and treatment have been well documented, and addressing the causes and establishing measures to mitigate these differences have become increasingly high priorities of government health care programs and policies. [8][9][10] Cancer health disparities, frequently marked by age, race/ethnicity, income, educational attainment, or geographic location, 9 are reflected in differences in cancer incidence, mortality, and survival rates across groups and are thought to be due to disparities in access to health care, which affects screening rates, treatment resources, and the quality of treatment given. 11 Therefore, the decision to treat with chemotherapy for breast cancer may also be influenced by a woman's inability to receive treatment if desired, as well as by lack of knowledge about the treatment options available to her.
Factors affecting the receipt of chemotherapy in women with breast cancer have been well studied, but no literature currently exists that compiles factors associated with patient characteristics, hospital/physician/insurance characteristics, psychosocial characteristics, and clinical characteristics in a single source. For example, it is commonly understood that older women are generally less likely to receive chemotherapy due to their shorter life expectancies, generally poorer health, and a less favorable balance of risk and benefit; however, other factors may also influence receipt of chemotherapy, even in younger women, and need to be accounted for in observational studies and outcomes research involving breast cancer. In addition, quantifiable information on specific influences of palliative treatment of metastatic breast cancer is particularly scarce. Therefore, we performed a review of the literature and information published since 2000 regarding the factors affecting decision making for the receipt of chemotherapy in patients with breast cancer, particularly metastatic cancer, taking into account patient characteristics, hospital/physician/insurance characteristics, psychosocial characteristics, and clinical characteristics.
Literature search and review
A MEDLINE search was performed using the following query: breast cancer AND (recurrent OR metastatic OR advanced stage OR advanced disease OR stage IV OR stage I OR stage II OR stage III OR early stage OR early disease) AND (chemotherapy OR treatment OR second line OR third line) AND (practice patterns OR health services OR decision-making OR predictors OR disparity OR correlates OR quality of life) NOT ("review" [Publication Type]) NOT ("clinical trial" [Publication Type]) NOT ("case reports" [Publication Type]). Searches were limited to the year 2000 or later. Review articles, case reports, and clinical trials were excluded; observational, clinical, and population-based studies were considered, as were survey data of physicians and oncologists. General clinical reviews that provided treatment guidelines but no original data were also excluded. This initial search returned 491 studies discussing chemotherapy and breast cancer, of which 46 were deemed relevant for further review based on criteria set a priori by the contributing authors and on whether they contained information related to the four categories of interest (patient characteristics, hospital/physician/insurance characteristics, psychosocial characteristics, and clinical characteristics). The bibliographies of these studies, as well as those of several recent review articles, were also reviewed, and an additional 35 articles of potential relevance were identified. All articles were reviewed by one or more authors; studies were excluded from our discussion if they specifically excluded cases of metastatic breast cancer, if they did not include information regarding chemotherapy as a treatment, or if they contained only data on the differences in response to (not receipt of) chemotherapy. Thus, the articles identified spanned receipt of chemotherapy treatment in stage I-IV breast cancer patients.
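For illustration only, a search of this kind can be reproduced programmatically. The short Python sketch below uses Biopython's Entrez module, which is an assumption of this example rather than the tool used by the authors; the email address is a placeholder, the query string is an abbreviated version of the one above, and the date restriction is expressed as a publication-date filter.

from Bio import Entrez  # requires the biopython package

Entrez.email = "reviewer@example.org"  # placeholder; NCBI asks for a contact address

# Abbreviated version of the review's query, limited to publications from 2000 onward.
query = (
    'breast cancer AND (recurrent OR metastatic OR "advanced stage" OR "stage IV") '
    'AND (chemotherapy OR treatment) '
    'AND ("practice patterns" OR "decision making" OR predictors OR disparity) '
    'NOT review[Publication Type] NOT "clinical trial"[Publication Type] '
    'NOT "case reports"[Publication Type] '
    'AND ("2000"[Date - Publication] : "3000"[Date - Publication])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print("Records found:", record["Count"])
print("First PubMed IDs:", record["IdList"][:10])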
Using the above criteria, of the 81 articles reviewed, 19 were identified as pertinent to treatment decision-making in breast cancer (either specific to or including stage IV) and chemotherapy, [12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30] and 62 were excluded. Of the 19 studies, only six provided data specific to metastatic (stage IV) disease. 13,18,19,[21][22][23] One of these studies 22 reported tabular data for "metastatic" breast cancer, but discussed the cases as "advanced" breast cancer in the text. For the purposes of this paper, we considered the data to be specific to metastatic disease. Most of the studies were conducted in populations in the United States 12,15,17,18,[23][24][25][26][27][28][29] and the United Kingdom. 16,20,22,30 Studies from other countries included one from France 19 and one from Australia. 13 The factors studied in relation to receipt of chemotherapy were roughly divided into four primary categories: patient characteristics, hospital/physician/insurance characteristics, psychosocial characteristics, and clinical characteristics. Some studies spanned more than one category. Of the 19 studies identified as relevant, 15 mentioned patient characteristics, including demographic characteristics such as age, race, marital status, socioeconomic status (SES), and education. Four mentioned hospital/physician/insurance characteristics such as insurance status, physician type, and type of medical facility, while five mentioned psychosocial characteristics, including patient anxiety and depression. Finally, seven studies mentioned clinical characteristics such as tumor markers, lymph-node involvement, type of previous treatment, response to previous treatment, patient general health, and the presence of co-morbidities. Our findings for each factor are summarized individually in the following sections.
Patient characteristics
Patient characteristics evaluated in the studies identified included age, race, SES/income, education, and language barriers (Table 1). Age was the most frequently considered characteristic, being discussed in 10 studies. 14,15,[17][18][19][20]22,26,28,30 Race was considered in five studies, 12,18,25,26,29 SES/income in three, 12,16,24 education in four, 12,22,26,28 and language barriers in two. 12,22
Age
All of the studies identified found that older women received chemotherapy less commonly than did younger women. 14,15,[17][18][19][20]26,28,30 Of these studies, seven provided numerical data to support this conclusion (Table 1). 15,[17][18][19][20]28,30 Five of the studies demonstrated a statistically significant difference in chemotherapy use between older and younger women, [17][18][19][20]28 although only two of these provided data specific to metastatic breast cancer. 18,19 In a prospective survey of qualified specialists in France, the authors noted that, of the women receiving chemotherapy, 82% in the younger age group received the standard dose and cycle length, but only 62% of those in the older age group received it (P < 0.01). 19 Odds ratios (ORs) were presented from the Surveillance Epidemiology and End Results (SEER)-Medicare-linked database of women with breast cancer diagnosed in 1991 and 1992, where the odds of receiving chemotherapy among US women with Medicare claims decreased with increasing age as 0.60 (95% confidence interval [CI]: 0.51-0.70), 0.33 (95% CI: 0.27-0.40), and 0.11 (95% CI: 0.0-0.14) for women aged 70-74, 75-79, and 80 years and older, respectively, relative to women aged 65-69 years. Among women with stage IV disease, the proportion with Medicare claims for chemotherapy decreased from 39% among women between the ages of 65 and 69 years to only 10% among women 80 years of age or older. 18 In a survey administered to medical and clinical oncologists in the UK, asking which factors were important in deciding whether to recommend chemotherapy to patients with metastatic breast cancer, patient age was considered to be "quite important" or "very important" by 58.6% of oncologists surveyed. 22 Several studies offered reasons to explain why older women received chemotherapy less often. Commonly cited reasons included little clinical evidence to prove the benefit of chemotherapy in older women, 30 the shorter life expectancies of older women and the reduced cost/benefit, [28][29][30][31] and having fewer incentives (eg, dependents) to invest in therapies that may extend their lives. 28 The lower proportion of older women with breast cancer receiving chemotherapy may also reflect an increased number of co-morbidities and worse general health among these women. 19 For example, among British oncologists, "frailty" and "concurrent medical conditions" were deemed "quite important" or "very important" to 93.1% and 82.8% of surveyed clinicians, respectively, compared to the 58.6% of oncologists who considered age to be of importance. 22 Of the 10 studies in this review citing the impact of age on chemotherapy use, only two adjusted for co-morbidities, 17,18 one of which 18 provided data specific to metastatic breast cancer. In both studies, multivariate analyses revealed a stronger inverse association of increasing age and chemotherapy use than that of co-morbidity and chemotherapy use.
The higher prevalence of hormone receptor (estrogen- or progesterone-receptor [ER/PR]) positive tumors among postmenopausal women than premenopausal women, 32 and therefore more frequent use of hormone therapy, also contributes to this observation. It has been suggested that elderly patients have cancers with lower proliferative indices, and that they will derive less benefit from standard chemotherapy; 33 however, the elderly are frequently underrepresented in cancer clinical trials. Although elderly (65 years of age or older) patients make up 63% of cancer patients in the US, they represent only 25% of the cancer clinical trial participants. 34 Whether this deficit is due to fear and misunderstanding of older patients, physician bias against enrolling older patients, or overly stringent eligibility criteria that limit the number of elderly patients, their underrepresentation makes it difficult to assess the risks and benefits of cancer chemotherapeutic regimens and may partially explain the inverse relationship between age and chemotherapy use.
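To make the reported odds ratios concrete, the following Python sketch computes an odds ratio and its 95% confidence interval from a 2x2 table of chemotherapy receipt by age group. The counts are invented for illustration and are not the SEER-Medicare figures cited above; the Woolf log-odds method used here is a standard textbook approach, not necessarily the method used in the cited study.

import math

# Hypothetical counts: chemotherapy received vs. not received in each age group.
older   = {"chemo": 30, "no_chemo": 270}   # e.g., women aged 70-74
younger = {"chemo": 90, "no_chemo": 210}   # reference group, e.g., women aged 65-69

a, b = older["chemo"], older["no_chemo"]
c, d = younger["chemo"], younger["no_chemo"]

# Odds ratio for the older group relative to the reference group.
odds_ratio = (a * d) / (b * c)

# 95% CI on the log-odds scale (Woolf method).
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI: {lo:.2f}-{hi:.2f})")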
Race
Five studies, all conducted in the United States, considered race to be a factor in predicting receipt of chemotherapy in breast cancer patients. 12,18,25,26,29 Only one study presented data specific to metastatic breast cancer, 18 with the remainder considering all cases (stages I through IV) in aggregate. In Du and Goodwin, 18 the proportions of black and white women with stage IV breast cancer who received chemotherapy were similar (26.8% versus 26.5%, respectively), although fewer women whose race was classified as "other" received chemotherapy (18.2%). Two other studies (not specific to stage IV disease) reported very small differences in the percentages of Caucasian, African-American, and Hispanic women with breast cancer treated with chemotherapy; both noted that higher proportions of Caucasian women (81.3% 25 and 67.0% 29 ) than African-American women (80.0% 25 and 46.5% 29 ) or Hispanic women (52.4% 29 ) received chemotherapy, although these differences either were not statistically significant 25 or statistical significance was not evaluated. 29 A qualitative study interviewed women of different races and indicated that African-American women were the least likely to receive adjuvant therapies, including chemotherapy, and it was suggested that economic-related issues and insufficient insurance coverage might be the underlying reasons. 12 Two studies reported that African-American women were more likely to believe in alternative medicine or religious intervention in place of Western treatments. 12,26 Only one study reported on Hispanic women. Similar to the age effect, the disparities in cancer care among ethnic minorities have been well documented in the literature. 9,12,13,15 Treatment for ethnic minority groups may also be influenced by other factors that affect these groups, including socioeconomic issues, cultural beliefs, language barriers, challenges in access to care, and different rates of co-morbidities, 35 making it difficult to determine the optimal method to address this disparity.
SES/Income
Three studies discussed SES or income in relation to chemotherapy treatment - two in the US [12][13][14][15][16][17][18][19][20][21][22][23][24] and one in the UK 16 - although none was specific to metastatic breast cancer. Of the two studies that provided numerical data, neither observed a significant difference in the proportions of patients receiving chemotherapy by income or SES, [16][17][18][19][20][21][22][23][24] although one suggested that their observation that uninsured, lower income women were less likely to receive chemotherapy may have reached statistical significance with a larger sample size. 24 A qualitative study of community health professionals working with diverse populations reported that individuals with lower SES may lack awareness regarding the disease, resources, and treatments, and are not as proactive about seeking medical care. 12
Education
Four studies discussed education level in relation to chemotherapy treatment, 12,22,26,28 as well as other adjuvant therapies such as radiation and hormone therapy; only two provided quantitative data related to chemotherapy, [26][27][28] and only one was specific to metastatic breast cancer. 22 Peele and colleagues stated that educated women were significantly more likely to choose treatment with adjuvant therapy, including chemotherapy, hormone therapy, and combination therapy, although the study did not distinguish between cases based on disease severity and treatment. 28 Mitchell and colleagues reported that having less education was significantly inversely correlated (P < 0.0001) with a belief in "religious intervention in place of treatment"; 26 it is presumed that the treatment likely included chemotherapy due to the inclusion of women with advanced-stage breast cancer in the study population. Qualitatively, Ashing-Giwa and colleagues, when discussing various adjuvant therapies, including chemotherapy, reported that less-educated women in the United States were less informed about breast cancer itself, as well as resources and treatments, and were less proactive in seeking medical care. 12 In the UK, 13.8% of clinicians ranked education as an important factor influencing their recommendation for palliative chemotherapy to women with metastatic breast cancer. 22
Language barriers
Only two studies, both qualitative, discussed the effect of language barriers on the receipt of chemotherapy. 12,22 Ashing-Giwa and colleagues reported that language barriers prevented half of the Latinas and some of the monolingual Asian-Americans in their study from following and meeting the requirements for treatment-related financial assistance, 12 while Grunfeld and colleagues reported that only 20.7% of surveyed UK clinicians felt that language barriers were an important factor in their decision to treat metastatic breast cancer with palliative chemotherapy. 22
Hospital/physician/insurance characteristics
Four studies discussed differences in hospital, physician, or insurance status and their effect on the percentage of patients treated with chemotherapy 19,23,24,28 ( Table 2).
Insurance status
Two studies reported on insurance status. 12 significant differences; however, the very existence of the WHN likely increased the treatment percentages for uninsured women, indicating that this study does not preclude the role of insurance status in determining the rate of women receiving chemotherapy. In their qualitative study of the use of adjuvant therapies, including chemotherapy, in women with all stages of breast cancer in aggregate, Ashing-Giwa and colleagues supported the suggestion that uninsured individuals are less likely to seek medical care. 12
Medicare reimbursement rates
In the only study identified in this category, Jacobson and colleagues found that the index of excess Medicare reimbursement had a minimal effect on the overall rate of chemotherapy treatment in cases with metastatic breast cancer, but did find that more generously reimbursed providers were more likely to choose more expensive chemotherapy regimens. 23
Physician type
In a prospective survey of French specialists, including oncologists, radiotherapists, gynecologists, and internists, 1,009 patients with metastatic breast cancer, aged 65 to 74 years or older than 75 years, were evaluated. 19 Results indicated that physician type played a role in treatment decision-making, with treatment chosen by a single physician, rather than in consultation, in 30% of cases in the younger age group and in 43% of patients in the older group. Freyer and colleagues noted that geriatricians were involved in only 2% of treatment discussions for the older patients. 19 This difference was proposed as a possible factor underlying the observed substandard treatment of older women with breast cancer.
Type of practice
Peele et al reported that women attending university-based practices were significantly more likely (P < 0.01) to choose adjuvant therapy, 28 including chemotherapy, hormone therapy, and combination therapy. No other studies were identified that discussed practice type.
Clinical characteristics
Tumor characteristics and disease severity were the most frequently discussed clinical factors, considered in five of the seven studies identified in this category 17,18,20,22,28 (Table 3). The patient's general health and co-morbidities were mentioned in four studies, 14,18,19,22 and previous cancer treatments were mentioned in two studies. 18,22 Three studies in this category were specific to metastatic breast cancer. 18,19,22
Tumor characteristics/disease severity
Studies have demonstrated that chemotherapy is the preferred treatment in achieving pathologically complete remission in cases that are ER- and/or PR-negative, 36 while hormonal and endocrine therapies may be more effective in the treatment of hormone receptor-positive disease. 37 Accordingly, general clinical reviews indicated that chemotherapy should be used as the initial treatment in cases that are hormone receptor-negative 37 and as treatment for ER/PR-positive advanced breast cancer that is refractory to hormonal therapy. 38 The results of two analyses of the SEER-Medicare linked database may reflect these recommendations. In Du and Goodwin, 17 patients with ER-negative disease (and positive lymph node status) were 425% more likely to receive chemotherapy than those with ER-positive disease. In another publication by Du and Goodwin, 18 the overall proportion of breast cancer patients receiving chemotherapy was 69.9% and 48.4% for women with node-positive/ER-negative tumors (age groups 65-69 and older than 65 years, respectively), while only 4.8% and 2.9% of women with node-negative/ER-positive disease received chemotherapy (age groups 65-69 and older than 65 years, respectively); however, neither study was specific to metastatic breast cancer for this particular factor.
Women with more severe disease (defined as having larger tumors, hormone receptor-negative disease, and node-positive disease) were more likely to undergo chemotherapy, 17,18,20,28 especially when thoroughly informed about their treatment options through a decision aid, compared to a control pamphlet, in one trial (P = 0.04). 28 The clinical characteristics most frequently cited as being important in clinicians' decisions to recommend palliative chemotherapy were the pace of disease progression (89.7%) and site of metastases (79.3%), with tumor histologic type/grade cited less frequently as important to the decision-making process (24.1%). Other factors noted by more than half the clinicians as important included symptoms other than pain, concurrent medical conditions, site of metastases, toxicity with previous chemotherapy, pain, patient's wishes, frailty, age, and social support. 22
General health/co-morbidities
The subjective determination of the patient's general health status was the most important criterion reported by clinicians for treatment with weaker doses of chemotherapy in a French prospective study of metastatic breast cancer cases. 19 Performance status of the patient was one of the most frequently cited influential factors (96.6%) in recommending palliative chemotherapy among UK clinicians. 22 Other factors relating to patient health were also considered important, with 93.1% and 82.8%, respectively, agreeing that patient frailty and concurrent medical conditions were important. One study of breast cancer cases reported a statistically significantly lower probability of receiving chemotherapy for women with a co-morbidity index of two, compared to those with a co-morbidity index of 0 (OR = 0.46; 95% CI: 0.27-0.76), although this inverse relationship was not statistically significant among patients with a co-morbidity index of 3 or greater. 18 Caban and colleagues did not find a significant difference in the rate of neoadjuvant chemotherapy treatment based on patient disabilities that limited mobility. 14 While effects of specific co-morbidities may vary, the current studies indicate that overall health is an important factor in predicting receipt of chemotherapy for breast cancer.
Previous cancer treatments
Du and Goodwin provide data on the proportion of women with metastatic breast cancer who received chemotherapy according to their previous breast cancer treatments. 18 The rates ranged from 19.4% (in those previously treated with mastectomy only) to 30.2% (in those previously treated with mastectomy and radiotherapy), although the authors did not report whether the difference was significant and did not indicate any clear trend, making it difficult to draw any conclusions from these data. Grunfeld and colleagues 22 cited "toxicity with previous chemotherapy" and "previous response to chemotherapy" as important factors in the decision to treat metastatic breast cancer with palliative chemotherapy by 79.3% and 86.2% of surveyed clinicians, respectively.
Psychosocial characteristics
Psychosocial characteristics studied in relation to receipt of chemotherapy for breast cancer included the presence of social partners or support, 18,22,27 mental health, 21,22 and the attempt to minimize the psychosocial impact of cancer on social, work, and family lives 13 (Table 4).
Social support/partners
Two studies provided numerical data regarding the impact of a spouse or significant other on the receipt of chemotherapy treatment, 18,27 one of which provided data specific to metastatic breast cancer. 18 Osborne and colleagues and Du and Goodwin reported that married women were more likely to receive chemotherapy than unmarried women (married = 12.3% versus unmarried = 9.1%; 27 married = 37.4% versus unmarried = 20.7% 18 ). One study suggested that unmarried women might receive chemotherapy less often due to patients' personal concerns over postoperative assistance and transportation or the amount of out-of-pocket expense for treatment, or due to a doctor's decision not to discuss such treatment options because of these assumptions. 27 Of British clinicians surveyed, 51.7% reported that the patient's social support was an important factor in their decision to give palliative chemotherapy to women with metastatic breast cancer. 22
Mental health
In an analysis of the SEER-Medicare linked database, the authors reported that, among women whose breast cancer was stage IV at diagnosis, cases with a prior diagnosis of depression were less likely to receive chemotherapy than were women without a prior diagnosis of depression.
Psychosocial impact minimization
Butow and colleagues reported that Australian women with metastatic breast cancer who were attempting to minimize the impact of their disease on their social, work, and family life (termed "minimizers") were significantly less likely to receive chemotherapy than those who were not minimizing the impact of the disease ("nonminimizers"). 13 Specifically, 55% of nonminimizers received chemotherapy, a statistically significant difference compared with minimizers (P < 0.001).
Patient/family wishes
Grunfeld and colleagues reported that 96.6% of British clinicians considered the desire of the patient to continue treatment an important factor in their decision to recommend palliative treatment for metastatic breast cancer. 22 The wishes of the patient's family were reported to be influential to a lower proportion of clinicians (37.9%). 22
Discussion
In this review of literature describing factors associated with receipt of chemotherapy among women with breast cancer, we found that women receiving chemotherapy tended to be younger, healthier, more frequently Caucasian, and of higher educational status, and had clinical characteristics of more severe disease, such as ER/PR-negative tumors.
There was some evidence that the type of physician and attending a university-based facility were related to more frequent use of chemotherapy. Women with emotional/mental health issues and less social support were less likely to receive chemotherapy, although these observations need to be replicated in additional studies to determine whether they constitute consistent trends. There was less evidence that factors such as income, insurance characteristics, or Medicare reimbursement rates had substantive influence on whether chemotherapy was used, with sometimes only a single study discussing these factors. Only six of the 19 studies focused on women with metastatic breast cancer, with the remaining studies analyzing cases of stage IV breast cancer in combination with all other stages of breast cancer. Typically, only a small percentage of the aggregated cases had metastatic disease. As noted earlier, the emphasis on palliative, rather than curative, treatment for those with metastatic disease would most likely influence differences in treatment patterns. However, a qualitative review of these studies reporting specifically on metastatic breast cancer compared to those evaluating all cases in aggregate revealed no striking differences in factors affecting treatment receipt across stage, suggesting that the factors listed above are likely important for women with both early-and late-stage disease.
The patterns of chemotherapy use observed in this review appear to largely reflect a few general underlying influences. The lack of chemotherapy for older women is important, because breast cancer incidence and mortality rates peak in the elderly, 2 and the population of elderly women in the US is growing rapidly. 39 The conservative use of chemotherapy in the elderly has been reported consistently for several other cancer types. [40][41][42][43] Older age and multimorbidity are often intertwined as co-morbidities increase with advancing age, which may limit treatment options. 44,45 The less frequent use among older women and women with lower general health status and higher numbers of co-morbidities may also reflect lack of evidence of efficacy, because these women are typically underrepresented in clinical trials. It is also possible that, because the proportion of ER-positive tumors increases with age, 46 the less frequent use of chemotherapy among older women reflects the shift from cytotoxic regimens to hormonal therapies, as per current clinical guidelines for treatment of hormone receptor-positive tumors. 6,44 The less frequent use of chemotherapy among ethnic/racial minorities, and women of lower educational attainment, is consistent with the results of other studies of health disparities in cancer outcomes and treatment, as well as other diseases, and most likely reflects limitations in access to care and unawareness of treatment options. 35,47 The decreased chemotherapy use among women with emotional and mental health issues may reflect the recognition on the part of the treating physician, and possibly the patient herself, of the trade-offs between quantity of life gained and quality of life lost at this juncture. 7 There are a few important limitations to consider when interpreting these results. In addition to the relative paucity of studies examining factors that affect receipt of chemotherapy, with only 19 studies identified since 2000, no study attempted to thoroughly dissect the many potential factors that may be involved. Most studies considered only one or two factors, and as such, were unable to adjust for correlations and potentially confounding effects, possibly creating spurious relationships and/or obscuring true ones. This could have considerable effects on conclusions derived from observational and other epidemiologic studies, as well as outcomes research. It is also likely that several other factors not reported in these papers, or factors that fell outside the four categories of interest, play an important role in the decision to treat metastatic breast cancer with chemotherapy. Therefore, comprehensive studies examining these interrelated factors are needed to better understand the factors associated with receiving chemotherapy for metastatic breast cancer. Finally, because the study designs, the populations studied, and the measures of association varied so much across studies, summary measures could not be calculated, and results reflect a more qualitative than quantitative review of the literature.
Studying the patient, insurance/provider, and psychosocial factors associated with receipt of chemotherapy among breast cancer patients can provide a real-world view of usage patterns of treatment in clinical practice. While it is apparent that a combination of clinical guidelines, possible social disparities in health care, and personal decision-making styles and beliefs among patients and their providers play a role in chemotherapy use, the current body of literature does not allow quantitative comparisons and assessments of the relative contributions of each selective factor. This review highlights the paucity of quantitative literature available with which to study the impact of these factors, both individually and as interrelated variables that inevitably impact each other. Future studies examining these patterns can aid patients and care providers when interpreting the clinical literature and making decisions about best course of treatment. Differential patterns of care can also provide guidance to policy makers when developing programs and interventions to reduce disparities in health care access. | 2016-05-04T20:20:58.661Z | 2010-08-09T00:00:00.000 | {
"year": 2010,
"sha1": "8616ac11144622c8a1c9aff58b98cbc6a5f4cd67",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=6245",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4bd3863827dd8093597374819cf339dd3c851f43",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230681124 | pes2o/s2orc | v3-fos-license | Media, Community Building, and Refugee Resettlement Policies: The Impact of Canada’s Welcoming Culture and Media Coverage on the Settlement Outcomes of Resettled Syrian Refugees
Abstract: This paper argues that positive online media coverage of Syrian refugees arriving in Canada, and the welcoming culture of Canadian society, have both influenced positive settlement and integration outcomes for Syrian refugees. It also provides a better understanding of Canada's response to the Syrian refugee crisis and shows how the process of resettlement becomes stronger when local community members and citizens are involved. These arguments are demonstrated firstly by analyzing the relationship between welcoming cultures, positive media coverage, and the perception of refugees. Secondly, the role of media coverage in influencing welcoming cultures in Canada, as well as its role in encouraging community members and ordinary citizens to be involved in national humanitarian projects, is examined.
Mounir Nasri*, Queen's University, Canada, https://orcid.org/0000-0001-5271-3676
Introduction
In November 2015, when the new Canadian federal government led by Justin Trudeau and the Liberal Party came into power, a commitment was made to help resettle 25,000 Syrian refugees before the end of February 2016 1, 2 . With that announcement, the state began to lead various initiatives in order to help refugees settle, providing housing, employment, education and other essential needs. Citizens and members of civil society also played a big part in this national project, as they took on different roles to help empower the newcomers 3 . The response did not stop there. Canadian media channels, organizations and governmental institutions also took this opportunity to highlight on their platforms the welcoming attitude of the host community and to cover the settlement journeys of the newly arrived refugees. The coverage was not limited to traditional media outlets; the topic also received extensive local and international attention online and on emerging social media platforms.
This paper focuses on exploring the influence of media coverage and public opinion on the settlement and integration outcomes of resettled Syrian refugees in Canada. It will argue that positive online media coverage of Syrian refugees arriving in Canada, and the welcoming culture of Canadian society, have both influenced better settlement and integration outcomes for many Syrian refugees. It will also provide a better understanding of Canada's response to the Syrian refugee crisis and show how the process of resettlement becomes stronger when local community members and citizens are involved. I will demonstrate these arguments firstly by analyzing the relationship between welcoming cultures, positive media coverage, and the perception of refugees. Secondly, I will examine the role of media coverage in influencing welcoming cultures in Canada, as well as its role in encouraging community members and ordinary citizens to become involved in national humanitarian projects. Finally, information related to Canada's welcoming culture and positive media coverage is discussed relative to settlement outcomes, which portrays the strong influence of storytelling and inclusive communities on the success of new immigrants as they rebuild their lives in a new country. I will also outline the various refugee resettlement programs in Canada. A diverse range of academic research was used and analyzed to understand whether there is a connection between public opinion, media coverage, and the public perception of refugees. In order to provide specific context and examples on the topic, two widely circulated and high-profile refugee stories were examined to explore the influence of media and public opinion on refugees. Additionally, a range of reports and studies by the Canadian government and international organizations like the UNHCR were analyzed to highlight the different refugee resettlement streams in Canada, and to examine the short and long term settlement and integration outcomes of each program.
3 J. Hyndman, W. Payne, S. Jimenez, Private refugee sponsorship in Canada, "Forced Migration Review", 54: 2017, pp. 56-59.
There is a lot that can be learned from the response to the Syrian refugee crisis in Canada, lessons that may be valuable for similar future projects 4, 5 . The role of media in the topic of migration is instrumental as it sets the agenda for public discourse, identifies opportunities and challenges, and may also provide solutions that lead to smooth and more successful integration of refugees in host countries 6 . According to Reker 7 , the term "Welcoming Culture" initially emerged in Germany in national conversations on demographic changes and refugees. It can be defined as a positive attitude of societies towards foreigners, and particularly towards refugees and immigrants. The appearance of a welcoming culture for refugees in the Canadian context can perhaps be dated back to the late 1970s, with the launch of the private sponsorship program and the establishment of a system by the government of Canada to engage the public in responding to the Indochina refugee crisis. The private refugee sponsorship program gave people the chance to take action by helping Indochinese refugees settle in Canada 8 . In 1986, the program won the United Nations Nansen Medal, the only time a whole country has ever been recognized with this refugee-focused award 9 .
Since its inception, the private sponsorship program has allowed Canadians to offer a new home to more than 200,000 refugees 10 . Evidence suggests that the process of integration has proven to be more successful in cases of community involvement than for those admitted into the country through governmental assistance 11 . It is also important to note that private sponsorship came in as an addition to government-assisted resettlement commitments, and not as a substitute 12 . In other words, the private sponsorship program arose out of the desire to do more for refugees 13 . This demonstrates the historical background behind the role of community members in influencing welcoming cultures for refugees in Canada.
4 S. Marwah, Syrian refugees in Canada: Lessons learned and insights gained, "Ploughshares Monitor", 37 (2): 2016, pp. 9-11.
5 S. Pathberiya, Paying it forward: Lessons Learned from Syrian resettlement can prepare us for the future waves of climate change refugees, "Alternatives Journal", 42 (3): 2016, pp. 62-66.
Relationship between media coverage and perception of refugees
To argue that positive online media coverage has had an impact on Canada's response to the refugee crisis, it is important to prove first that there is a strong relationship between media and the perception of refugees. According to Danilova 14 , in the age of globalization, media have become a very powerful tool and one of the main determinants of public opinion. Media, and particularly online media, have the ability to raise awareness about the daily experiences and challenges faced by refugees. More importantly, media coverage can influence the framework of immigration policy debates and the decision-making process 15,16 . Lawlor and Tolley 17 affirm that there is a very strong relationship between media coverage and the perception of refugees. Their research (2017:968) states that media can either lead or follow public opinion, and that they hold the ability to shape policy responses towards refugees and migrants. A strong example of the relationship between media coverage and the perception of refugees can be observed through the story of Alan Kurdi, a three-year-old Syrian boy who drowned in the Aegean Sea in September 2015. The photos of Alan Kurdi triggered strong international reactions from governments, NGOs, citizens and politicians 18 . In fact, the Canadian response to the Syrian refugee crisis was arguably mainly fueled by this incident. The influence that Alan Kurdi's photo had was crucial in shaping the Canadian public's impression of Syrian refugees 19 . The media coverage of this incident was a major contributor to the government's response and to mobilizing citizens. Tyyska, Blower, Deboer, Kawai and Walcott 20 also found that media coverage plays an essential role in the construction of socially shared understandings and influential representations of newcomers, especially refugees.
That said, it is important to note that the existing relationship between media and the public perception of refugees is still a very sensitive topic, given that it can be steered in directions that will not necessarily be in the best interests of vulnerable refugees. Kosho 21 indicates that various studies on media coverage and public opinion concerning refugees have shown that the images in the media, the descriptions, and the labeling of immigrants and refugees can influence public attitudes towards immigrants and impact national policies on immigration. Media coverage holds a certain type of power over public opinion, and citizens should be aware and critical of the type of information they receive, or even seek. As framed by Boomgaarden, Matthes and Lechele 22 , the refugee crisis has shown how central digital and social media communication has become in many citizens' lives, with online social media platforms like Facebook and Twitter acting as first sources of information, as well as potential risks for political radicalization.
Positive coverage of Syrian refugees in Canadian media and welcoming attitudes of Canadian communities
Canada's response to the Syrian refugee crisis in late 2015 can be considered unique and inspiring at a time when most Western countries were closing their doors to refugees. Positive online media coverage contributed significantly to the development of this national response. Portraying Syrian refugees positively through the online platforms of Canadian media and institutions encouraged private sponsors across various communities in Canada to take action and support the newly arrived refugees 23 , unlike in other countries, where negative coverage resulted in lower support for Syrian refugees. In fact, a 2018 report by the Environics Institute for Survey Research showed that one out of three Canadians had a connection with the Private Sponsorship of Refugees (PSR) program, either directly or through someone they knew 24 . The PSR is a program that allows Canadian citizens and permanent residents to take part in the resettlement process by offering protection and helping refugees build a new life.
The Canadian online media coverage that focused on the public's volunteerism in relation to refugee resettlement is noteworthy 25 . Videos produced by organizations like UNHCR Canada and World Vision Canada were shared on social media platforms and gained thousands of viewers 26, 27 . These videos focused on the welcoming attitudes presented by Canadian communities and citizens, and further enhanced the relationship between refugees and the newest community members. Swain reminds us that social media can both inform and misinform the public, and make significant changes in people's perceptions. The media had a positive impact on people's understanding of different cultures and struggles 28 . This is something we can observe in the Canadian context, where many Canadian citizens had first-hand contact with Syrian refugees through the PSR program or by volunteering. Shedding light on these positive relationships through online content shaped the perception of the public and encouraged further community responses. Coker 29 states that the media attention on the lived experiences of Syrian refugees moved Canadians to respond. An example of that can be seen through the actions of ordinary citizens who came together through online groups such as GTA Refugee Assistance Hub, a growing Facebook group with more than 3500 members, to support the newly arrived refugees by donating items and sharing resources 30 . It is arguable that online media platforms, whether through social media websites or online news coverage, have been instrumental in creating, or perhaps improving, welcoming attitudes towards Syrian refugees within Canadian communities.
23 V. Tyyska
28 L. Swain, The Influence of the Media on Public Perspective of the Syrian Refugee Crisis, "Concrete", October 6 2015, https://www.concrete-online.co.uk/the-influence-of-the-media-onpublic-perspective-of-the-syrian-refugee-crisis/.
Impact of positive online media coverage and welcoming cultures on the settlement and integration of refugees
The online coverage of Syrian refugees in Canada has influenced welcoming attitudes among diverse populations in Canadian society. Based on that finding, it is safe to argue that the welcoming attitudes of the current government, organizations and ordinary citizens have contributed to successful settlement and integration outcomes for refugees. According to Tyyska, Blower, Deboer, Kawai and Walcott 31 , the acceptance and integration of Syrian refugees was mainly dependent on the way certain media platforms related the resettlement plan to the Canadian public. UNHCR Canada's 2017 data also showed that after less than one year, 90% of Syrian refugees resettled between 2015 and 2016 reported having a strong or very strong sense of belonging to Canada 32 . Welcoming cultures can allow citizens to contribute more and refugees to integrate better. A good example of that is the story of Danby CEO Jim Estill, a Canadian citizen who was recognized as a global hero for his humanitarian actions in response to the refugee crisis. Estill contributed 1.5 million dollars to sponsor 58 Syrian refugee families - more than 200 people - and helped them settle and integrate in his hometown of Guelph, Ontario, by providing language training, job opportunities and even helping some start their own business 33 . He took the lead and engaged over 800 volunteers to support this humanitarian initiative. Estill's story is one of many stories that demonstrate the strength of welcoming cultures and communities. The story was circulated widely on social media platforms and perhaps encouraged new community members to take action and help. Another positive impact that the online coverage created can be witnessed in the settlement and integration of refugees and their communities, where the successful settlement of one family or individual can shape public opinion and also help fellow refugees find or navigate new opportunities. The Hadhad family are a good example of this. The Hadhad family landed in the town of Antigonish in Nova Scotia with the arrival of 25,000 Syrian refugees back in 2015. With the support of their new community, the family was able to rebuild their chocolate business that was bombed in their home country 34 . They called the business "Peace by Chocolate", and have been utilizing the power of social media platforms to promote their story and business. The story went viral online after it was mentioned by Canada's prime minister at the Leaders' Summit on Refugees at the UN in New York 35 . Today, almost three years later, the Hadhads are hiring an additional 25 local community members to meet demand 36 . It is hard to imagine this success happening without a welcoming culture for refugees and positive media coverage supporting it.
29 C. Bolu
33 A. Kassam
Canada's Refugee Resettlement Programs
According to the evaluation division at the Ministry of Immigration, Refugees and Citizenship Canada (IRCC), resettled refugees can be admitted into the country via one of the following three resettlement programs 37 : the first program is the GAR, otherwise known as Government-Assisted Refugees. Through this program, the refugees to be sponsored are usually referred by the United Nations High Commissioner for Refugees (UNHCR) or other designated referral agencies, and the Government of Canada provides them with initial resettlement services and supports them financially for up to one year. GARs are also eligible to receive resettlement services (i.e. reception at port of entry, temporary housing, assistance in finding permanent accommodation, basic orientation, links to settlement programming and federal and provincial programs) provided by special organizations that have signed a contribution agreement with the government in order to deliver these services. The second program is PSR, otherwise known as Privately Sponsored Refugees. Under this program, refugees are sponsored by permanent residents or Canadian citizens. Refugees under the PSR program are resettled under the same conditions as those under the GAR program, but the PSR program allows the private sponsors to be more involved (one on one) in the resettlement process and offer protection over and above what is provided directly by the government (i.e. principle of additionality). The third program is called BVOR, the Blended Visa Office-Referred. Under this program, refugees are referred by the UNHCR or other designated referral agencies and identified by Canadian visa officers for participation in the BVOR program based on specific criteria. BVOR refugees receive up to six months of financial support from the Government of Canada, and six months of financial support from their sponsors, plus start-up expenses. Private sponsors are responsible for supporting the BVOR refugees socially and emotionally during their first year of arrival.
One of the main differences between the three programs is the involvement of community members and citizens in the resettlement process. Privately Sponsored Refugees and the Blended Visa Office-Referred benefit from that involvement. For the purpose of this research, a deeper study into the Private Sponsorship Program will be conducted in order to present how it is strengthened by the support of local Canadians.
Benefits of Implementing the Private Sponsorship Program
According to the Canadian Council for Refugees (CCR), the private sponsorship program holds a unique position and has strong advantages when it comes to refugee resettlement in Canada. The program is capable of enriching the lives of both refugees and Canadian community members. For Canadians, it is a chance to contribute directly, be a part of a refugee resettlement process, and impact the lives of people who are seeking security and stability. For refugees, it is a chance to connect with people who know the country, and who can guide them through the process of blending in. Refugees tend to rely on their sponsors to overcome the loneliness and the isolation that come from the experience of being a refugee 38 . The CCR also argues that private sponsorship does not rely solely on public resources, but instead collects support and funds from faith and ethnic groups, families, and other community organizations. The support offered through the program is the equivalent of approximately $79 million annually, as well as an estimated volunteer contribution of over 1,600 hours per refugee family 39 .
Jennifer Bond, chair of the Global Refugee Sponsorship Initiative and a professor at the University of Ottawa, talks about the benefits of the private sponsorship program by stating in a media interview that "The provision for private citizens to offer help not only provides a vehicle for tapping into communities, but also allows for individual Canadians to feel and be engaged." One of the strongest pillars of this program is its community engagement component, since it facilitates and creates opportunities for a deeper cultural understanding among both refugees and sponsors. The close relationship that is built up during the one-year sponsorship period tends to create bonds that go beyond the obligation, and soon become life-long friendships. Refugees have also reported a high level of satisfaction with their transitional experiences 40 . Krivenko also states that the private sponsorship of refugees is a key agenda for successful integration through the following: No institution, including government, is able to create personal and lasting links between the refugees and the local community. The private sponsorship program naturally creates these links. It is imperative that any integration policy, and any program, draws on lessons drawn from the PSR and creates mechanisms that allow refugees to establish personal ties with the host community. Multiple forms of public-private partnership are conceivable in this context 41 . 38 P. T. Lenard, op. cit., p. 304. 39 Canadian Council for Refugees (CCR), The Private Sponsorship of Refugees Program: Current Challenges and Opportunities, April 2006, p. 2, http://ccrweb.ca/en/private-sponsorshiprefugees-program-current-challenges-and-opportunities.
In interviews conducted by the Global Refugee Sponsorship Initiative 42 , sponsors and refugees shared how the overall sponsorship process has strengthened their relationship and allowed them to understand each other's differences in culture and habits. These community relationships have also given the sponsors a way to understand firsthand the real struggles and challenges that these refugees faced prior to their arrival in Canada, and this has created public awareness and stimulated compassion towards strangers from other parts of the world. Another advantage of applying the private sponsorship model is that it places fewer financial obligations on the government to fulfill the needs of refugees during their first year, given that the sponsors are the ones who raise the funds to cover the expenses of the refugees, while the government mainly focuses on administering the program 43 .
Role of Community in Resettlement
Treviranus and Casasola 44 state that the strength of sponsors and community members lies in their capacity to dedicate financial resources and time, their knowledge of their community, and their networks, as well as personal support. That said, the role of the community in successful refugee integration through the private sponsorship model is also demonstrated by the government's announcement that it planned to welcome four times the average number of refugees admitted in the ten years up to and including 2015 45 .
In a country like Canada, where immigration is considered to be an essential part of its formation, it is no surprise that communities and citizens are able to relate well to the journeys of refugees and migrants, and to respond to them accordingly. With the arrival of Syrian refugees in late 2015, the response of community members was remarkable, and triggered memories of the Indochinese influx (the "Boat People") back in the 1970s 46 . We have also observed how Vietnamese refugees who experienced the same Canadian welcoming spirit have wanted to give back years later and help Syrian refugees integrate and thrive 47,48 . We note that the involvement of active community members and volunteers can in fact strengthen and improve the overall process of refugee resettlement. Such collaborative actions can have various advantages and benefits both for those providing the help and for those receiving it.
Conclusion
The impact that positive online media coverage has had on the Canadian response to the Syrian refugee crisis from 2015 to 2018 is evident, and it has undoubtedly resulted in successful settlement and integration outcomes for many Syrian refugees. The welcoming culture of Canadian society was critical during this process, as it strengthened the country's collective community bonds and brought people from different backgrounds together for a shared national purpose. Modern and emerging social media platforms, digital storytelling tools and interactive websites have played an essential role in making all these group efforts possible.
The private refugee sponsorship model in Canada was examined to show how government and citizens can work together to achieve better humanitarian, social and economic results. We have seen how the involvement of citizens and community members in the settlement process can bring about positive results in the lives of refugees as they take their first steps in a new country. The impact goes well beyond finding employment, assistance with health-related issues, social support, or cultural integration. The impact is seen and felt in the friendships and community connections that are built between local citizens and refugees throughout the entire sponsorship process and beyond. This, however, does not mean that other refugee resettlement programs are not effective or helpful. It demonstrates that there is an opportunity to build on success and community connections through storytelling and positive media coverage to encourage citizens to take action and create a more welcoming environment for refugees and future citizens.
Moreover, the Canadian response to the Syrian refugee crisis has demonstrated to the world a different approach to civic engagement and humanitarian work. This national humanitarian response may be perceived as a major success. However, it also leaves us with many unanswered questions around the topic, and most importantly, questions about the relationship between politics and power, citizenship, culture, online media and public opinion. | 2020-12-17T09:09:47.573Z | 2020-07-10T00:00:00.000 | {
"year": 2020,
"sha1": "928ec8a31e4fa5e89c4e3e724a1e659fe823a770",
"oa_license": "CCBYSA",
"oa_url": "https://bibliotekarzpodlaski.pl/index.php/bp/article/download/474/526",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6ce0b9913b3f162c887eebb738361ae29804e620",
"s2fieldsofstudy": [
"Sociology",
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
55057892 | pes2o/s2orc | v3-fos-license | Potyvirus Affecting Uchuva (Physalis peruviana L.) in Centro Agropecuario Marengo, Colombia
Fruit production, and especially the fresh tropical fruit trade, has an important relevance for the world economy. Refining knowledge on virus diseases affecting tropical fruits is required to improve the understanding of these diseases, their dynamics and, consequently, the ability to manage them. In this paper, samples of "uchuva" plants (Physalis peruviana L.) obtained from Centro Agropecuario Marengo (CAM), Municipality of Mosquera, Cundinamarca region of Colombia, were analyzed after expressing symptoms of leaf chlorosis, leaf malformation, mosaic patterns and dwarfing. Electron microscopy revealed the presence of two different viral particles congruent with the morphology of the Potyvirus and Tobamovirus genera. The presence of Potyvirus affecting the P. peruviana L. culture was confirmed in the samples analyzed by means of electron microscopy images and serology. Similarly, the existence of viral particles with characteristics consistent with a putative Tobamovirus was observed. However, its presence could not be confirmed by means of serological tests. Nevertheless, its incidence should not be neglected. The mechanism of Potyvirus disease transmission in P. peruviana L. remains unknown, as well as the vectors associated with this disease. Therefore, complementary work and research should be considered. In addition to serology and electron microscopy, the use of indicator plants for diagnosis is suggested. Finally, a complete molecular characterization of the Potyvirus is recommended for a better understanding of the characteristics of its association with P. peruviana L.
Introduction
Physalis peruviana L., a member of the Solanaceae family, is a well-known exotic fruit worldwide and one of the most promising fruit crops in Colombia [1], the country with the highest production and the largest acreage in the world [2] [3]. This plant is grown as an annual or perennial culture according to the local growth conditions [4] [5]. Its fruits are characterized by being protected inside the "capacho", a common name given to the widespread calyx, which facilitates their transport and also allows them to easily be sold as a fresh fruit product [6] [7]. Further potential uses include dehydrated products [8], juices, flesh pulp and sugar-containing derivatives like jam [9], chocolate and ice cream [6], due to its excellent nutritional properties [10] [11]. In addition, P. peruviana L. is the object of diverse studies aimed at taking advantage of its secondary metabolites [12], which exhibit wide biological properties with potential pharmacological, medicinal [4] [13], and insecticidal uses [10].
In Colombia, Fusarium oxysporum is the pathogen that causes the main problems in P. peruviana L. fields [14]. At the production stage, most phytopathological problems are caused by fungal pathogens such as Alternaria spp., Cladosporium spp., Phytophthora infestans [15], Cercospora spp. and Phoma spp., and by bacteria such as Ralstonia solanacearum [16] [17]. Other phytosanitary problems in Solanaceae are caused by several viruses. In the case of P. peruviana L., the occurrence of the genera Alfamovirus, Bigeminivirus, Comovirus, Cucumovirus, Fabavirus, Fomovirus, Furovirus, Hybrigeminivirus, Ilarvirus, Luteovirus, Nepovirus, Potexvirus, Potyvirus, Tobamovirus, Tospovirus and Tymovirus [18]- [23] has been reported. In addition, the viroid Potato spindle tuber viroid (PSTVd) was reported to affect plants of P. peruviana L. in Turkey and New Zealand [24], and also in materials of a producer in Germany [25]. In Brazil, the presence of a Tospovirus was reported affecting 100% of a commercial plantation [18], which turned out to be, apparently, the first report of the occurrence under natural conditions of Tomato chlorotic spot virus (TCSV). In Colombia, reports include the genera Cucumovirus, Potyvirus and Tobamovirus [26]- [28] identified individually. In the case of mixed infections, inclusion bodies similar in morphology to those of the Potyvirus/Cucumovirus genera have been reported [26]. The variation in the available information, though consistent, regarding the production and area sown with P. peruviana L. [29] gives an important idea of the particular characteristics of Colombia's productive system and its migratory character [14]. However, as occurs with many viral diseases, it is difficult to correlate sanitary and economic information. Specifically, in this case, no data were available to associate viral diseases with economic losses in P. peruviana L.
This article shows evidence of the presence of Potyvirus associated with P. peruviana L. plants. The increased scientific and economic relevance of this crop worldwide implies the need for further research and analysis on this topic in order to understand and manage virus diseases of P. peruviana L.
Biological Samples
Leaf samples from P. peruviana L. plants ("Colombia" ecotype), 18 months after transplant, which showed symptoms of chlorosis, severe defoliation, mosaic and dwarfing, were collected at two different times during the year 2011 from field number five at the Centro Agropecuario Marengo (CAM) (Figure 1), located at 2354 m a.s.l. (above sea level). During the first collection (February), samples of approximately five to eight leaves per plant were randomly collected throughout the field from plants expressing the symptoms mentioned above. In the case of the second collection (March), a systematic procedure was followed on plants previously selected, based on the first collection results, including only their upper and mid-sections, with the same number of leaves. Samples were processed by means of serological tests and electron microscopy. In order to avoid tissue damage due to natural oxidation processes, all samples were kept in cold storage (4 °C) between the different analyses, which also facilitated the description of the symptoms observed in the field. The plant suspension used in the different processes was obtained only from the collected leaves, ground in the presence of phosphate buffer.
Electron Microscopy
The suspension extracted from infected leaf tissue was processed by means of negative staining using copper grids covered with the polymer "formvar". The grids were placed on the leaf extract for five minutes and later washed three times with distilled sterile water. The grids were then stained in an aqueous solution of uranyl acetate for five minutes and finally desiccated for further manipulation under the electron microscope (JEOL JEM 1010), using the software Analysis 3.0 for the measurement procedures.
Figure 1 (partial caption). Abnormal leaf tissue growth on symptomatic P. peruviana L. plants; some leaves conserved their typical "heart shape" form even after being affected by virus diseases. (e) Fruit bearing on a virus-affected plant of P. peruviana L. (f) Overview of the initial growing conditions of the P. peruviana L. field sampled.
Immunostrips
Leaves as a source of plant tissue were ground in the presence of phosphate buffer. The homogenate was transferred to 2.0 ml tubes and analyzed with immunostrips specific for Potyvirus and Tobamovirus (Kit Immunostrip Agdia ®). After an incubation of 15 minutes to allow the reaction of the antigen and the antibody to take place, the immunostrips were evaluated.
PTA-ELISA
In this detection test, the antigen (sample) was coated with a sodium carbonate buffer. A specific monoclonal antibody for Potyvirus was used (Agdia ®), and the reaction was read at an absorbance wavelength of 405 nm (nanometers) after 30 minutes (first-collection samples) and 60 minutes (second-collection samples) of incubation, using an ELISA Dynex-MRX reader (Table 1). In all tests a positive control for Potyvirus (infected leaf tissue) was included, healthy leaf tissue was used as a negative control, and sodium carbonate buffer without plant tissue was used as a blank control.
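The paper reports positive or negative reactions derived from the A405 readings but does not state the cutoff used. As a rough illustration only, the Python sketch below applies one common convention, calling a sample positive when its absorbance exceeds twice the mean absorbance of the healthy-tissue negative controls; the 2x factor and the example readings are assumptions, not values taken from this study.

```python
# Hedged sketch of one common PTA-ELISA positivity rule: a sample is scored
# positive when its A405 reading exceeds 2x the mean absorbance of the healthy
# (negative-control) leaf tissue. The 2x cutoff is an assumed convention; the
# paper does not state which threshold was actually applied.
def classify_elisa(sample_a405, negative_control_a405, factor=2.0):
    cutoff = factor * (sum(negative_control_a405) / len(negative_control_a405))
    return [("+" if reading > cutoff else "-") for reading in sample_a405]

if __name__ == "__main__":
    negatives = [0.08, 0.10, 0.09]            # healthy leaf tissue readings (illustrative)
    samples = [0.45, 0.12, 0.31, 0.15]        # field sample readings (illustrative)
    print(classify_elisa(samples, negatives))  # ['+', '-', '+', '-']
```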
Results and Discussions
The results obtained confirmed the presence of Potyvirus associated with P. peruviana L. in Colombia. The images (Figure 2(a) and Figure 2(b)) showed inclusion bodies of a flexuous type with a length of more than 500 nm, congruent with the characteristics of this genus [30]. Serological tests using immunostrips and PTA-ELISA (Table 1) also confirmed the presence of Potyvirus. In addition, another viral particle (Figure 2(c) and Figure 2(d)) was observed, suggesting a mixed infection affecting the plant. In this case, given the rigid characteristics and size between 250 and 300 nm [30], we presumed that it corresponds to the Tobamovirus genus, which has been reported to affect members of the Solanaceae family and particularly the genus Physalis [23]. Nevertheless, the presence of Tobamovirus could not be confirmed by means of serological tests. It is assumed that many Potyvirus isolates detected in different parts of the world come basically from infected materials of South American origin [31]. This region is considered to be the center of origin of several species of the Solanaceae family, which coexist with closely related wild species. Therefore, it is assumed that such viral diversity is a result of co-evolution and adaptation processes [32] [33].
Mixed infections are common and they impede accurate detection based on the biological characteristics of the pathogen [34], posing a major challenge in the diagnosis process. This is because of the synergies established with the host plant, which result in a wide range of diverse symptoms [35]. In the case of Tobamovirus, despite the fact that its presence could only be assumed by means of electron microscopy and not confirmed by serological tests, it should not be neglected, given the high diversity of both plants and viruses in South America. Negative PTA-ELISA results for Tobamovirus could be explained either by a low sensitivity of the protocol or by a low specificity of the antibody used for the Tobamovirus present in the P. peruviana L. samples [33]. Tobamovirus mobility characteristics (the cell-to-cell movement process) vary and change during the development of the infection and are still not completely understood [36]. In this case, the use of indicator plants is a helpful option for the detection of viral diseases, and diagnosis usually relies on multiple complementary methods in order to improve the process.
Only a few reports indicate problems with viral diseases in P. peruviana L. cultivation. Symptoms due to virus infections can appear or disappear depending on the environmental conditions [37] or behave as conditioning agents for other diseases [38]- [40]. Therefore, virus detection is required to avoid or minimize eventual losses due to unnoticed aspects that could lead to mistakes when planning the different management strategies required for this crop [24]. A correct identification of virus diseases is needed, since it has been proven that plant susceptibility to other pathogens is higher in plants affected by viral diseases [41].
Virus symptoms on plants include leaf deformations, color changes in specific patterns [42] [32], and local or systemic necrosis, with changes in tissue structure which can finally result in plant death. In some cases symptoms can be absent or masked [37] although the virus is present in the plant tissue. Consequently, the problem can be mistakenly attributed to nutrient deficiency, physiological disorders, xenobiotic agents or entomological damage [43] [40]. Given all these aspects, it is very difficult to correlate viral diseases with the economic losses suffered by farmers [39]. The symptoms associated with the viral disease observed on P. peruviana L., commonly expressed as a chlorotic mosaic, can vary in intensity from profound (Figure 3(a) and Figure 3(b)) to mild (Figure 3(d) and Figure 3(e)) as a result of the expression of the physiological disorder of the plant. These symptoms can be observed on whole leaves or can be restricted to the primary site of infection, depending on the level of resistance expressed by the host [37]. The abnormal growth of plant cells (Figure 3(f)), causing hypertrophy (abnormal stretching of cells) and consistent malformations (Figure 2(b) and Figure 2(d)), was observed as well. Both external and internal quality parameters of fruits decrease due to the effect of the virus disease, resulting in a shortened shelf life [41]. However, this last aspect is not completely proven in the specific case of P. peruviana L.
Nonetheless, it is necessary to point out that diagnosis should not be limited to symptom expression [37]. In this respect, isothermal methods for nucleic acid amplification [44] [45] are available nowadays for the diagnosis of viral diseases, as a replacement for traditional PCR (polymerase chain reaction). The advantages of these techniques are based on their versatility, minimal technical requirements (no thermocycler required), high specificity, high amplification rates, short reaction times [46] and the possibility of reading results with the naked eye [47]. Though these methods were initially used in medicine [48], some works show their potential use for Potyvirus detection in plum trees in Germany [47].
Another aspect to be considered is the possibility of alternative host plants for the viruses that affect the P. peruviana L. culture, among them the weed populations growing along the fields. In the Cundinamarca region of Colombia, Galinsoga spp., Raphanus raphanistrum L., Veronica persica, Hypochoeris radicata L., Holcus lanatus L., Rumex acetosella L., Polygonum nepalense M. and Rumex crispus L. are frequently found [49]. Special attention should be paid to the last two species mentioned, as they are known for their high frequency in P. peruviana L. fields in the Cundinamarca region (28% and 16%, respectively). The observation of mosaic patterns and leaf deformation similar to those observed in virus-affected P. peruviana L. plants (in the case of Rumex), and the fact that they have been mentioned in previous reports as alternative host plants for Potyvirus [30] [50], support this statement.
Mechanical transmission of viral diseases of P. peruviana L. must be considered, due to the regular practices and management procedures established for this crop [14] and due to reports of this mode of transmission for several species within the Potyvirus and Tobamovirus genera [19] [41] [30] [50]. Additionally, although vegetative propagation methods are not common in Colombia for P. peruviana L., they should be considered as a route of virus spread, given the common pruning practices within this crop. On the other hand, seed-transmissible viruses and known viroids affecting [55] P. peruviana L. [5] are also distributed in this way [25] [56]. In Colombia, seeds (seed selection, seedling production and sowing of these seedlings) constitute the main method used by growers for the establishment of their P. peruviana L. fields [2] [14] [57]. Thus, the possibility of viral transmission by means of physical methods should not be overlooked. Furthermore, even the identification and selection of promising materials of P. peruviana L. [58] should be supported by complementary plant pathology criteria to avoid the spread of these virus diseases.
Conclusions
The molecular characterization of the Potyvirus associated with P. peruviana L. is required to improve its detection and also to distinguish it from other potential virus diseases affecting this culture, especially given the intra-specific variability that they possess. In addition, there is not yet a clear identification of the transmission mechanism of this viral disease or of its possible associated vectors, for which reason it is necessary to coordinate work aimed at clarifying these aspects.
Figure 2. (a) Inclusion bodies observed and associated with the morphological characteristics of Potyvirus (orange ovals); (b) chlorotic mosaic and leaf malformation as a result of virus diseases on P. peruviana L.; (c) inclusion bodies observed and associated with the morphology of Tobamovirus (orange ovals); (d) leaf malformation on a P. peruviana L. plant infected by virus diseases.
Figure 3. (a) and (b) Severe mosaic associated with symptoms generated by viral diseases in P. peruviana L.; (c) P. peruviana L. used as an indicator plant, expressing typical symptoms (chlorotic mosaic) caused by virus diseases; (d) and (e) mild mosaic affecting P. peruviana L.; (f) typical hypertrophy observed on plants affected by viral diseases; (g) severe defoliation on a P. peruviana L. plant affected by viral diseases.
Table 1. Results obtained by means of PTA-ELISA from the samples collected, analyzed and discussed in this paper. Column headings: First collection a and Second collection b, each with Sample Label, Absorbance and Reaction.
a Absorbance at 405 nm after 30 minutes of reaction; b absorbance at 405 nm after 60 minutes of reaction. The first collection includes random materials from the Universidad Nacional de Colombia (UN) and the Centro Agropecuario Marengo (CAM). The second collection considered only materials at CAM. In the latter case, the letter and number used to label the plants correspond respectively to the row and number of each of the plants sampled in the field. A plus sign (+) indicates a positive reaction, while a minus sign (−) indicates a negative reaction. | 2018-12-06T12:00:08.797Z | 2014-08-07T00:00:00.000 | {
"year": 2014,
"sha1": "e5f031bcc68c8c1b74829a50a1d6a198c408ec71",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=49304",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e5f031bcc68c8c1b74829a50a1d6a198c408ec71",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
239619240 | pes2o/s2orc | v3-fos-license | The effect of rehabilitation protocol using mobile health in overweight and obese patients with knee osteoarthritis: a clinical trial
The objective of this randomized controlled trial (RCT) was to investigate the effectiveness of the lower limb rehabilitation protocol (LLRP) combined with mobile health (mHealth) applications on knee pain, mobility, functional activity and activities of daily living (ADL) among knee osteoarthritis (OA) patients who were overweight and obese. This study was a single-blind RCT conducted at the Teaching Bay of Rehmatul-Lil-Alameen Post Graduate Institute of Cardiology between February and November 2020. 114 knee OA patients who were overweight and obese were randomly divided by a computer-generated number into the rehabilitation group with mHealth (RGw-mHealth) to receive the LLRP + instructions of daily care (IDC) combined with mHealth intervention, the rehabilitation group without mHealth (RGwo-mHealth) to receive the LLRP + IDC intervention, and the control group (CG) to receive the IDC intervention. All three groups were also provided with leaflets explaining their intervention. The primary outcome measure was knee pain measured by the Western Ontario and McMaster Universities Osteoarthritis Index score. The secondary outcome measures were mobility measured by the Timed Up and Go (TUG) test, functional activity measured by the patient-specific functional scale (PSFS), and ADL measured by the Katz Index of Independence in ADL scores. Among the 114 patients who were randomized (mean age, 53 years), 96 (84%) completed the trial. After 3-months of intervention, patients in all three groups had statistically significant knee pain reduction (RGw-mHealth: 2.54; RGwo-mHealth: 1.47; and CG: 0.37) within groups (P < 0.05). Furthermore, patients in the RGw-mHealth and RGwo-mHealth had statistically significant improvement in mobility, functional activity, and ADL within groups (P < 0.05), but no improvement was noted in the CG (p > 0.05). As indicated in the overall analysis of covariance, there were statistically significant differences in the mean knee pain, mobility, functional activity, and ADL changes between groups after 3-months (p < 0.001). The pairwise between-group comparisons (Bonferroni post hoc analysis) of the knee pain, mobility, functional activity, and ADL scores at 3-months revealed that patients in the RGw-mHealth had a significantly higher mean change in the knee pain, TUG test, functional activity, and ADL scores compared to patients in the RGwo-mHealth or CG. Reduction in knee pain and improvement in mobility, functional activity, and ADL were greater among patients in the RGw-mHealth compared with the RGwo-mHealth or CG. Trial registration National Medical Research Registry: NMRR-20-1094-52911. Date of registration: 05-05-2020. URL: https://www.nmrr.gov.my.
Introduction
Osteoarthritis (OA) places a considerable burden on patients' quality of life and medical care [1]. In OA, the knee is the most commonly affected weight-bearing joint, with the cardinal symptoms of pain and loss of function [2,3]. In 2015, knee OA was the most frequently diagnosed type of OA and ranked as the thirteenth leading cause of disability globally [4]. Knee OA is an active disease process of joint destruction driven by proinflammatory and biomechanical factors [5]. Knee OA is the most frequent cause of mobility dependency [6] and is highly prevalent in overweight and obese individuals [7].
Patients with knee OA have 20-40% weaker relative quadriceps strength compared to control subjects [8,9]. Weakness of the quadriceps muscles precedes the onset of knee OA and could therefore increase the risk of disease development, particularly in women [10]. The Ottawa Panel found evidence to support the use of therapeutic exercises, especially strengthening exercises and general physical activity, combined with manual therapy or alone, for the improvement of pain and functional characteristics in OA patients [11]. The American College of Rheumatology Foundation published a guideline in which physical activity is recommended as a core component of knee OA management [12]. Different trials of exercise and physical activity-based interventions for the treatment of knee OA have reported improvement in knee pain, function, and other outcomes among knee OA patients [13]. A recent systematic review on non-pharmacological interventions for treating symptoms of knee OA in overweight or obese patients concluded that strengthening exercise plays a vital role in relieving knee pain and improving function [14]. Non-pharmacological interventions, primarily strengthening exercise and, more recently, the strengthening exercises of the lower limb rehabilitation protocol (LLRP) in non-weight-bearing positions, are recommended as the first line of treatment among overweight or obese knee OA patients [15].
A recent systematic review found that mobile health application (mHealth app) users were more satisfied with managing their health than those receiving conventional care, and mHealth app users have reported a positive impact on health outcomes and health-related behaviors [16]. Smartphone mHealth apps have the potential to play an important role in supporting personal health management [17] and to increase access to healthcare services [18]. A study reported that consistent contact via phone can improve the clinical status of knee OA patients [19]. Rehabilitation combined with mHealth may provide more objective data than the standard rehabilitation approaches used today to treat overweight and obese knee OA patients. However, there is a gap in knowledge and a dearth of information regarding whether mHealth can improve the effects of the LLRP among overweight and obese knee OA patients. Hence, the current randomized controlled trial (RCT) investigated the effects of the LLRP combined with mHealth on knee pain, mobility, functional activity, and activities of daily living (ADL) among overweight and obese knee OA patients. The novelty of the current study rests on two factors: firstly, reminders were provided as WhatsApp messages, and secondly, the researchers designed a LLRP to treat overweight and obese knee OA patients. The training sessions of the LLRP consist of strengthening exercises for the major muscle groups of the lower limbs in non-weight-bearing positions, to reduce the mechanical load on the knee.
Study design and setting
The current study was a single-blind RCT of 3-month duration involving patients with knee OA who were overweight and obese. The study was conducted in the Teaching Bay of Rehmatul-Lil-Alameen Postgraduate Institute of Cardiology (RAIC), Punjab Employees Social Security Institution (PESSI), between February and November 2020. The study was approved by the Ethical Committee of RAIC PESSI with approval number RAIC PESSI/Estt/2020/33, and the trial was registered in the National Medical Research Registry, Malaysia, with ID NMRR-20-1094-52911. A pre-defined questionnaire of inclusion and exclusion criteria was used for screening of the patients. Written informed consent was obtained from all patients before participation in the study. Allowing for possible dropout (mortality), a sample size of 114 patients for the three groups was decided (n = 38 per group).
Study patients' recruitment and selection
The inclusion criteria were as follows: both males and females; age between 45 and 60 years; overweight or obese; diagnosed with grade 2 (mild) or grade 3 (moderate) OA according to the Kellgren and Lawrence radiographic grading [20] in one or both knees by an orthopaedic surgeon; symptoms of knee OA for more than 3 months; familiar with the WhatsApp application; and residing in the urban community of Lahore, Pakistan. Exclusion criteria were one or more of the following: diagnosed flat foot or spinal deformities; history of cardiac or hormonal problems; previous surgery of the knee(s); or corticosteroid injection of the knee(s) within the last 6 months. Eligibility was determined using a predefined questionnaire of inclusion and exclusion criteria.
Patients were recruited using convenience sampling through active recruitment strategies involving urban political and welfare organizations. The list of patients with knee OA in the studied area was obtained from the Welfare Organization after explaining the potential benefits of study participation. Two study coordinators prepared the list of potential patients in the recruitment area. After obtaining the list of potential patients, the researcher arranged a meeting with them by phone call. The meeting was held at the Teaching Bay of RAIC, PESSI, Lahore, Pakistan, in the presence of a medical specialist. Patients were screened for eligibility to participate in the study, and only patients fulfilling the inclusion and exclusion criteria were invited to participate. The experimental procedures, risks, and benefits associated with the study were explained (verbally and through participant information sheets) to all patients prior to obtaining written informed consent.
Randomization
After completing the screening, the selected patients were randomized into three groups: the Rehabilitation Group with mHealth (RGw-mHealth), the Rehabilitation Group without mHealth (RGwo-mHealth) and the Control Group (CG) (Fig. 1), using a simple randomization technique (computer-generated numbers). Each group consisted of 38 patients. All patients were also given a diary and asked to record their completion of the intervention based on the leaflets.
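The paper states only that a computer-generated number was used to assign patients, with 38 per group. The following Python sketch illustrates one way such an allocation could be produced: shuffling a list that contains 38 slots per group guarantees equal group sizes, consistent with the reported n = 38 per group. The group labels and the fixed seed are illustrative assumptions, not details taken from the trial.

```python
import random

# Illustrative computer-generated allocation of 114 patients into three equal groups.
GROUPS = ["RGw-mHealth", "RGwo-mHealth", "CG"]

def allocate(n_per_group=38, seed=2020):
    allocation = GROUPS * n_per_group          # 38 slots per group -> 114 slots in total
    rng = random.Random(seed)                  # seeded only so the example is reproducible
    rng.shuffle(allocation)                    # random order of group assignments
    # Patient IDs 1..114 are mapped to the shuffled group sequence.
    return {patient_id: group for patient_id, group in enumerate(allocation, start=1)}

if __name__ == "__main__":
    table = allocate()
    print(table[1], table[2], table[3])        # first three assignments
```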
Blinding and allocation
The coordinators collecting data were individuals independent of the trial and were unaware of the group allocation. Different coordinators were used at the baseline and post-test evaluations. Individuals performing the statistical analysis were kept blinded by labelling the groups with non-identifying terms (such as X and Y).
Research procedures
Rehabilitation group with mHealth (RGw-mHealth)
Patients in the RGw-mHealth were prescribed the LLRP + IDC combined with mHealth (LLRP + IDC-mHealth) intervention. The LLRP focused on strengthening exercises for the lower limbs in non-weight-bearing sitting or lying positions (Additional file 1) to reduce mechanical pressure on the knee. In the current study, the researcher, with the help of several experts in the field of rehabilitation, designed the LLRP (Additional file 1) to be used in the RCT. The LLRP was designed as a progressive exercise program that begins at a low intensity and gradually increases in frequency, intensity, and duration to a high intensity, to ensure that patients could cope with the intervention.
The training program started with a ten-minute warm-up of whole-body range of motion (ROM) and dynamic stretching exercises. Patients performed ten repetitions of ROM for each muscle group and 5 repetitions of dynamic stretching for each muscle group as part of the warm-up. A study demonstrated that dynamic stretching is recommended for warm-up to avoid a decrease in strength and performance [21], whereas static stretching used as part of a warm-up immediately prior to exercise impairs muscle strength [22]. After the warm-up, the patients performed the strengthening exercises of the lower limbs for 3-months (Additional file 1).
Patients were advised to follow the IDC, which included advice on general guidelines for mobility and healthy eating (Table 1). The IDC was translated into the Urdu language by two language experts to ensure better patient understanding, based on a recent pilot study [15]. After completing the strengthening exercises, the patients performed a ten-minute cool-down of whole-body ROM and static stretching exercises. Patients performed ten repetitions of ROM for each muscle group and 3 repetitions of static stretching for each muscle group as part of the cool-down. A study explained that after two to four repetitions of static stretching, there is no further increase in muscle elongation [23].
Additionally, patients in the RGw-mHealth group received regular reminders to carry out the LLRP through mHealth in the form of WhatsApp messages. Two text messages per day, three days a week, were sent to patients in the RGw-mHealth for a period of 3-months throughout the study period, for a total of 72 text messages. The text messages were sent between 7:00 and 9:00 a.m. and between 5:00 and 7:00 p.m. on Wednesdays, Fridays, and Sundays. A study reported that sending text messages in the morning ensures that the patients have enough time to plan and do exercise during the day [24]. In addition, every patient was actively followed up by phone at least once a week to ensure that they read the messages and performed the intervention.
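As a rough illustration of the reminder schedule described above (two messages on each of Wednesday, Friday and Sunday over 12 weeks, giving the reported total of 72 messages), the sketch below enumerates the sending slots. The start date and the exact send times within the stated morning and evening windows are assumptions made only for the example.

```python
from datetime import date, datetime, time, timedelta

# Enumerate reminder slots: Wed/Fri/Sun, one morning and one evening message,
# over 12 weeks (2 x 3 x 12 = 72 messages, matching the total reported).
SEND_DAYS = {2, 4, 6}                     # Wednesday, Friday, Sunday (Monday = 0)
SEND_TIMES = [time(8, 0), time(18, 0)]    # assumed times inside the 7-9 a.m. / 5-7 p.m. windows

def reminder_slots(start=date(2020, 2, 5), weeks=12):
    slots = []
    for offset in range(weeks * 7):
        day = start + timedelta(days=offset)
        if day.weekday() in SEND_DAYS:
            slots.extend(datetime.combine(day, t) for t in SEND_TIMES)
    return slots

if __name__ == "__main__":
    slots = reminder_slots()
    print(len(slots))                     # 72
    print(slots[0], slots[-1])            # first and last reminder in the schedule
```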
Rehabilitation group without mobile health (RGwo-mHealth)
Patients in the RGwo-mHealth received the LLRP + IDC intervention, but did not receive any reminders to carry out the LLRP exercises. Patients in the RGwo-mHealth were trained on how to perform the LLRP 3 times a week and to adhere to the IDC for 3-months at home. Each training session started with ten minutes of warm-up, followed by forty-five to sixty minutes of strengthening exercises for the lower limbs, and ended with ten minutes of cool-down (Additional file 1).
Table 1. Instructions of daily care (IDC)
Sitting: When there is an option of sitting rather than standing, prefer sitting. Prefer to sit on a high stool or chair rather than at a low level.
Standing from sitting: When standing up from a sitting position, first sit at the edge of the bed, chair or stool with the feet on the ground at hip level. Use the hands to push up from the bed, chair or stool.
Walking: Do not walk, jog or run as an exercise plan. A walking stick can be used in the hand opposite the affected knee. If both knees are affected, a walker can be used. Use of a knee brace and jogging shoes with well-cushioned soles during walking is highly recommended.
Stair climbing: Avoid stair climbing. If stair climbing is necessary, support yourself on the side rails with your hands, placing the affected foot first on a stair step and then the unaffected foot on the same step.
Working: Prefer working on a high stool or chair.
Body weight: Try to reduce your weight by avoiding sugary foods, drinks and high-fat foods. Eat mostly plant-based foods. Add omega-3 fatty acids to your daily diet.
Control group (CG)
Patients in the CG were only advised to follow the IDC intervention for the duration of 3-months (Table 1). No reminders through the mHealth application (WhatsApp) were sent to patients in the CG. The feasibility and acceptability of the IDC among knee OA participants have been demonstrated in a recent RCT [25].
Measurements and procedures
All patients were assessed at enrolment (baseline) and again at the 3-month follow-up. Patient assessment included demographics, exercise adherence, and the primary and secondary outcome measures of interest. Demographic information, including age, gender, educational status, and marital status, was recorded. Patients' self-reported exercise adherence was collected after 3-months of intervention and was measured using a numerical rating scale (NRS) ranging from zero = never performed the intervention to 10 = always performed the intervention. NRSs have also been widely used in other trials [26,27]. Outcome measures gathered were categorized into primary and secondary outcome measures.
Primary outcome measure
The primary outcome measure was knee pain, assessed using the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), which has already been adapted and validated. Each WOMAC item is scored from 0 to 4 on a Likert-type scale. The researcher used the pain section of the WOMAC questionnaire, which contains five items for assessing knee pain. The total score for the 5 items ranges from 0 to 20: the higher the score, the worse the pain [28].
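To make the scoring convention concrete, the sketch below sums the five pain items (each scored 0-4) into the 0-20 WOMAC pain score used in this trial. The item labels follow the conventional WOMAC pain items and are given only for illustration, since the paper does not enumerate them.

```python
# Minimal sketch of the WOMAC pain subscale total (5 items, each 0-4, total 0-20).
# Item labels are illustrative; the paper itself does not list the five items.
PAIN_ITEMS = ["walking", "stairs", "at_night", "sitting_or_lying", "standing"]

def womac_pain_total(item_scores: dict) -> int:
    total = 0
    for item in PAIN_ITEMS:
        score = item_scores[item]
        if not 0 <= score <= 4:
            raise ValueError(f"{item}: WOMAC items are scored 0-4, got {score}")
        total += score
    return total  # 0 (no pain) to 20 (worst pain)

if __name__ == "__main__":
    example = {"walking": 2, "stairs": 3, "at_night": 1, "sitting_or_lying": 1, "standing": 2}
    print(womac_pain_total(example))  # 9
```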
Secondary outcome measures
The secondary outcome measures were mobility, functional activity, and ADL. Patients' mobility was assessed using the Timed Up and Go (TUG) test as described by Podsiadlo and Richardson [29]. The patients were observed and timed while they rose from an armchair, walked three meters, turned, walked back, and sat down again. The assessment was performed in accordance with the technique described in the literature [29].
The Patient-Specific Functional Scale (PSFS) was used for the measurement of functional activity. This is a valid and reliable tool that allows patients to report on their function at baseline and follow-up [30]. The patients were asked to identify up to three activities that were difficult to perform at enrolment. They were then asked to rate each of their identified activities on a numerical scale ranging from 0 = 'unable to perform the activity' to 10 = 'able to perform the activity'. After 12 weeks of intervention, the patients were again asked to rate the same difficult activities they had identified at baseline. The mean of the scores for the nominated activities was used for the analysis, with higher scores reflecting greater function.
The Katz Index of Independence in ADL was used to assess patients' ADL. The Katz ADL assesses six functions: feeding, continence, toileting, dressing, bathing, and transferring. Each activity is scored as either zero or one point. One point indicates independence (no supervision, direction, or personal assistance required), while a score of zero indicates that a participant is dependent (requiring supervision, direction, personal assistance, or total care) in that activity. The overall score ranges from zero (patient very dependent) to six (patient fully independent) [31]. In the Katz ADL, a total score of 2 or less indicates severe functional impairment, 4 indicates moderate impairment, and a score of 6 indicates full functional independence in ADL [32].
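A minimal sketch of the Katz ADL scoring described above: six binary items summed to a 0-6 total, with the interpretation bands reported in the text. Treating scores of 3-5 as moderate impairment is a simplification, since the source only names the anchor values 2, 4 and 6.

```python
# Katz Index of Independence in ADL: six activities, 1 = independent, 0 = dependent.
KATZ_ACTIVITIES = ["bathing", "dressing", "toileting", "transferring", "continence", "feeding"]

def katz_score(independent: dict) -> int:
    return sum(1 if independent[a] else 0 for a in KATZ_ACTIVITIES)  # total 0-6

def katz_interpretation(score: int) -> str:
    # Bands as reported in the text: <=2 severe impairment, ~4 moderate, 6 full independence.
    if score <= 2:
        return "severe functional impairment"
    if score == 6:
        return "full functional independence"
    return "moderate impairment"   # simplification for scores 3-5

if __name__ == "__main__":
    patient = {"bathing": 1, "dressing": 1, "toileting": 0, "transferring": 1,
               "continence": 1, "feeding": 1}
    s = katz_score(patient)
    print(s, katz_interpretation(s))  # 5 moderate impairment
```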
Statistical analysis
The Statistical Package for the Social Sciences, version 22, Chicago, IL, was used to analyze the data. Continuous variables were presented as mean (standard deviation [SD]) based on data distribution. The Shapiro-Wilk test was used to assess the normality of all variables. Categorical variables were presented as frequencies (n) and percentages (%). For categorical demographic variables, one-way analysis of variance (ANOVA) was used to compare differences between groups. Since all data were normally distributed, the paired samples t-test was used to analyze differences between the baseline and 3-month measurements within the groups.
The overall treatment effects on change in clinical outcome measures were estimated using the One Way ANOVA (unadjusted results) and Analyses of Covariance (ANCOVA, adjusted results) for mean changes (95% confidence interval [CI]) from baseline in the continuous outcome data. ANCOVA should be the preferred method for the analysis of pretest-posttest data. The use of ANCOVA in a randomized design is to reduce error variance, because the random assignment of subjects to groups guards against systematic bias [33]. The ANCOVA model included the changes as the dependent variable, with group as a main effect and the baseline scores as an additional covariate. The purpose of using the pretest (baseline) scores as a covariate in ANCOVA with a pretest-posttest design is to reduce the error variance and eliminate systematic bias [33]. The pairwise comparisons between groups were estimated using Bonferroni post hoc analysis. The value of P < 0.05 was considered statistically significant.
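The adjusted analysis described above (change score as the dependent variable, group as a factor, baseline score as a covariate, with Bonferroni-adjusted pairwise contrasts) could be reproduced along the lines of the statsmodels/SciPy sketch below. The column names in the assumed data frame ("group", "baseline", "change") are illustrative, and the pairwise step is a simplified Bonferroni correction of t-tests on the change scores rather than covariate-adjusted contrasts.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from itertools import combinations
from scipy import stats

# ANCOVA sketch: change from baseline as the dependent variable, group as a
# main effect and the baseline score as a covariate, mirroring the model
# described in the text.
def ancova(df: pd.DataFrame):
    model = smf.ols("change ~ C(group) + baseline", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)   # adjusted (Type II) effects
    return model, table

def bonferroni_pairwise(df: pd.DataFrame):
    # Simplified Bonferroni-adjusted pairwise t-tests on the change scores.
    groups = df["group"].unique()
    pairs = list(combinations(groups, 2))
    results = {}
    for g1, g2 in pairs:
        t, p = stats.ttest_ind(df.loc[df["group"] == g1, "change"],
                               df.loc[df["group"] == g2, "change"])
        results[(g1, g2)] = min(p * len(pairs), 1.0)  # Bonferroni correction
    return results
```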
Results
There were 114 patients with knee OA enrolled in the 3-month trial. Patients were randomized to the LLRP + IDC-mHealth intervention (n = 38), the LLRP + IDC intervention (n = 38), and the IDC intervention (n = 38). The retention rate in all groups was 84%. The safety coordinator determined that two serious adverse events were unrelated to the study; both patients were in the RGw-mHealth, one patient had appendix surgery, and the other underwent gallbladder surgery. Patients in the CG had one non-serious adverse event of muscle spasm related to the study. A total of 18 patients (6 LLRP + IDC-mHealth intervention, 6 LLRP + IDC intervention, and 6 IDC intervention) did not complete the study, resulting in 96 patients (32 LLRP + IDC-mHealth intervention, 32 LLRP + IDC intervention and 32 IDC intervention) included in the analysis of the WOMAC pain score for knee pain, the TUG test for mobility, the Katz Index of Independence for ADL and the PSFS for functional activity. Figure 1 demonstrates the study flow chart, including the reasons given by patients who did not complete the study.
The patients' baseline demographics and clinical outcome measures are described in Table 2. No significant differences were observed in the baseline demographic characteristics between the three groups. There was no statistically significant difference in the Katz Index of Independence for ADL or the PSFS for functional activity scores between the groups, but a significant difference in the WOMAC pain and TUG scores was observed at baseline. No significant differences were observed between patients who completed the study and those who withdrew on baseline demographic and clinical outcome measures (Table 3). Means and 95% CIs of the WOMAC pain, TUG test, Katz ADL and PSFS scores at baseline and 3-month follow-up across the three groups are shown in Fig. 2.
After participation in 3-months of intervention, a statistically significant improvement compared to baseline was observed for knee pain, mobility, functional activity, and ADL scores (p < 0.05) in the RGw-mHealth and RGwo-mHealth. In the CG, the knee pain score was also significantly improved (p < 0.05) (Table 4). The mean changes in knee pain scores at 3-months from baseline were 2.54 (95% CI 1.99, 3.09), 1.47 (95% CI 0.93, 2.01) and 0.37 (95% CI −0.16, 0.90) for patients in the RGw-mHealth, RGwo-mHealth and CG, respectively (Table 5). The pairwise between-group comparisons of the WOMAC pain score at 3-months revealed that patients in the RGw-mHealth demonstrated a statistically significantly higher mean change in the WOMAC pain score compared to patients in the RGwo-mHealth (p = 0.022) and CG (p < 0.001). Additionally, there was also a statistically significantly higher mean change in the WOMAC pain score in patients of the RGwo-mHealth compared to the CG (p = 0.013) (Table 6).
The mean changes in TUG test scores at 3-months from baseline were 2.64 s (95% CI 2.26, 3.02 s), 1.34 s (95% CI 0.97, 1.70 s), and 0.29 s (95% CI −0.06, 0.65 s) for patients in the RGw-mHealth, RGwo-mHealth and CG, respectively (Table 5). As indicated by the overall ANCOVA, there was a statistically significant difference in the mean change in the TUG test scores between groups after 3-months of intervention (p < 0.001). The pairwise between-group comparisons of the TUG test score at 3-months revealed that patients in the RGw-mHealth demonstrated a significantly higher mean change in the TUG test score compared to the RGwo-mHealth and CG (p < 0.001). Additionally, the mean change in the TUG test score among patients in the RGwo-mHealth was significantly higher compared to the CG (p < 0.001) (Table 6).
Discussion
To the best of our knowledge, this was the first RCT to investigate the effectiveness of the LLRP combined with mHealth on knee pain, mobility, functional activity, and ADL among knee OA patients who were overweight and obese. In this study, patients who were assigned to the RGw-mHealth had significantly less pain, faster mobility, better functional activity, and better ADL scores over the 3-month period than patients in the RGwo-mHealth and CG. The results indicated that patients in the RGw-mHealth, who received additional reminders in the form of periodic manual WhatsApp messages, showed greater improvements in reducing knee pain and improving mobility, functional activity and ADL than did patients in the RGwo-mHealth or CG. In the current study, the RGw-mHealth produced a clinically relevant reduction in knee pain and improvement in mobility, functional activity, and ADL, and this may explain why clinical improvement of outcome measures occurred. A potential reason why the RGw-mHealth had a significant result is that patient adherence to the LLRP combined with mHealth was good.
Fig. 2 Mean and 95% CI of the outcome measures across the three groups: (a) mean and 95% CI of the WOMAC pain score at baseline and 3-month follow-up; (b) mean and 95% CI of the TUG test score at baseline and 3-month follow-up; (c) mean and 95% CI of the Katz ADL score at baseline and 3-month follow-up; (d) mean and 95% CI of the PSFS score at baseline and 3-month follow-up. WOMAC = Western Ontario and McMaster Universities Osteoarthritis Index; TUG = Timed Up and Go; ADL = Activities of daily living; PSFS = Patient specific functional scale; CI = Confidence interval
Two systematic reviews of randomized controlled trials reported that exercise therapy reduces pain in OA of the knee [34,35]. In the current study, the pain score was significantly reduced in all three groups (p < 0.05), but a marked reduction in pain score was reported by the patients in the RGw-mHealth (p < 0.001). This may be due to the mHealth reminders that were sent to the patients of the RGw-mHealth, which encouraged the patients to follow their intervention more consistently.
A randomized controlled trial reported significantly greater improvement in mobility scores following a dietary intervention combined with an exercise program compared with either a dietary or an exercise program alone [36]. Moreover, mobility improvement was also reported in the diet-plus-exercise group of the Arthritis, Diet, and Activity Promotion Trial [37]. The current study showed that the patients in all three groups reported a reduction in TUG test scores; however, a statistically significant improvement in the mobility score was only observed in patients of the RGw-mHealth and RGwo-mHealth. Rating of the Katz ADL based on the observation of health care professionals has been recommended [38]; in the current study, a trained health professional recorded the Katz ADL score. The patients in the RGw-mHealth and RGwo-mHealth, unlike the CG, reported statistically significant improvement in the ADL score. A recent study demonstrated that a combination of dietary weight loss and exercise intervention was consistently better in improving a combination of performance and functional outcomes among participants with knee OA compared with exercise alone, diet alone, or a control group [37]. Many trials of different physical activity and exercise-based interventions have reported improvement of function among knee OA patients [39]. The current study demonstrated a statistically significant improvement in functional activity in patients of the RGw-mHealth and RGwo-mHealth, but not in the CG. It is noteworthy that the improvement in functional activity was greater among patients in the RGw-mHealth than in the RGwo-mHealth and CG. This may be due to the mHealth reminders that were sent to the patients of the RGw-mHealth.
Similarly, the results of the current study indicated that patients in the RGw-mHealth reported greater adherence to their intervention compared to patients in the RGwo-mHealth or CG. The patients who received mHealth reminders to perform their intervention in the current study reported better adherence than patients in a previous study who received home exercise programs delivered through an app with remote support [13].
The clinically meaningful results for the outcome measures in the current study could have been mediated by a couple of factors. Apart from the mHealth reminders, the LLRP itself helped to produce the clinically significant improvements in pain, mobility, functional activity, and ADL.
Based on these findings, this LLRP + IDC-mHealth intervention is expected to be more effective in terms of reducing pain, improving mobility, functional activity, and ADL than any other rehabilitation intervention among knee OA patients who are overweight and obese. In addition, this intervention is easy to use in the home care setting and can also be used for hemiplegia, paraplegia, or wheelchair patients with lower limb weakness.
Study limitations
This study has several limitations. Firstly, the current study was conducted in a single centre. Secondly, patients were followed up only until 3-months; hence, the long-term effects of the interventions cannot be ascertained. Thus, further research across multiple centres and with long-term follow-up is required to confirm the results of the LLRP + IDC-mHealth intervention. Thirdly, psychosocial, physical activity, and comorbidity factors may influence the outcomes. Therefore, further research considering these additional factors is required to confirm the findings of the study. | 2021-10-25T13:13:57.982Z | 2021-10-24T00:00:00.000 | {
"year": 2021,
"sha1": "ffffa11b54974640029c8cf05b2e10ec9070584a",
"oa_license": "CCBY",
"oa_url": "https://advancesinrheumatology.biomedcentral.com/track/pdf/10.1186/s42358-021-00221-4",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bb517b7f9068f89a186ff88964191b9528b9791f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |